Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote...structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive...1) First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination, yielding the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
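As background for the complexity argument above, a minimal time-domain FXLMS loop can be sketched as follows. This is a generic illustration, not the paper's frequency-domain method: the paths, filter length, and step size are hypothetical, and the secondary-path estimate is assumed perfect.

```python
import numpy as np

# Hypothetical plant: primary path p, secondary path s, estimate s_hat.
rng = np.random.default_rng(0)
n = 4000
x = rng.standard_normal(n)              # reference (noise) signal
p = np.array([0.0, 0.9, 0.6, 0.1])      # primary acoustic path (assumed)
s = np.array([0.5, 0.3])                # secondary path (assumed)
s_hat = s.copy()                        # assume a perfect secondary-path estimate

L = 8                                   # ANC control filter length
w = np.zeros(L)                         # adaptive filter weights
mu = 0.01                               # step size

d = np.convolve(x, p)[:n]               # disturbance at the error microphone
xf = np.convolve(x, s_hat)[:n]          # filtered-x signal

xbuf = np.zeros(L)                      # recent reference samples
fbuf = np.zeros(L)                      # recent filtered-x samples
ybuf = np.zeros(len(s))                 # recent anti-noise output samples
err = np.zeros(n)
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                        # anti-noise output: O(L) per sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[i] - s @ ybuf                 # residual measured at the error sensor
    fbuf = np.roll(fbuf, 1); fbuf[0] = xf[i]
    w += mu * e * fbuf                  # filtered-x LMS update: O(L) per sample
    err[i] = e
```

The two O(L) inner products per sample are what become prohibitive when the filters are long at high sampling rates, which is the motivation for the FFT-based partitioned block reformulation described in the abstract.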
Topology Optimization of Lightweight Lattice Structural Composites Inspired by Cuttlefish Bone
NASA Astrophysics Data System (ADS)
Hu, Zhong; Gadipudi, Varun Kumar; Salem, David R.
2018-03-01
Lattice structural composites are of great interest to various industries where lightweight multifunctionality is important, especially aerospace. However, strong coupling among the composition, microstructure, porous topology, and fabrication of such materials impedes conventional trial-and-error experimental development. In this work, a discontinuous carbon fiber reinforced polymer matrix composite was adopted for structural design. A reliable and robust design approach for developing lightweight multifunctional lattice structural composites was proposed, inspired by biomimetics and based on topology optimization. Three-dimensional periodic lattice blocks were initially designed, inspired by the cuttlefish bone microstructure. The topologies of the three-dimensional periodic blocks were further optimized by computer modeling, and the mechanical properties of the topology-optimized lightweight lattice structures were then characterized computationally. The lattice structures with optimal performance were identified.
A rate-constrained fast full-search algorithm based on block sum pyramid.
Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom
2005-03-01
This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. The algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
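The successive elimination principle underlying such pyramid-based full search can be sketched as follows. This is a simplified illustration: the rate term is omitted, only a single pyramid level of block sums is used, and the block size, search range, and names are invented for the example.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def sea_search(cur, ref, bx, by, B=8, sr=4):
    """Full search over a +/-sr window, pruned with the block-sum bound
    |sum(cur) - sum(cand)| <= SAD(cur, cand), i.e. one level of the
    sum pyramid: candidates whose bound already exceeds the best SAD
    cannot win and are skipped without computing their SAD."""
    cur_blk = cur[by:by+B, bx:bx+B]
    cur_sum = int(cur_blk.sum())
    best, best_mv, checked = None, (0, 0), 0
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue
            cand = ref[y:y+B, x:x+B]
            if best is not None and abs(cur_sum - int(cand.sum())) >= best:
                continue          # bound proves this candidate cannot improve
            checked += 1
            d = sad(cur_blk, cand)
            if best is None or d < best:
                best, best_mv = d, (dx, dy)
    return best_mv, best, checked
```

The search returns the same motion vector as an exhaustive scan, but `checked` (the number of SAD evaluations actually performed) is typically far below the full window size; a rate-constrained variant would add a Lagrangian term for the motion-vector rate to both the cost and the bound.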
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. A Visual Analogue Scale was used to assess pain after anesthetic infiltration, and patient satisfaction was evaluated using a Likert scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Mean pain scores were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 versus 2.86 ± 1.96, respectively, but the difference was not statistically significant (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique thus showed lower mean pain perception, but the difference from the conventional technique was not statistically significant.
Determining the Mechanical Properties of Lattice Block Structures
NASA Technical Reports Server (NTRS)
Wilmoth, Nathan
2013-01-01
Lattice block structures and shape memory alloys possess several traits ideal for solving intriguing new engineering problems in industries such as aerospace, military, and transportation. Recent testing at the NASA Glenn Research Center has investigated the material properties of lattice block structures cast from a conventional aerospace titanium alloy as well as lattice block structures cast from nickel-titanium shape memory alloy. The lattice block structures for both materials were sectioned into smaller subelements for tension and compression testing. The results from the cast conventional titanium material showed that the expected mechanical properties were maintained. The shape memory alloy material was found to be extremely brittle from the casting process and only compression testing was completed. Future shape memory alloy lattice block structures will utilize an adjusted material composition that will provide a better quality casting. The testing effort resulted in baseline mechanical property data from the conventional titanium material for comparison to shape memory alloy materials once suitable castings are available.
East Cameron Block 270, offshore Louisiana: a Pleistocene field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, D.S.; Sutley, C.E.; Berlitz, R.E.
1976-01-01
Exploration of the Plio-Pleistocene in the Gulf of Mexico since 1970 has led to the discovery of significant hydrocarbon reserves. One of the better gas fields found to date has been the East Cameron Block 270 field, offshore Louisiana. Utilization of a coordinated exploitation plan with Schlumberger Offshore Services has allowed Pennzoil Co., as operator, to develop and put the Block 270 field on production in minimum time. The structure at Block 270 field is a north-south-trending, faulted nose at 6000 ft (1825 m). At the depth of the "G" sandstone (8700 ft or 2650 m), the structure is closed; it is elongated north-south and dips in all directions from the Block 270 area. Closure is the result of contemporaneous growth of the east-bounding regional fault. Structural and stratigraphic interpretations from dipmeters were used to determine the most favorable offset locations. The producing zones consist of various combinations of bar-like, channel-like, and distributary-front sandstones. The sediment source for most of the producing zones was southwest of the area, except for two zones which derived their sediments from the north through a system of channels paralleling the east-bounding fault. Computed logs were used to convert conventional logging measurements into a more readily usable form for evaluation. The computed results were used for reserve calculations, reservoir-quality determinations, and confirmation of depositional environments as determined from other sources.
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).
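The merge step can be illustrated with a simplified sketch. Note the simplifications: square blocks and only the 4-to-1 merge are shown, whereas the paper uses equilateral triangle blocks and also allows 3-to-1 and 2-to-1 merges; the data layout is invented for the example.

```python
def build_quadtree(mv, y=0, x=0, size=None):
    """Recursively merge sibling blocks that share one motion vector.

    mv: a 2^k x 2^k grid of per-block motion vectors (tuples).
    Returns either a single motion vector (a merged region) or a list
    of 4 child subtrees in [NW, NE, SW, SE] order.
    """
    if size is None:
        size = len(mv)
    if size == 1:
        return mv[y][x]
    h = size // 2
    kids = [build_quadtree(mv, y, x, h),     build_quadtree(mv, y, x + h, h),
            build_quadtree(mv, y + h, x, h), build_quadtree(mv, y + h, x + h, h)]
    # 4-to-1 merge: all four children are leaves with identical vectors.
    if all(not isinstance(k, list) for k in kids) and len(set(kids)) == 1:
        return kids[0]
    return kids
```

Regions of uniform motion collapse to a single node, so fewer motion vectors need to be fitted and coded; the partial (3-to-1, 2-to-1) merges of the paper refine this by also collapsing subsets of siblings.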
Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures
Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean; Cheung, Kenneth C
2017-01-01
We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures. PMID:28289574
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain some negligible elements. As a result, there is an optimal block size that minimizes the CPU time by balancing these two effects. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters, the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
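The blocked multiplication idea can be sketched as follows. This is a minimal illustration with a dictionary-of-blocks storage format invented here; real linear-scaling codes use more elaborate layouts, and NumPy's `@` dispatches each dense block product to a BLAS GEMM, which is the source of the speedup over element-wise sparse kernels.

```python
import numpy as np

def block_sparse_matmul(A_blocks, B_blocks, nb, bs):
    """Multiply two block-sparse matrices stored as {(I, J): dense bs x bs block}
    over an nb x nb grid of blocks. Blocks absent from the dictionary are
    treated as zero and skipped; each surviving product is a dense GEMM."""
    C_blocks = {}
    for (I, K), Ablk in A_blocks.items():
        for J in range(nb):
            Bblk = B_blocks.get((K, J))
            if Bblk is None:
                continue              # zero block: no work, no fill-in
            C = C_blocks.setdefault((I, J), np.zeros((bs, bs)))
            C += Ablk @ Bblk          # dense BLAS call on a small block
    return C_blocks
```

The trade-off the abstract describes is visible here: larger `bs` makes each GEMM more efficient, but forces more negligible elements to be stored and multiplied inside otherwise sparse blocks.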
Hybrid computing using a neural network with dynamic external memory.
Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis
2016-10-27
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
NASA Technical Reports Server (NTRS)
Whittenberger, J. Daniel
2001-01-01
Present structural concepts for hot static structures are conventional "sheet & stringer" or truss core construction. More weight-efficient concepts such as honeycomb and lattice block are being investigated, in combination with both conventional superalloys and TiAl. Development efforts for components made from TiAl sheet are centered on lower cost methods for sheet and foil production, plus alloy development for higher temperature capability. A low-cost casting technology recently developed for aluminum and steel lattice blocks has demonstrated the required higher strength and stiffness, with weight efficiency approaching honeycombs. The current effort is based on extending the temperature capability by developing lattice block materials made from IN-718 and Mar-M247.
Gkionis, Konstantinos; Kruse, Holger; Šponer, Jiří
2016-04-12
Modern dispersion-corrected DFT methods have made it possible to perform reliable QM studies on complete nucleic acid (NA) building blocks having hundreds of atoms. Such calculations, although still limited to investigations of potential energy surfaces, enhance the portfolio of computational methods applicable to NAs and offer considerably more accurate intrinsic descriptions of NAs than standard MM. However, in practice such calculations are hampered by the use of implicit solvent environments and truncation of the systems. Conventional QM optimizations are spoiled by spurious intramolecular interactions and severe structural deformations. Here we compare two approaches designed to suppress such artifacts: partially restrained continuum solvent QM and explicit solvent QM/MM optimizations. We report geometry relaxations of a set of diverse double-quartet guanine quadruplex (GQ) DNA stems. Both methods provide neat structures without major artifacts. However, each one also has distinct weaknesses. In restrained optimizations, all errors in the target geometries (i.e., low-resolution X-ray and NMR structures) are transferred to the optimized geometries. In QM/MM, the initial solvent configuration causes some heterogeneity in the geometries. Nevertheless, both approaches represent a decisive step forward compared to conventional optimizations. We refine earlier computations that revealed sizable differences in the relative energies of GQ stems computed with AMBER MM and QM. We also explore the dependence of the QM/MM results on the applied computational protocol.
Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives
NASA Astrophysics Data System (ADS)
Sengupta, Abhronil; Roy, Kaushik
2018-03-01
“Spintronics” refers to the understanding of the physics of electron spin-related phenomena. While most of the significant advancements in this field have been driven primarily by memory, recent research has demonstrated that various facets of the underlying physics of spin transport and manipulation can directly mimic the functionalities of the computational primitives in neuromorphic computation, i.e., the neurons and synapses. Given the potential of these spintronic devices to implement bio-mimetic computations at very low terminal voltages, several spin-device structures have been proposed as the core building blocks of neuromorphic circuits and systems to implement brain-inspired computing. Such an approach is expected to play a key role in circumventing the problems of ever-increasing power dissipation and hardware requirements for implementing neuro-inspired algorithms in conventional digital CMOS technology. Perspectives on spin-enabled neuromorphic computing, its status, and challenges and future prospects are outlined in this review article.
Program structure-based blocking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.
2017-09-26
Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.
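A toy illustration of the constraint the embodiment describes, i.e. that no instruction block may straddle a prefetching-section boundary. The representation here (a linear instruction list plus per-instruction flags marking the prefetching section) is hypothetical; the actual embodiment operates inside a compiler's intermediate representation.

```python
def block_instructions(instrs, in_prefetch, max_block=4):
    """Group a linear instruction stream into blocks of up to max_block
    instructions, never letting a block cross a prefetching-section
    boundary: in_prefetch[i] marks whether instrs[i] lies in the section,
    so every emitted block is homogeneous in that flag."""
    blocks, cur = [], []
    for ins, flag in zip(instrs, in_prefetch):
        if cur and (len(cur) == max_block or flag != cur_flag):
            blocks.append(cur)
            cur = []
        if not cur:
            cur_flag = flag           # flag shared by everything in this block
        cur.append(ins)
    if cur:
        blocks.append(cur)
    return blocks
```

Blocks inside the prefetching section then contain only prefetching-section instructions, matching the property claimed in the abstract.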
Kuang, Hua; Ma, Wei; Xu, Liguang; Wang, Libing; Xu, Chuanlai
2013-11-19
Polymerase chain reaction (PCR) is an essential tool in biotechnology laboratories and is becoming increasingly important in other areas of research. Extensive data obtained over the last 12 years have shown that the combination of PCR with nanoscale dispersions can resolve issues in the preparation of DNA-based materials that include both inorganic and organic nanoscale components. Unlike conventional DNA hybridization and antibody-antigen complexes, PCR provides a new, effective assembly platform that both increases the yield of DNA-based nanomaterials and allows researchers to program and control assembly with predesigned parameters, including those assisted and automated by computers. As a result, this method allows researchers to optimize the combinatorial selection of the DNA strands for their nanoparticle conjugates. We have developed a PCR approach for producing various nanoscale assemblies including organic motifs, such as small molecules and macromolecules, and inorganic building blocks, such as nanorods (NRs) and metal, semiconductor, and magnetic nanoparticles (NPs). We start with a nanoscale primer and then modify that building block using the automated steps of PCR-based assembly including initialization, denaturation, annealing, extension, final elongation, and final hold. The intermediate steps of denaturation, annealing, and extension are cyclic, and we use computer control so that the assembled superstructures reach their predetermined complexity. The structures assembled using a small number of PCR cycles show a lower polydispersity than similar discrete structures obtained by direct hybridization between the nanoscale building blocks.
Using different building blocks, we assembled the following structural motifs by PCR: (1) discrete nanostructures (NP dimers, NP multimers including trimers, pyramids, tetramers or hexamers, etc.), (2) branched NP superstructures and heterochains, (3) NP satellite-like superstructures, (4) Y-shaped nanostructures and DNA networks, (5) protein-DNA co-assembly structures, and (6) DNA block copolymers including trimers and pentamers. These results affirm that this method can produce a variety of chemical structures and in yields that are tunable. Using PCR-based preparation of DNA-bridged nanostructures, we can program the assembly of the nanoscale blocks through the adjustment of the primer intensity on the assembled units, the number of PCR cycles, or both. The resulting structures are highly complex and diverse and have interesting dynamics and collective properties. Potential applications of these materials include chirooptical materials, probe fabrication, and environmental and biomedical sensors.
Protein based Block Copolymers
Rabotyagova, Olena S.; Cebe, Peggy; Kaplan, David L.
2011-01-01
Advances in genetic engineering have led to the synthesis of protein-based block copolymers with control of chemistry and molecular weight, resulting in unique physical and biological properties. The benefits from incorporating peptide blocks into copolymer designs arise from the fundamental properties of proteins to adopt ordered conformations and to undergo self-assembly, providing control over structure formation at various length scales when compared to conventional block copolymers. This review covers the synthesis, structure, assembly, properties, and applications of protein-based block copolymers. PMID:21235251
Electromagnetic scattering of large structures in layered earths using integral equations
NASA Astrophysics Data System (ADS)
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to an order of O(N^2), instead of O(N^3) as with direct solvers.
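The system-iteration idea, solving substructure by substructure while holding the others fixed, can be sketched as a generic block Gauss-Seidel sweep. This is an illustration on a small dense-partitioned linear system, not the paper's integral-equation formulation; the partitioning and names are invented for the example.

```python
import numpy as np

def block_gauss_seidel(A_blocks, b_blocks, n_blocks, iters=50):
    """Solve A x = b with A partitioned into dense sub-blocks A_blocks[i][j].
    Each sweep solves substructure i exactly (a direct solve on the diagonal
    block) using the latest values of all other substructures."""
    x = [np.zeros_like(bb) for bb in b_blocks]
    for _ in range(iters):
        for i in range(n_blocks):
            r = b_blocks[i].copy()
            for j in range(n_blocks):
                if j != i:
                    r -= A_blocks[i][j] @ x[j]   # coupling to other substructures
            x[i] = np.linalg.solve(A_blocks[i][i], r)
    return x
```

Because each step inverts a whole diagonal block rather than a single scalar entry, such block sweeps typically converge in far fewer iterations than point-wise Gauss-Seidel, consistent with the numerical tests reported above.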
Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.
Roperto, Renato; Akkus, Anna; Akkus, Ozan; Lang, Lisa; Sousa-Neto, Manoel Damiao; Teich, Sorin; Porto, Thiago Soares
2016-01-01
The aim of this study was to determine the microtensile bond strength (μTBS) of ceramic and composite computer-aided design/computer-aided manufacturing (CAD/CAM) blocks bonded to dentin using different adhesive strategies. In this in vitro study, 30 crowns of sound freshly extracted human molars were sectioned horizontally 3 mm above the cementoenamel junction to produce flat dentin surfaces. Ceramic and composite CAD/CAM blocks, size 14, were sectioned into slices 3 mm thick. Before bonding, CAD/CAM block surfaces were treated according to the manufacturer's instructions. Groups were created based on the adhesive strategy used: Group 1 (GI) - conventional resin cement + total-etch adhesive system, Group 2 (GII) - conventional resin cement + self-etch adhesive system, and Group 3 (GIII) - self-adhesive resin cement with no adhesive. Bonded specimens were stored in 100% humidity for 24 h at 37°C, and then sectioned with a slow-speed diamond saw to obtain 1 mm × 1 mm × 6 mm microsticks. Microtensile testing was then conducted using a microtensile tester. μTBS values were expressed in MPa and analyzed by one-way ANOVA with post hoc (Tukey) tests at the 5% significance level. Mean values and standard deviations of μTBS (MPa) were 17.68 (±2.71) for GI/ceramic; 17.62 (±3.99) for GI/composite; 13.61 (±6.92) for GII/composite; 12.22 (±4.24) for GII/ceramic; 7.47 (±2.29) for GIII/composite; and 6.48 (±3.10) for GIII/ceramic. ANOVA indicated significant differences for the interaction between adhesive modality and block (P < 0.05), and no significant differences among blocks only, except between GI and GII/ceramic. Bond strength of GIII was consistently lower (P < 0.05) than that of GI and GII, regardless of the block used. Cementation of CAD/CAM restorations, either composite or ceramic, can be significantly affected by the adhesive strategy used.
Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee
2017-02-01
Digital refocusing has a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. Because view interpolation is conventionally expensive, we devised a fast line-scan method specifically for refocusing, whose 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, with which a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results show superior refocusing quality and fast computation; in particular, the run time is comparable with that of conventional single-image blurring, which causes serious boundary artifacts.
1990-02-01
copies P1,...,Pn of a multiple module fp resolve nondeterminism (local or global) in an identical manner. 5. The copies P1,...,Pn are physically...recovery block. A recovery block consists of a conventional block (as in ALGOL or PL/I) which is provided with a means of error detection, called an...improved failures model for communicating processes. In Proceedings, NSF-SERC Seminar on Concurrency, volume 197 of Lecture Notes in Computer Science
Block-structured grids for complex aerodynamic configurations: Current status
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Sanetrik, Mark D.; Parlette, Edward B.
1995-01-01
The status of CFD methods based on the use of block-structured grids for analyzing viscous flows over complex configurations is examined. The objective of the present study is to make a realistic assessment of the usability of such grids for routine computations typically encountered in the aerospace industry. It is recognized at the very outset that the total turnaround time, from the moment the configuration is identified until the computational results have been obtained and postprocessed, is more important than just the computational time. Pertinent examples will be cited to demonstrate the feasibility of solving flow over practical configurations of current interest on block-structured grids.
Superalloy Lattice Block Structures
NASA Technical Reports Server (NTRS)
Nathal, M. V.; Whittenberger, J. D.; Hebsur, M. G.; Kantzos, P. T.; Krause, D. L.
2004-01-01
Initial investigations of investment cast superalloy lattice block suggest that this technology will yield a low cost approach to utilize the high temperature strength and environmental resistance of superalloys in lightweight, damage tolerant structural configurations. Work to date has demonstrated that relatively large superalloy lattice block panels can be successfully investment cast from both IN-718 and Mar-M247. These castings exhibited mechanical properties consistent with the strength of the same superalloys measured from more conventional castings. The lattice block structure also accommodates significant deformation without failure, and is defect tolerant in fatigue. The potential of lattice block structures opens new opportunities for the use of superalloys in future generations of aircraft applications that demand strength and environmental resistance at elevated temperatures along with low weight.
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
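The block-size rule stated in the abstract (total data to be stored divided by the number of parallel processes) can be sketched in a few lines; the function names and the exchange-planning helper below are illustrative, not part of the patented method:

```python
def dynamic_block_size(total_bytes: int, num_procs: int) -> int:
    """Dynamically determined block size: total data / number of
    parallel processes, rounded up to cover all bytes."""
    return -(-total_bytes // num_procs)  # ceiling division

def plan_exchanges(local_sizes):
    """Given each process's locally generated byte count, return how many
    bytes each must shed (+) or absorb (-) so that every process ends up
    holding exactly one full block before writing to the file system."""
    total = sum(local_sizes)
    block = dynamic_block_size(total, len(local_sizes))
    return [size - block for size in local_sizes]
```

For example, four processes holding 30, 20, 25 and 25 bytes would settle on a 25-byte block, with the first process sending 5 bytes to the second.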
Yeh, Chun-Ting; Brunette, T J; Baker, David; McIntosh-Smith, Simon; Parmeggiani, Fabio
2018-02-01
Computational protein design methods have enabled the design of novel protein structures, but they are often still limited to small proteins and symmetric systems. To expand the size of designable proteins while controlling the overall structure, we developed Elfin, a genetic algorithm for the design of novel proteins with custom shapes using structural building blocks derived from experimentally verified repeat proteins. By combining building blocks with compatible interfaces, it is possible to rapidly build non-symmetric large structures (>1000 amino acids) that match three-dimensional geometric descriptions provided by the user. A run time of about 20min on a laptop computer for a 3000 amino acid structure makes Elfin accessible to users with limited computational resources. Protein structures with controlled geometry will allow the systematic study of the effect of spatial arrangement of enzymes and signaling molecules, and provide new scaffolds for functional nanomaterials. Copyright © 2017 Elsevier Inc. All rights reserved.
Bindewald, Eckart; Grunewald, Calvin; Boyle, Brett; O'Connor, Mary; Shapiro, Bruce A
2008-10-01
One approach to designing RNA nanoscale structures is to use known RNA structural motifs such as junctions, kissing loops or bulges and to construct a molecular model by connecting these building blocks with helical struts. We previously developed an algorithm for detecting internal loops, junctions and kissing loops in RNA structures. Here we present algorithms for automating or assisting many of the steps that are involved in creating RNA structures from building blocks: (1) assembling building blocks into nanostructures using either a combinatorial search or constraint satisfaction; (2) optimizing RNA 3D ring structures to improve ring closure; (3) sequence optimisation; (4) creating a unique non-degenerate RNA topology descriptor. This effectively creates a computational pipeline for generating molecular models of RNA nanostructures and more specifically RNA ring structures with optimized sequences from RNA building blocks. We show several examples of how the algorithms can be utilized to generate RNA tecto-shapes.
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
1980-02-08
hours 0 Input Format: Integer b. Creating Resource Allocation Blocks The creation of a specific resource allocation block as a directive component is...is directed. 0 Range: N/A. Input Format: INT/NUC/CHM b. Creating Employment Packages An employment package block has the structure portrayed in Figure
Sun, Jiedi; Yu, Yang; Wen, Jiangtao
2017-01-01
Remote monitoring of bearing conditions, using wireless sensor network (WSN), is a developing trend in the industrial field. In complicated industrial environments, WSN face three main constraints: low energy, less memory, and low operational capability. Conventional data-compression methods, which concentrate on data compression only, cannot overcome these limitations. Aiming at these problems, this paper proposes a compressed data acquisition and reconstruction scheme based on Compressed Sensing (CS), a novel signal-processing technique, and applies it to bearing condition monitoring via WSN. The compressed data acquisition is realized by projection transformation and can greatly reduce the data volume that the nodes must process and transmit. The reconstruction of original signals is achieved in the host computer by complicated algorithms. The bearing vibration signals not only exhibit the sparsity property, but also have specific structures. This paper introduces the block sparse Bayesian learning (BSBL) algorithm, which works by utilizing the block property and inherent structures of signals to reconstruct CS sparsity coefficients of transform domains and further recover the original signals. By using the BSBL, CS reconstruction can be improved remarkably. Experiments and analyses showed that the BSBL method has good performance and is suitable for practical bearing-condition monitoring. PMID:28635623
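The compressed acquisition step at the sensor node is a single projection of the signal onto a random measurement matrix; a minimal numpy sketch follows, with a toy stand-in for the bearing vibration signal (the BSBL reconstruction at the host computer is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 128                 # original and compressed lengths (4:1 reduction)
t = np.arange(n)

# Toy stand-in for a bearing vibration signal: a sum of two tones
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.12 * t)

# Random projection (measurement) matrix; the resource-limited node
# only computes y = Phi @ x and transmits m values instead of n samples
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

compression_ratio = n / m
```

Recovering x from y requires the sparsity-exploiting solver (here, BSBL) on the host side, which is far more computationally demanding than the acquisition.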
Automated drafting system uses computer techniques
NASA Technical Reports Server (NTRS)
Millenson, D. H.
1966-01-01
Automated drafting system produces schematic and block diagrams from the design engineers freehand sketches. This system codes conventional drafting symbols and their coordinate locations on standard size drawings for entry on tapes that are used to drive a high speed photocomposition machine.
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
Automatic Blocking Of QR and LU Factorizations for Locality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Q; Kennedy, K; You, H
2004-03-26
QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To efficiently perform these computations on modern computers, the factorization algorithms need to be blocked when operating on large matrices to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, more benefit, such as automatic adaptation of different blocking strategies, can be gained by automatically generating blocked versions of the computations. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, using reference BLAS, ATLAS BLAS and native BLAS specially tuned for the underlying machine architectures.
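The payoff of blocking for cache locality can be illustrated with a simple tiled matrix multiply; this is a generic example of the blocking idea, not the dependence-hoisting transformation or the LAPACK factorization blockings discussed in the abstract:

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Tiled matrix multiply: each bs x bs tile of A, B and C fits in cache,
    so data loaded once is reused across many multiply-adds."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, bs):          # loop over row tiles of C
        for j0 in range(0, m, bs):      # loop over column tiles of C
            for p0 in range(0, k, bs):  # accumulate over the inner dimension
                C[i0:i0+bs, j0:j0+bs] += (
                    A[i0:i0+bs, p0:p0+bs] @ B[p0:p0+bs, j0:j0+bs]
                )
    return C
```

Choosing the tile size bs to match the cache level being targeted is exactly the kind of blocking strategy an automatic optimizer can adapt per machine.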
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
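The independence of the diagonal blocks is what makes the factorization parallel; a simplified numpy sketch solves each block's least-squares subproblem separately (the coupling columns of the true block angular form, and the covariance computation, are omitted from this illustration):

```python
import numpy as np

def blockwise_lstsq(blocks):
    """Solve min ||A_i x_i - b_i|| for each diagonal block independently.

    In the block angular form each (A_i, b_i) can be factored on a separate
    processor; the shared coupling columns would then be handled in a second,
    much smaller reduced problem (not shown here).
    """
    return [np.linalg.lstsq(A, b, rcond=None)[0] for A, b in blocks]

rng = np.random.default_rng(1)
blocks = [(rng.standard_normal((8, 3)), rng.standard_normal(8)) for _ in range(4)]
solutions = blockwise_lstsq(blocks)
```

With no coupling, the concatenated per-block solutions match the solution of the assembled block-diagonal system, which is why the diagonal blocks can be dispatched to processors in any order.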
"Grinding" cavities in polyurethane foam
NASA Technical Reports Server (NTRS)
Brower, J. R.; Davey, R. E.; Dixon, W. F.; Robb, P. H.; Zebus, P. P.
1980-01-01
Grinding tool installed on conventional milling machine cuts precise cavities in foam blocks. Method is well suited for prototype or midsize production runs and can be adapted to computer control for mass production. Method saves time and materials compared to bonding or hot wire techniques.
Structure-property relationships in low-temperature adhesives. [for inflatable structures
NASA Technical Reports Server (NTRS)
Schoff, C. K.; Udipi, K.; Gillham, J. K.
1977-01-01
Adhesive materials of aliphatic polyester, linear hydroxyl end-capped polybutadienes, or SBS block copolymers are studied with the objective to replace conventional partially aromatic end-reactive polyester-isocyanate adhesives that have shown embrittlement
Quasi-Block Copolymers Based on a General Polymeric Chain Stopper.
Sanguramath, Rajashekharayya A; Nealey, Paul F; Shenhar, Roy
2016-07-11
Quasi-block copolymers (q-BCPs) are block copolymers consisting of conventional and supramolecular blocks, in which the conventional block is end-terminated by a functionality that interacts with the supramolecular monomer (a "chain stopper" functionality). A new design of q-BCPs based on a general polymeric chain stopper, which consists of polystyrene end-terminated with a sulfonate group (PS-SO3Li), is described. Through viscosity measurements and a detailed diffusion-ordered NMR spectroscopy study, it is shown that PS-SO3Li can effectively cap two types of model supramolecular monomers to form q-BCPs in solution. Furthermore, differential scanning calorimetry data and structural characterization of thin films by scanning force microscopy suggest the existence of the q-BCP architecture in the melt. The new design considerably simplifies the synthesis of polymeric chain stoppers, thus promoting the utilization of q-BCPs as smart, nanostructured materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel
2002-01-01
This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. A FPGA is an assemblage of binary gates arranged in logical blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm of a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy in regard to the user conveniences; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. This potential warrants further development of FPGA-tailored algorithms for structural analysis.
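The 'analog computer' style of programming mentioned above, direct term-by-term simulation of the governing equation, can be mimicked in software; this pure-Python sketch integrates an undamped mass-spring equation m·x'' + k·x = 0 stage by stage, the way such a pipeline would be wired up (on the FPGA all the stages would evaluate concurrently):

```python
def simulate_mass_spring(m, k, x0, v0, dt, steps):
    """Direct simulation of m*x'' + k*x = 0: each line below corresponds
    to one 'wired' stage of an analog-computer-style pipeline."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        a = -(k / m) * x   # acceleration from the governing equation
        v += a * dt        # integrate acceleration -> velocity
        x += v * dt        # integrate velocity -> displacement
        xs.append(x)
    return xs
```

Using the updated velocity when advancing displacement (semi-implicit Euler) keeps the oscillation amplitude bounded, a reasonable default for this kind of direct simulation.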
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods such as the linear aerodynamic flow equations coupled with the modal structural equations are well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST) which can experience complex flow/structure interactions. HSCT can experience vortex induced aeroelastic oscillations whereas AST can experience transonic buffet associated structural oscillations. Both aircraft may experience a dip in the flutter speed at the transonic regime. For accurate aeroelastic computations at these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes for fluids and the finite-elements for structures are needed. Computations using these high fidelity equations require large computational resources both in memory and speed. Current conventional supercomputers have reached their limitations both in memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on iPSC/860 and IBM SP2 computers by using the ENSAERO code that directly couples the Euler/Navier-Stokes flow equations with high resolution finite-element structural equations.
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung
2014-08-01
This paper focuses on the hybridization technology using rough sets concepts and neural computing for decision and classification purposes. Based on the rough set properties, the lower region and boundary region are defined to partition the input signal to a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of an inconsistent part of applied input signal causing inaccurate modeling of the data set. Owing to different characteristics of neural network (NN) applications, the same structure of conventional NN might not give the optimal solution. Based on the knowledge of application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and adaptability in dynamic environments. This architecture will systematically incorporate the characteristics of application to the structure of hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation is introduced for parameter optimization of proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated by an application to the field of medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some of the existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.
De Stavola, Luca; Fincato, Andrea; Albiero, Alberto Maria
2015-01-01
During autogenous mandibular bone harvesting, there is a risk of damage to anatomical structures, as the surgeon has no three-dimensional control of the osteotomy planes. The aim of this proof-of-principle case report is to describe a procedure for harvesting a mandibular bone block that applies a computer-guided surgery concept. A partially dentate patient who presented with two vertical defects (one in the maxilla and one in the mandible) was selected for an autogenous mandibular bone block graft. The bone block was planned using a computer-aided design process, with ideal bone osteotomy planes defined beforehand to prevent damage to anatomical structures (nerves, dental roots, etc.) and to generate a surgical guide, which defined the working directions in three dimensions for the bone-cutting instrument. Bone block dimensions were planned so that both defects could be repaired. The projected bone block was 37.5 mm in length, 10 mm in height, and 5.7 mm in thickness, and it was grafted in two vertical bone augmentations: an 8 × 21-mm mandibular defect and a 6.5 × 18-mm defect in the maxilla. Superimposition of the preoperative and postoperative computed tomographic images revealed a procedure accuracy of 0.25 mm. This computer-guided bone harvesting technique enables clinicians to obtain sufficient autogenous bone to manage multiple defects safely.
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
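The rsync-style comparison against a template checkpoint can be sketched with standard-library hashing and compression; the block size, MD5 digests, and zlib codec below are illustrative choices, not details of the patented system:

```python
import hashlib
import zlib

def block_checksums(data: bytes, bs: int):
    """Per-block checksums of a template checkpoint."""
    return [hashlib.md5(data[i:i + bs]).hexdigest()
            for i in range(0, len(data), bs)]

def delta_against_template(data: bytes, template_sums, bs: int):
    """Return only the blocks whose checksum differs from the template
    (rsync-style), each compressed with a conventional non-lossy codec.
    Matching blocks need not be transmitted or stored at all."""
    out = []
    for idx, i in enumerate(range(0, len(data), bs)):
        block = data[i:i + bs]
        digest = hashlib.md5(block).hexdigest()
        if idx >= len(template_sums) or digest != template_sums[idx]:
            out.append((idx, zlib.compress(block)))
    return out
```

When a node's state differs from the template in only a few blocks, the checkpoint traffic shrinks to those blocks plus their indices.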
Optical design of cipher block chaining (CBC) encryption mode by using digital holography
NASA Astrophysics Data System (ADS)
Gil, Sang Keun; Jeon, Seok Hee; Jung, Jong Rae; Kim, Nam
2016-03-01
We propose an optical design of cipher block chaining (CBC) encryption by using digital holographic technique, which has higher security than the conventional electronic method because of the analog-type randomized cipher text with 2-D array. In this paper, an optical design of CBC encryption mode is implemented by 2-step quadrature phase-shifting digital holographic encryption technique using orthogonal polarization. A block of plain text is encrypted with the encryption key by applying 2-step phase-shifting digital holography, and it is changed into cipher text blocks which are digital holograms. These ciphered digital holograms with the encrypted information are Fourier transform holograms and are recorded on CCDs with 256 gray levels quantized intensities. The decryption is computed by these encrypted digital holograms of cipher texts, the same encryption key and the previous cipher text. Results of computer simulations are presented to verify that the proposed method shows the feasibility in the high secure CBC encryption system.
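The CBC chaining rule itself, C_i = E_K(P_i XOR C_{i-1}), is the conventional part of the scheme; a pure-Python sketch follows, with a toy self-inverse XOR "cipher" standing in for the 2-step phase-shifting holographic encryption step:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plain_blocks, key, iv, encrypt_block):
    """C_i = E_K(P_i XOR C_{i-1}), with C_0 chained from the IV."""
    prev, out = iv, []
    for p in plain_blocks:
        c = encrypt_block(xor_bytes(p, prev), key)
        out.append(c)
        prev = c
    return out

def cbc_decrypt(cipher_blocks, key, iv, decrypt_block):
    """P_i = D_K(C_i) XOR C_{i-1}: needs the key, IV and previous cipher text."""
    prev, out = iv, []
    for c in cipher_blocks:
        out.append(xor_bytes(decrypt_block(c, key), prev))
        prev = c
    return out

def toy_cipher(block: bytes, key: bytes) -> bytes:
    # Self-inverse XOR stand-in for the holographic encryption primitive
    return xor_bytes(block, key)
```

The chaining is what randomizes repeated plaintext: two identical blocks produce different cipher text because each is first XORed with the previous cipher block.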
Compact microwave lamp having a tuning block and a dielectric located in a lamp cavity
Simpson, James E.
2000-01-01
A microwave lamp having a compact structure utilizing a coupling slot which has a dielectric member extending therethrough and a tuning block adjoining the coupling slot. A non-conventional waveguide is used which has about the width of a WR-284 waveguide and about the length of a WR-340 waveguide.
Fast realization of nonrecursive digital filters with limits on signal delay
NASA Astrophysics Data System (ADS)
Titov, M. A.; Bondarenko, N. N.
1983-07-01
Attention is given to the problem of achieving a fast realization of nonrecursive digital filters with the aim of reducing signal delay. It is shown that a realization wherein the impulse characteristic of the filter is divided into blocks satisfies the delay requirements and is almost as economical in terms of the number of multiplications as conventional fast convolution. In addition, the block method leads to a reduction in the needed size of the memory and in the number of additions; the short-convolution procedure is substantially simplified. Finally, the block method facilitates the paralleling of computations owing to the simple transfers between subfilters.
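Block realizations of this kind are closely related to overlap-add fast convolution, where the input is filtered one short block at a time so output appears after roughly one block of delay instead of after the whole signal; a numpy sketch (block length and filter are illustrative):

```python
import numpy as np

def overlap_add(x, h, block=64):
    """Block-wise FFT convolution of a long signal x with an FIR filter h.

    Each block is linearly convolved with h via a zero-padded FFT, and the
    tails of successive block outputs are overlapped and added."""
    L = len(h)
    n_fft = 1
    while n_fft < block + L - 1:   # FFT size large enough to avoid wraparound
        n_fft *= 2
    H = np.fft.rfft(h, n_fft)      # filter spectrum computed once
    y = np.zeros(len(x) + L - 1)
    for i in range(0, len(x), block):
        seg = x[i:i + block]
        yi = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
        y[i:i + len(seg) + L - 1] += yi[:len(seg) + L - 1]
    return y
```

The short per-block transforms are also cheaper and easier to parallelize than one full-length fast convolution, echoing the memory and transfer advantages noted in the abstract.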
Heterocyclic energetic materials: Synthesis, characterization and computational design
NASA Astrophysics Data System (ADS)
Tsyshevsky, Roman; Pagoria, Philip; Smirnov, Aleksander; Kuklja, Maija
2017-06-01
Achievement of the tailored properties (high performance, low sensitivity, etc.) in targeted new energetic materials (EM) remains a great challenge. Recently, attention of researchers has shifted from conventional nitroester-, nitramine-, and nitroaromatic-based explosives to new heterocyclic EM with oxygen- and nitrogen-rich molecular structures. They have increased densities and formation enthalpies complemented by attractive performance and high stability to external stimuli. We will demonstrate that oxadiazole-containing heterocycles offer a convenient playground to probe specific chemical functional groups as building blocks for design of EM. We discuss a joint experimental and computational approach for design, characterization, synthesis, and modeling of novel heterocyclic EM. Combinatorically, we comprehensively analyzed how overall stability and performance of each material in the family (BNFF, LLM-172, LLM-175, LLM-191, LLM-192, LLM-200) depends upon their chemical composition and details of the molecular structure (such as a substitution of a nitro group by an amino group and 1,2,5-oxadiazole fragment by 1,2,3- or 1,2,4-oxadiazole ring). We will also discuss proposed new EM with predicted superior chemical and physical properties.
Cone beam computed tomography in the diagnosis of dental disease.
Tetradis, Sotirios; Anstey, Paul; Graff-Radford, Steven
2011-07-01
Conventional radiographs provide important information for dental disease diagnosis. However, they represent 2-D images of 3-D objects with significant structure superimposition and unpredictable magnification. Cone beam computed tomography, however, allows true 3-D visualization of the dentoalveolar structures, avoiding major limitations of conventional radiographs. Cone beam computed tomography images offer great advantages in disease detection for selected patients. The authors discuss cone beam computed tomography applications in dental disease diagnosis, reviewing the pertinent literature when available.
Combinatorics of γ-structures.
Han, Hillary S W; Li, Thomas J X; Reidys, Christian M
2014-08-01
In this article we study canonical γ-structures, a class of RNA pseudoknot structures that plays a key role in the context of polynomial time folding of RNA pseudoknot structures. A γ-structure is composed of specific building blocks that have topological genus less than or equal to γ, where composition means concatenation and nesting of such blocks. Our main result is the derivation of the generating function of γ-structures via symbolic enumeration using so-called irreducible shadows. We furthermore recursively compute the generating polynomials of irreducible shadows of genus ≤ γ. The γ-structures are constructed via γ-matchings. For 1 ≤ γ ≤ 10, we compute Puiseux expansions at the unique, dominant singularities, allowing us to derive simple asymptotic formulas for the number of γ-structures.
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
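Per grid point, the FTLE is sigma = ln(sqrt(lambda_max(F^T F))) / |T|, where F is the Jacobian of the flow map over integration time T; a serial numpy sketch of that kernel follows (the paper's CUDA block decomposition and the multi-body integration that produces the flow map are omitted):

```python
import numpy as np

def ftle_field(flow_map_x, flow_map_y, dx, dy, T):
    """FTLE from a precomputed 2D flow map (final positions after time T)."""
    # Jacobian components of the flow map via finite differences
    dphix_dx, dphix_dy = np.gradient(flow_map_x, dx, dy)
    dphiy_dx, dphiy_dy = np.gradient(flow_map_y, dx, dy)
    ftle = np.zeros_like(flow_map_x)
    for i in range(flow_map_x.shape[0]):
        for j in range(flow_map_x.shape[1]):
            F = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                          [dphiy_dx[i, j], dphiy_dy[i, j]]])
            C = F.T @ F                      # Cauchy-Green strain tensor
            lam = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
            ftle[i, j] = np.log(np.sqrt(lam)) / abs(T)
    return ftle
```

Because every grid point is independent, the double loop maps naturally onto GPU threads, and tiling the field into blocks (as the paper does) bounds memory per kernel launch for arbitrarily large grids and timespans.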
Bolliger, Stephan A; Thali, Michael J; Bolliger, Michael J; Kneubuehl, Beat P
2010-11-01
By measuring the total crack lengths (TCL) along a gunshot wound channel simulated in ordnance gelatine, one can calculate the energy transferred by a projectile to the surrounding tissue along its course. Visual quantitative TCL analysis of cut slices in ordnance gelatine blocks is unreliable due to the poor visibility of cracks and the likely introduction of secondary cracks resulting from slicing. Furthermore, gelatine TCL patterns are difficult to preserve because of the deterioration of the internal structures of gelatine with age and the tendency of gelatine to decompose. By contrast, using computed tomography (CT) software for TCL analysis in gelatine, cracks on 1-cm thick slices can be easily detected, measured and preserved. In this experiment, CT TCL analyses were applied to gunshots fired into gelatine blocks by three different ammunition types (9-mm Luger full metal jacket, .44 Remington Magnum semi-jacketed hollow point and 7.62 × 51 RWS Cone-Point). The resulting TCL curves reflected the three projectiles' capacity to transfer energy to the surrounding tissue very accurately and showed clearly the typical energy transfer differences. We believe that CT is a useful tool in evaluating gunshot wound profiles using the TCL method and is indeed superior to conventional methods applying physical slicing of the gelatine.
Unilateral cervical plexus block for prosthetic laryngoplasty in the standing horse.
Campoy, L; Morris, T B; Ducharme, N G; Gleed, R D; Martin-Flores, M
2018-04-20
Locoregional anaesthetic techniques can facilitate certain surgeries being performed under standing procedural sedation. The second and third spinal cervical nerves (C2, C3) are part of the cervical plexus and provide sensory innervation to the peri-laryngeal structures in people; block of these nerves might permit laryngeal lateralisation surgery in horses. To describe the anatomical basis for an ultrasound-guided cervical plexus block in horses. To compare this block with conventional local anaesthetic tissue infiltration in horses undergoing standing prosthetic laryngoplasty. Cadaveric study followed by a double-blinded prospective clinical trial. A fresh equine cadaver was dissected to characterise the distribution of C2 and C3 to the perilaryngeal structures on the left side. A second cadaver was utilised to correlate ultrasound images with the previously identified structures; a tissue marker was injected to confirm the feasibility of an ultrasound-guided approach to the cervical plexus. In the clinical study, horses were assigned to two groups, CP (n = 17; cervical plexus block) and INF (n = 17; conventional tissue infiltration). Data collection and analyses included time to completion of surgical procedure, sedation time, surgical field conditions and surgeon's perception of block quality. We confirmed that C2 and C3 provided innervation to the perilaryngeal structures. The nerve root of C2 was identified ultrasonographically located between the longus capitis and the cleidomastoideus muscles, caudal to the parotid gland. The CP group was deemed to provide better (P<0.0002) surgical conditions with no differences in the other variables measured. Further studies with larger numbers of horses may be necessary to detect smaller differences in surgical procedure completion time based on the improved surgical field conditions.
For standing unilateral laryngeal surgery, a cervical plexus block is a viable alternative to tissue infiltration and it improves the surgical field conditions. © 2018 EVJ Ltd.
Structure and Dynamics of Ionic Block Copolymer Melts: Computational Study
Aryal, Dipak; Agrawal, Anupriya; Perahia, Dvora; ...
2017-09-06
Structure and dynamics of melts of copolymers with an ABCBA topology, where C is an ionizable block, have been studied by fully atomistic molecular dynamics (MD) simulations. Introducing an ionizable block for functionality adds a significant element to the coupled set of interactions that determine the structure and dynamics of the macromolecule. The polymer consists of a randomly sulfonated polystyrene C block tethered to a flexible poly(ethylene-r-propylene) bridge B and end-capped with poly(tert-butylstyrene) A. The chemical structure and topology of these polymers constitute a model for incorporation of ionic blocks within a framework that provides tactility and mechanical stability. Here, in this paper, we resolve the structure and dynamics of a structured polymer on the nanoscale constrained by ionic clusters. We find that the melts form intertwined networks of the A and C blocks independent of the degree of sulfonation of the C block with no long-range order. The cluster cohesiveness and morphology affect both macroscopic translational motion and segmental dynamics of all the blocks.
Özer, Senem; Yaltirik, Mehmet; Kirli, Irem; Yargic, Ilhan
2012-11-01
The aim of this study was to compare anxiety and pain levels during anesthesia, and the efficacy of the Quicksleeper intraosseous (IO) injection system, which delivers computer-controlled IO anesthesia, with those of conventional inferior alveolar nerve block (IANB) in impacted mandibular third molars. Forty subjects with bilateral impacted mandibular third molars randomly received IO injection or conventional IANB at 2 successive appointments. The subjects received 1.8 mL of 2% articaine. IO injection has many advantages, such as enabling painless anesthesia with less soft tissue numbness, quick onset of anesthesia, and lingual and palatal anesthesia with a single needle penetration. Although IO injection is a useful technique commonly used during various treatments in dentistry, the duration of injection takes longer than conventional techniques, there is a possibility of obstruction at the needle tip, and the duration of the anesthetic effect is inadequate for prolonged surgical procedures. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter
2016-06-30
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
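The randomized-sampling idea underlying HSS compression can be sketched in a few lines: multiply the off-diagonal block by a random matrix to sample its range, orthonormalize, and project. This is a generic fixed-rank range finder on a synthetic kernel matrix, not STRUMPACK's adaptive scheme; the matrix, rank, and oversampling choices below are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# A smooth kernel matrix; its off-diagonal blocks have rapidly
# decaying singular values, i.e. low numerical rank
i = np.arange(n)
K = 1.0 / (1.0 + np.abs(i[:, None] - i[None, :]))
B = K[:n // 2, n // 2:]              # one off-diagonal block

k, p = 12, 8                          # target rank and oversampling
Omega = rng.standard_normal((B.shape[1], k + p))
Q, _ = np.linalg.qr(B @ Omega)        # orthonormal basis for the sampled range
B_approx = Q @ (Q.T @ B)              # rank-(k+p) approximation of the block

err = np.linalg.norm(B - B_approx) / np.linalg.norm(B)
```

In an HSS representation this compression is applied recursively across a hierarchy of off-diagonal blocks, which is what enables the fast matrix-vector products and factorizations described in the abstract.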
Adaptive 3D single-block grids for the computation of viscous flows around wings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmeijer, R.; Kok, J.C.
1996-12-31
A robust algorithm for the adaption of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaption on the flow solution and on accuracy improvements are analyzed. Reynolds number variations are studied.
Composite sandwich structure and method for making same
NASA Technical Reports Server (NTRS)
Magurany, Charles J. (Inventor)
1995-01-01
A core for a sandwich structure which has multi-ply laminate ribs separated by voids is made as an integral unit in one single curing step. Tooling blocks corresponding to the voids are first wrapped by strips of prepreg layup equal to one half of each rib laminate so a continuous wall of prepreg material is formed around the tooling blocks. The wrapped tooling blocks are next pressed together laterally, like tiles, so adjoining walls from two tooling blocks are joined. The assembly is then cured by conventional methods, and afterwards the tooling blocks are removed so voids are formed. The ribs can be provided with integral tabs forming bonding areas for face sheets, and face sheets may be co-cured with the core ribs. The new core design is suitable for discrete rib cores used in space telescopes and reflector panels, where quasi-isotropic properties and zero coefficient of thermal expansion are required.
Study on turbulent flow and heat transfer performance of tubes with internal fins in EGR cooler
NASA Astrophysics Data System (ADS)
Liu, Lin; Ling, Xiang; Peng, Hao
2015-07-01
In this paper, flow and heat transfer performances of tubes with internal longitudinal fins in an Exhaust Gas Recirculation (EGR) cooler were investigated by three-dimensional computation and experiment. Each test tube was a single-pipe structure without an inner tube. Three-dimensional computation was performed to determine the difference in thermal characteristics between the two kinds of tubes, that is, the tube with an inner solid shaft as a blocked structure and the tube without the blocked structure. The effects of fin width and fin height on heat transfer and flow were examined. To prove the validity of the numerical method, the calculated results were compared with corresponding experimental data. The tube-side friction factor and heat transfer coefficient were examined. As a result, the maximum deviations between the numerical results and the experimental data are approximately 5.4% for the friction factor and 8.6% for the heat transfer coefficient, respectively. It is found that both types of internally finned tubes significantly enhance heat transfer. The heat transfer of the tube with the blocked structure is better, while the pressure drop of the tube without the blocked structure is lower. The comprehensive performance of the unblocked tube makes it better suited for application in EGR coolers.
Efficient Multiplexer FPGA Block Structures Based on G4FETs
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Fijany, Amir
2009-01-01
Generic structures have been conceived for multiplexer blocks to be implemented in field-programmable gate arrays (FPGAs) based on four-gate field-effect transistors (G(sup 4)FETs). This concept is a contribution to the continuing development of digital logic circuits based on G4FETs and serves as a further demonstration that logic circuits based on G(sup 4)FETs could be more efficient (in the sense that they could contain fewer transistors), relative to functionally equivalent logic circuits based on conventional transistors. Results in this line of development at earlier stages were summarized in two previous NASA Tech Briefs articles: "G(sup 4)FETs as Universal and Programmable Logic Gates" (NPO-41698), Vol. 31, No. 7 (July 2007), page 44, and "Efficient G4FET-Based Logic Circuits" (NPO-44407), Vol. 32, No. 1 (January 2008), page 38. As described in the first-mentioned previous article, a G4FET can be made to function as a three-input NOT-majority gate, which has been shown to be a universal and programmable logic gate. The universality and programmability could be exploited to design logic circuits containing fewer components than are required for conventional transistor-based circuits performing the same logic functions. The second-mentioned previous article reported results of a comparative study of NOT-majority-gate (G(sup 4)FET)-based logic-circuit designs and equivalent NOR- and NAND-gate-based designs utilizing conventional transistors. [NOT gates (inverters) were also included, as needed, in both the G(sup 4)FET- and the NOR- and NAND-based designs.] In most of the cases studied, fewer logic gates (and, hence, fewer transistors) were required in the G(sup 4)FET-based designs. There are two popular categories of FPGA block structures or architectures: one based on multiplexers, the other based on lookup tables.
In standard multiplexer-based architectures, the basic building block is a tree-like configuration of multiplexers, with possibly a few additional logic gates such as ANDs or ORs. Interconnections are realized by means of programmable switches that may connect the input terminals of a block to output terminals of other blocks, may bridge together some of the inputs, or may connect some of the input terminals to signal sources representing constant logical levels 0 or 1. The left part of the figure depicts a four-to-one G(sup 4)FET-based multiplexer tree; the right part of the figure depicts a functionally equivalent four-to-one multiplexer based on conventional transistors. The G(sup 4)FET version would contain 54 transistors; the conventional version contains 70 transistors.
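Independent of the transistor technology, the tree-like composition of multiplexers described above can be sketched at the logic level (the Python modeling and names are ours): a four-to-one multiplexer built from three two-to-one multiplexers.

```python
def mux2(s, a, b):
    # Two-to-one multiplexer: output a when s == 0, b when s == 1
    return b if s else a

def mux4(s1, s0, d):
    # Four-to-one multiplexer as a tree of three two-to-one muxes;
    # (s1, s0) is the binary index of the selected data input d[0..3]
    return mux2(s1, mux2(s0, d[0], d[1]), mux2(s0, d[2], d[3]))
```

An FPGA block then programs the select and data terminals of such a tree, tying some inputs to constants 0 or 1, to realize arbitrary logic functions.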
A review on past and present development on the interlocking loadbearing hollow block (ILHB) system
NASA Astrophysics Data System (ADS)
Bosro, M. Z. M.; Samad, A. A. A.; Mohamad, N.; Goh, W. I.; Tambichik, M. A.; Iman, M. A.
2018-04-01
Massive migration and an increasing population in Malaysia have contributed to the increasing demand for quality and affordable housing. Over the past 50 years, the Malaysian housing industry has seen the growth of conventional construction systems such as reinforced concrete frame structures and bricks. The conventional system, as agreed by many researchers, causes delays and other disadvantages in some construction projects. Thus, the utilization of the interlocking loadbearing hollow block (ILHB) system is needed to address these issues. This system has been identified as an alternative and sustainable building system for the construction industry in Malaysia, of which the PUTRA block system is the latest example developed. The system offers various advantages in terms of speed and cost of construction, strength, environmental friendliness and aesthetic qualities. Despite these advantages, this system has not been practically applied and developed in Malaysia. Therefore, this paper aims to review the past and present development of the interlocking loadbearing hollow block (ILHB) system available locally and globally.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for the various transform block sizes of 4-, 8-, 16-, and 32-orders and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in prediction stages. For rate and distortion estimation at the CU level, two orthogonal 4×4 and 8×8 matrices applied to the WHT are newly designed in a butterfly structure using only addition and shift operations. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of HEVC encoders with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
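The multiplication-free butterfly structure the authors exploit can be illustrated for the 4-order case with a generic Walsh-Hadamard butterfly in natural order (a sketch of the WHT building block, not the paper's exact transform pair):

```python
import numpy as np

def wht4(x):
    # 4-point Walsh-Hadamard transform in two butterfly stages,
    # using additions/subtractions only (no multiplications)
    x0, x1, x2, x3 = x
    a, b = x0 + x1, x0 - x1      # stage 1
    c, d = x2 + x3, x2 - x3
    return np.array([a + c, b + d, a - c, b - d])   # stage 2

# Equivalent matrix form (natural/Hadamard order), for reference
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])
```

The cost is 8 additions versus 16 multiplications and 12 additions for a direct matrix product, which is why WHT-based integer transforms suit low-power hardware.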
Cabo, Candido
2014-01-01
Initiation of cardiac arrhythmias typically follows one or more premature impulses either occurring spontaneously or applied externally. In this study, we characterize the dynamics of propagation of single (S2) and double premature impulses (S3), and the mechanisms of block of premature impulses at structural heterogeneities caused by remodeling of gap junctional conductance (Gj) in infarcted myocardium. Using a sub-cellular computer model of infarcted tissue, we found that |INa,max|, prematurity (coupling interval with the previous impulse), and conduction velocity (CV) of premature impulses change dynamically as they propagate away from the site of initiation. There are fundamental differences between the dynamics of propagation of S2 and S3 premature impulses: for S2 impulses |INa,max| recovers fast, prematurity decreases and CV increases as propagation proceeds; for S3 impulses low values of |INa,max| persist, prematurity could increase, and CV could decrease as impulses propagate away from the site of initiation. As a consequence it is more likely that S3 impulses block at sites of structural heterogeneities causing source/sink mismatch than S2 impulses block. Whether premature impulses block at Gj heterogeneities or not is also determined by the values of Gj (and the space constant λ) in the regions proximal and distal to the heterogeneity: when λ in the direction of propagation increases >40%, premature impulses could block. The maximum slope of CV restitution curves for S2 impulses is larger than for S3 impulses. 
In conclusion: (1) The dynamics of propagation of premature impulses make it more likely that S3 impulses block at sites of structural heterogeneities than S2 impulses do; (2) Structural heterogeneities causing an increase in λ (or CV) of >40% could result in block of premature impulses; (3) A decrease in the maximum slope of CV restitution curves of propagating premature impulses is indicative of an increased potential for block at structural heterogeneities. PMID:25566085
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains only the most likely code-word and its metric for a given received sequence r = (r(sub 1), r(sub 2),...,r(sub n)). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
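As a concrete baseline for the trellis-based decoding being improved upon, here is a minimal Viterbi decoder over the two-state trellis of a single-parity-check block code (a toy example of ours; the RMLD algorithm of the article operates on sectionalized trellises of general linear block codes):

```python
def viterbi_spc(r):
    """Hard-decision ML (minimum Hamming distance) decoding of the
    length-n single-parity-check code via its two-state trellis.
    State = running parity; valid codewords must end in state 0."""
    INF = float("inf")
    metric = {0: 0.0, 1: INF}          # path metric per state
    path = {0: [], 1: []}              # survivor path per state
    for ri in r:
        new_metric = {0: INF, 1: INF}
        new_path = {0: [], 1: []}
        for s in (0, 1):
            if metric[s] == INF:
                continue
            for b in (0, 1):           # branch labeled with code bit b
                ns = s ^ b             # next parity state
                m = metric[s] + (b != ri)
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_path[ns] = path[s] + [b]
        metric, path = new_metric, new_path
    return path[0], metric[0]          # best even-parity codeword, its distance
```

RMLD avoids building such a trellis over the full length n, instead combining metric tables of short sections recursively, which is where its computational savings come from.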
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point scattering targets based on tensor modeling. In a real-world scenario, scatterers usually distribute in a block sparse pattern. Such a distribution feature has been scarcely utilized by previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) with a dictionary constructing procedure in this research. The simulation experiments for ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance which often degrade the imaging quality of conventional methods. The computational resource requirement is further investigated in this paper. As a consequence of the algorithm complexity analysis, the present method is superior in resource consumption to the classic matching pursuit method. The imaging implementations for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
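The higher-order SVD that the method builds on can be sketched generically (a full-rank HOSVD of a small dense tensor; the paper's dictionary construction and block-sparse recovery are not shown, and the helper names are ours):

```python
import numpy as np

def unfold(T, n):
    # Mode-n unfolding: mode n becomes the rows of a matrix
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_mult(T, M, n):
    # Mode-n product T x_n M (M multiplies the n-th mode of T)
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))

# Factor matrices: left singular vectors of each mode unfolding
U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]

# Core tensor, then (exact, since full rank is kept) reconstruction
core = T
for n, Un in enumerate(U):
    core = mode_mult(core, Un.T, n)
recon = core
for n, Un in enumerate(U):
    recon = mode_mult(recon, Un, n)
```

Truncating the columns of each factor matrix yields the low-multilinear-rank approximation that sparse tensor methods exploit.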
Organic photovoltaic cell incorporating electron conducting exciton blocking layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrest, Stephen R.; Lassiter, Brian E.
2014-08-26
The present disclosure relates to photosensitive optoelectronic devices including a compound blocking layer located between an acceptor material and a cathode, the compound blocking layer including: at least one electron conducting material, and at least one wide-gap electron conducting exciton blocking layer. For example, 3,4,9,10 perylenetetracarboxylic bisbenzimidazole (PTCBI) and 1,4,5,8-napthalene-tetracarboxylic-dianhydride (NTCDA) function as electron conducting and exciton blocking layers when interposed between the acceptor layer and cathode. Both materials serve as efficient electron conductors, leading to a fill factor as high as 0.70. By using an NTCDA/PTCBI compound blocking layer structure, increased power conversion efficiency is achieved, compared to an analogous device using conventional blocking layers shown to conduct electrons via damage-induced midgap states.
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
NASA Astrophysics Data System (ADS)
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method, showing promising capabilities over conventional regularization.
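The proximal machinery behind such L1-regularized reconstruction can be illustrated with a generic ISTA iteration on a synthetic sparse-recovery problem. This is our simplification: an unconstrained Lagrangian form with a random sensing matrix, not the paper's constrained BMSR program on CT data, and all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (the core step of the L1 prox method)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)     # sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true                                   # incomplete observations

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    # Gradient step on the data term, then the L1 proximal (shrinkage) step
    x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
```

In BMSR the shrinkage would act on block-matched transform coefficients rather than on the image directly, but the gradient/prox alternation has the same shape.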
NASA Technical Reports Server (NTRS)
Wohlen, R. L.
1976-01-01
Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.
Effects of Interlocking and Supporting Conditions on Concrete Block Pavements
NASA Astrophysics Data System (ADS)
Mahapatra, Geetimukta; Kalita, Kuldeep
2018-02-01
Concrete Block Paving (CBP) is widely used as a wearing course in flexible pavements, preferably under light and medium vehicular loadings. Construction of CBP at site is quick, and quality control is easy. Usually, flexible pavement design philosophy is followed in CBP construction, though it is structurally different in terms of small block elements with high strength concrete and their interlocking aspects, frequent joints and discontinuity, restrained edges, etc. Analytical solution for such group action of concrete blocks under loading in a three-dimensional multilayer structure is complex; thus, experimental studies are needed for extensive understanding of the load-deformation characteristics and behavior of concrete blocks in pavement. The present paper focuses on experimental studies of the load transfer characteristics of CBP under different interlocking and supporting conditions. It is observed that both interlocking and supporting conditions significantly affect the load transfer behavior in CBP structures. The Coro-lock block exhibits better performance in terms of load carrying capacity and distortion behavior under static loads. Plate load tests are performed over subgrade, granular sub-base (GSB), and CBP with and without GSB using different block shapes. For an example case, the comparison of CBP with a conventional flexible pavement section is also presented, and it is found that CBP provides considerable benefit in terms of construction cost of the road structure.
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where within the considered error model a threshold on the order of 10% local noise per particle is found for fault-tolerant quantum computation and quantum communication. PMID:24946906
NASA Astrophysics Data System (ADS)
Kutulakos, Kyros N.; O'Toole, Matthew
2015-03-01
Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.
NASA Astrophysics Data System (ADS)
Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain
2014-11-01
Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
NASA Astrophysics Data System (ADS)
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.
2018-01-01
We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
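The paper's preconditioned solver and nuclear-physics-specific preconditioner are not reproduced here; as a minimal illustration of the block-iteration idea it builds on, the following sketch (all names ours, no preconditioner) computes the lowest eigenpairs of a small symmetric matrix by plain block subspace iteration on a shifted operator, finishing with a Rayleigh-Ritz step:

```python
import numpy as np

def block_subspace_iteration(A, k, iters=500, seed=0):
    """Plain block (subspace) iteration on the shifted operator c*I - A,
    so the smallest eigenvalues of A become the dominant ones; a final
    Rayleigh-Ritz step extracts approximate eigenpairs from the block."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    c = np.linalg.norm(A, 1)          # induced 1-norm bounds the spectral radius
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]
    for _ in range(iters):
        # one block step: apply the operator to all k vectors, re-orthonormalize
        V = np.linalg.qr((c * np.eye(n) - A) @ V)[0]
    H = V.T @ A @ V                   # Rayleigh-Ritz projection onto the block
    w, S = np.linalg.eigh(H)
    return w, V @ S                   # ascending Ritz values, Ritz vectors
```

Working on k vectors at once is what lets such methods trade data movement for dense matrix-matrix arithmetic, the concurrency benefit the abstract describes.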
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
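DIY2 itself is a C++ library and its API is not shown in the abstract; the following Python toy (names ours, not DIY2's) illustrates the block-parallel pattern it describes: decompose data into blocks, map a local computation over the blocks with a thread pool, then combine per-block results with a reduction.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(n, nblocks):
    """Split range(n) into up to nblocks contiguous blocks."""
    step = (n + nblocks - 1) // nblocks
    return [range(i, min(i + step, n)) for i in range(0, n, step)]

def local_sum_of_squares(block):
    # the "local computation" executed independently on each block
    return sum(i * i for i in block)

def block_parallel_sum(n, nblocks=4, nthreads=2):
    blocks = decompose(n, nblocks)
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        partials = list(pool.map(local_sum_of_squares, blocks))
    return sum(partials)  # global reduction over per-block results
```

Because each block is self-contained, the same program can be run serially, multithreaded, or (with a communication layer) across processes, which is the portability point the abstract makes.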
NASA Astrophysics Data System (ADS)
Usman, Muhammad; Saba, Kiran; Han, Dong-Pyo; Muhammad, Nazeer
2018-01-01
A high-efficiency green GaAlInN-based light-emitting diode (LED) with a peak emission wavelength of ∼510 nm is proposed. By introducing a quaternary quantum well (QW) along with a quaternary barrier (QB) and a quaternary electron blocking layer (EBL) in a single structure, an efficiency-droop reduction of up to 29% has been achieved in comparison to the conventional GaN-based LED. The proposed structure has a significantly reduced electrostatic field in the active region. As a result, carrier leakage has been minimized and the spontaneous emission rate has been doubled.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
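The chapter's sectionalized trellises and compare-select-add variant are not reproduced in the abstract; as a baseline sketch of trellis decoding, the following hard-decision Viterbi decoder (our own minimal construction, using the standard add-compare-select step and the small rate-1/2 (7,5) convolutional code for concreteness; the same trellis search applies to block-code trellises) decodes a terminated trellis:

```python
G = (0b111, 0b101)  # (7,5) rate-1/2 convolutional code, constraint length 3

def encode(bits):
    """Encode bits plus two tail bits so the trellis ends in state 0."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state                     # (input, prev, prev-prev)
        out += [bin(reg & g).count('1') & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received, nbits):
    """Hard-decision Viterbi: add (branch metric), compare, select."""
    INF = float('inf')
    metric = [0] + [INF] * 3                       # start in state 0
    paths = [[] for _ in range(4)]
    for t in range(nbits + 2):
        r = received[2 * t:2 * t + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                exp = [bin(reg & g).count('1') & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(exp, r))
                if m < new_metric[ns]:             # compare-select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:nbits]                        # terminated in state 0
```

The decoder keeps one survivor per state, so complexity grows with the number of trellis states rather than the number of codewords, which is the reduction the chapter exploits.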
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
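The paper's LSF-integrated tools are not specified in code; the core assignment problem they solve can be sketched with a simple greedy heuristic (names and cost model ours, not the authors'): give each block, largest first, to the processor that would finish it earliest given its speed.

```python
def assign_blocks(block_costs, proc_speeds):
    """Greedy heterogeneous load balancing: blocks sorted by cost
    (largest first), each placed on the processor with the earliest
    projected finish time.  Returns (block -> processor, makespan)."""
    finish = [0.0] * len(proc_speeds)   # projected finish time per processor
    assignment = {}
    for blk in sorted(range(len(block_costs)), key=lambda b: -block_costs[b]):
        p = min(range(len(proc_speeds)),
                key=lambda q: finish[q] + block_costs[blk] / proc_speeds[q])
        assignment[blk] = p
        finish[p] += block_costs[blk] / proc_speeds[p]
    return assignment, max(finish)
```

A dynamic balancer would rerun such an assignment periodically as measured speeds and loads change, which is the "dynamic" aspect the abstract emphasizes.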
Integration of nanoscale memristor synapses in neuromorphic computing architectures
NASA Astrophysics Data System (ADS)
Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis
2013-09-01
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nanotechnologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the offerings of prevailing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue that this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
Interactive-predictive detection of handwritten text blocks
NASA Astrophysics Data System (ADS)
Ramos Terrades, O.; Serrano, N.; Gordó, A.; Valveny, E.; Juan, A.
2010-01-01
A method for text block detection is introduced for old handwritten documents. The proposed method takes advantage of sequential book structure, taking into account layout information from pages previously transcribed. This glance at the past is used to predict the position of text blocks in the current page with the help of conventional layout analysis methods. The method is integrated into the GIDOC prototype: a first attempt to provide integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. Results are given in a transcription task on a 764-page Spanish manuscript from 1891.
Hierarchical multiscale hyperporous block copolymer membranes via tunable dual-phase separation
Yoo, Seungmin; Kim, Jung-Hwan; Shin, Myoungsoo; Park, Hyungmin; Kim, Jeong-Hoon; Lee, Sang-Young; Park, Soojin
2015-01-01
The rational design and realization of revolutionary porous structures have been long-standing challenges in membrane science. We demonstrate a new class of amphiphilic polystyrene-block-poly(4-vinylpyridine) block copolymer (BCP)–based porous membranes featuring hierarchical multiscale hyperporous structures. The introduction of surface energy–modifying agents and the control of major phase separation parameters (such as nonsolvent polarity and solvent drying time) enable tunable dual-phase separation of BCPs, eventually leading to macro/nanoscale porous structures and chemical functionalities far beyond those accessible with conventional approaches. Application of this BCP membrane to a lithium-ion battery separator affords exceptional improvement in electrochemical performance. The dual-phase separation–driven macro/nanopore construction strategy, owing to its simplicity and tunability, is expected to be readily applicable to a rich variety of membrane fields including molecular separation, water purification, and energy-related devices. PMID:26601212
A Cost Effective Block Framing Scheme for Underwater Communication
Shin, Soo-Young; Park, Soo-Hyun
2011-01-01
In this paper, the Selective Multiple Acknowledgement (SMA) method, based on Multiple Acknowledgement (MA), is proposed to efficiently reduce the amount of data transmission by redesigning the transmission frame structure and taking into consideration underwater transmission characteristics. The method is suited to integrated underwater system models, as the proposed method can handle the same amount of data in a much more compact frame structure without any appreciable loss of reliability. Herein, the performance of the proposed SMA method was analyzed and compared to those of the conventional Automatic Repeat-reQuest (ARQ), Block Acknowledgement (BA), block response, and MA methods. The efficiency of the underwater sensor network, which forms a large cluster and mostly contains uplink data, is expected to be improved by the proposed method. PMID:22247689
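The SMA frame layout itself is not specified in the abstract; the core idea shared by block/selective acknowledgement schemes — one bit per frame in a window, so many frames are acknowledged in a single compact response — can be sketched as follows (names ours, not the paper's):

```python
def ack_bitmap(received_seqs, window_start, window_size):
    """Selective block acknowledgement: one bit per frame in the window,
    set if that sequence number was received."""
    return [1 if window_start + i in received_seqs else 0
            for i in range(window_size)]

def frames_to_resend(bitmap, window_start):
    """The sender retransmits exactly the frames whose bit is clear."""
    return [window_start + i for i, bit in enumerate(bitmap) if bit == 0]
```

Compared with per-frame ARQ responses, the bitmap amortizes acknowledgement overhead across the whole window, which matters on low-rate, high-latency underwater links.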
LEGO® Bricks as Building Blocks for Centimeter-Scale Biological Environments: The Case of Plants
Lind, Kara R.; Sizmur, Tom; Benomar, Saida; Miller, Anthony; Cademartiri, Ludovico
2014-01-01
LEGO bricks are commercially available interlocking pieces of plastic that are conventionally used as toys. We describe their use to build engineered environments for cm-scale biological systems, in particular plant roots. Specifically, we take advantage of the unique modularity of these building blocks to create inexpensive, transparent, reconfigurable, and highly scalable environments for plant growth in which structural obstacles and chemical gradients can be precisely engineered to mimic soil. PMID:24963716
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.
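The PROPELLER-specific operators are beyond the scope of the abstract; the following toy (our construction, not the authors' reconstruction code) shows why a block-diagonal dominant structure enables the rFOV splitting: when the system decouples into blocks, each sub-FOV can be solved independently and in parallel.

```python
import numpy as np

def solve_blockwise(diag_blocks, rhs_blocks):
    """Solve a block-diagonal linear system one block at a time; the
    sub-problems are independent, so each can run on its own worker."""
    return [np.linalg.solve(A, b) for A, b in zip(diag_blocks, rhs_blocks)]
```

Each small solve touches only its own block's data, which is what makes the per-block iterations cheap enough for a single-GPU desktop in the paper's setting.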
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-04-01
This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than the conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.
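The paper's predictor is a neural network operating on macro-blocks; the generic predict/residual principle underlying any such lossless scheme can be shown with a deliberately trivial left-neighbor predictor (a simplification of ours, not the paper's method):

```python
def predict_residual(row):
    """Left-neighbor predictive coding: keep the first sample, then store
    only the difference from the previous sample (the residual)."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def reconstruct(residuals):
    """Invert the prediction exactly, so the round trip is lossless."""
    out = [residuals[0]]
    for d in residuals[1:]:
        out.append(out[-1] + d)
    return out
```

A good predictor concentrates the residuals near zero, which is what lets the subsequent entropy-coding stage shrink the data without loss.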
GPU-Accelerated Voxelwise Hepatic Perfusion Quantification
Wang, H; Cao, Y
2012-01-01
Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation, while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations of different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with ones by the CPU using the simulated DCE data and the experimental DCE MR images from patients. The computation speed is improved by 30 times using a NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626400 voxels in a patient's liver, it takes 0.9 min with the GPU-accelerated voxelwise computation, compared to 110 min with the CPU, while both methods yield perfusion parameter differences of less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.
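The fast inversion algorithm itself is not given in the abstract; a related standard trick that fast Toeplitz methods build on — the O(n log n) Toeplitz matrix-vector product via circulant embedding and the FFT — can be sketched as follows (function name ours):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r, with
    r[0] == c[0]) by x in O(n log n): embed it in a 2n-point circulant,
    whose action diagonalizes under the FFT."""
    n = len(x)
    # first column of the circulant embedding: c, a free slot, reversed r[1:]
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

Replacing O(n^2) dense products with this kernel inside an iterative or recursive solver is the kind of structure exploitation that yields the time and storage gains the abstract reports.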
Shepherd, Emma; Stuart, Graham; Martin, Rob; Walsh, Mark A
2015-06-01
SelectSecure™ pacing leads (Medtronic Inc) are increasingly being used in pediatric patients and adults with structural congenital heart disease. The 4Fr lead is ideal for patients who may require lifelong pacing and can be advantageous for patients with complex anatomy. The purpose of this study was to compare the extraction of SelectSecure leads with conventional (stylet-driven) pacing leads in patients with structural congenital heart disease and congenital atrioventricular block. The data on lead extractions from pediatric and adult congenital heart disease (ACHD) patients from August 2004 to July 2014 at Bristol Royal Hospital for Children and the Bristol Heart Institute were reviewed. Multivariable regression analysis was used to determine whether conventional pacing leads were associated with a more difficult extraction process. A total of 57 patients underwent pacemaker lead extractions (22 SelectSecure, 35 conventional). No deaths occurred. Mean age at the time of extraction was 17.6 ± 10.5 years, mean weight was 47 ± 18 kg, and mean lead age was 5.6 ± 2.6 years (range 1-11 years). Complex extraction (partial extraction/femoral extraction) was more common in patients with conventional pacing leads at univariate (P < .01) and multivariate (P = .04) levels. Lead age was also a significant predictor of complex extraction (P < .01). SelectSecure leads can be successfully extracted using techniques that are used for conventional pacing leads. They are less likely to be partially extracted and are less likely to require extraction using a femoral approach compared with conventional pacing leads. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Shaping Crystal-Crystal Phase Transitions
NASA Astrophysics Data System (ADS)
Du, Xiyu; van Anders, Greg; Dshemuchadse, Julia; Glotzer, Sharon
Previous computational and experimental studies have shown self-assembled structure depends strongly on building block shape. New synthesis techniques have led to building blocks with reconfigurable shape and it has been demonstrated that building block reconfiguration can induce bulk structural reconfiguration. However, we do not understand systematically how this transition happens as a function of building block shape. Using a recently developed "digital alchemy" framework, we study the thermodynamics of shape-driven crystal-crystal transitions. We find examples of shape-driven bulk reconfiguration that are accompanied by first-order phase transitions, and bulk reconfiguration that occurs without any thermodynamic phase transition. Our results suggest that for well-chosen shapes and structures, there exist facile means of bulk reconfiguration, and that shape-driven bulk reconfiguration provides a viable mechanism for developing functional materials.
Narasimhulu, D M; Scharfman, L; Minkoff, H; George, B; Homel, P; Tyagaraj, K
2018-04-27
Injection of local anesthetic into the transversus abdominis plane (TAP block) decreases systemic morphine requirements after abdominal surgery. We compared intraoperative surgeon-administered TAP block (surgical TAP) to anesthesiologist-administered transcutaneous ultrasound-guided TAP block (conventional TAP) for post-cesarean analgesia. We hypothesized that surgical TAP blocks would take less time to perform than conventional TAP blocks. We performed a randomized trial, recruiting 41 women undergoing cesarean delivery under neuraxial anesthesia and assigning them to either surgical TAP block (n=20) or conventional TAP block (n=21). Time taken to perform the block was the primary outcome, while postoperative pain scores and 24-hour opioid requirements were secondary outcomes. Student's t-test was used to compare block times, and the Kruskal-Wallis test was used to compare opioid consumption and pain scores. Time taken to perform the block (2.4 vs 12.1 min, P < 0.001) and time spent in the operating room after delivery (55.3 vs 77.9 min, P < 0.001) were significantly less for surgical TAP. The 24-hour morphine consumption (P = 0.17) and postoperative pain scores at 4, 8, 24 and 48 h were not significantly different between the groups. Surgical TAP blocks are feasible and less time consuming than conventional TAP blocks, while providing comparable analgesia after cesarean delivery. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hybrid Grid Techniques for Propulsion Applications
NASA Technical Reports Server (NTRS)
Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.
1996-01-01
During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block and unstructured based procedures. Structured grids tend to be computationally efficient, and have the high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient, and it is difficult to resolve viscous layers. Recently, hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.; Coleman, R. G.
1980-01-01
The computer program documentation for the design and analysis of supersonic configurations is presented. Schematics and block diagrams of the major program structure, together with subroutine descriptions for each module are included.
Three-dimensional inversion for Network-Magnetotelluric data
NASA Astrophysics Data System (ADS)
Siripunvaraporn, W.; Uyeshima, M.; Egbert, G.
2004-09-01
Three-dimensional inversion of Network-Magnetotelluric (MT) data has been implemented. The program is based on a conventional 3-D MT inversion code (Siripunvaraporn et al., 2004), which is a data space variant of the OCCAM approach. In addition to modifications required for computing Network-MT responses and sensitivities, the program makes use of Message Passing Interface (MPI) software, allowing computations for each period to be run on separate CPU nodes. Here, we consider inversion of synthetic data generated from simple models consisting of a 1 Ω-m conductive block buried at varying depths in a 100 Ω-m background. We focus in particular on inversion of long period (320-40,960 seconds) data, because Network-MT data usually have high coherency in these period ranges. Even with only long period data the inversion recovers shallow and deep structures, as long as these are large enough to affect the data significantly. However, resolution of the inversion depends greatly on the geometry of the dipole network, the range of periods used, and the horizontal size of the conductive anomaly.
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
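A minimal sketch of the second encoding method, assuming a systematic block-circulant generator [I | P] in which each parity sub-block is a circulant defined by its first row; the helper names and the tiny example matrices are illustrative, not taken from the paper:

```python
import numpy as np

def circulant(first_row):
    """Dense circulant matrix over GF(2): row k is the first row cyclically shifted by k."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)], dtype=np.uint8)

def encode_systematic(msg_blocks, parity_first_rows):
    """Systematic encoding with a block-circulant generator [I | P]:
    parity block j is the GF(2) sum over i of msg_block_i times the
    circulant C_ij defined by parity_first_rows[i][j]."""
    b = len(parity_first_rows[0][0])
    parity = []
    for col in zip(*parity_first_rows):          # iterate over parity columns j
        acc = np.zeros(b, dtype=np.uint8)
        for m_i, first_row in zip(msg_blocks, col):
            acc ^= (m_i @ circulant(first_row)) % 2
        parity.append(acc)
    return np.concatenate(list(msg_blocks) + parity)

# Two 3-bit message blocks, one parity column.
msg = [np.array([1, 0, 1], dtype=np.uint8), np.array([0, 1, 1], dtype=np.uint8)]
codeword = encode_systematic(msg, [[[1, 1, 0]], [[1, 0, 1]]])
print(codeword)   # parity block is the XOR of two circulant products
```

Because every sub-block is circulant, a hardware encoder needs only shift registers and XORs, which is why such encoders map well onto a small FPGA.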
A new bite block for panoramic radiographs of anterior edentulous patients: A technical report.
Park, Jong-Woong; Symkhampha, Khanthaly; Huh, Kyung-Hoe; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul
2015-06-01
Panoramic radiographs taken using conventional chin-support devices have often presented problems with positioning accuracy and reproducibility. The aim of this report was to propose a new bite block for panoramic radiographs of anterior edentulous patients that better addresses these two issues. A new panoramic radiography bite block similar to the bite block for dentulous patients was developed to enable proper positioning stability for edentulous patients. The new bite block was designed and implemented in light of previous studies. The height of the new bite block was 18 mm and to compensate for the horizontal edentulous space, its horizontal width was 7 mm. The panoramic radiographs using the new bite block were compared with those using the conventional chin-support device. Panoramic radiographs taken with the new bite block showed better stability and bilateral symmetry than those taken with the conventional chin-support device. Patients also showed less movement and more stable positioning during panoramic radiography with the new bite block. Conventional errors in panoramic radiographs of edentulous patients could be caused by unreliability of the chin-support device. The newly proposed bite block for panoramic radiographs of edentulous patients showed better reliability. Further study is required to evaluate the image quality and reproducibility of images with the new bite block.
NASA Astrophysics Data System (ADS)
Stastnik, S.
2016-06-01
Development of materials for vertical outer building structures tends to application of hollow clay blocks filled with some appropriate insulation material. Ceramic fittings provide high thermal resistance, but the walls built from them suffer from condensation of air humidity in winter season frequently. The paper presents the computational simulation and experimental laboratory validation of moisture behaviour of such masonry with insulation prepared from waste fibres under the Central European climatic conditions.
Implementation of a block Lanczos algorithm for Eigenproblem solution of gyroscopic systems
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.; Lawson, Charles L.
1987-01-01
The details of implementation of a general numerical procedure developed for the accurate and economical computation of natural frequencies and associated modes of any elastic structure rotating about an arbitrary axis are described. A block version of the Lanczos algorithm is derived for the solution; it fully exploits the associated matrix sparsity and employs only real numbers in all relevant computations. It is also capable of determining multiple roots and proves to be most efficient when compared to other similar existing techniques.
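The flavor of a block Lanczos eigensolver can be sketched as follows; this toy version uses dense NumPy linear algebra and full reorthogonalization rather than the sparse, real-arithmetic machinery of the actual implementation, so it is an illustration of the idea only:

```python
import numpy as np

def block_lanczos(A, block_size=2, n_steps=4, seed=0):
    """Minimal block Lanczos sketch for a real symmetric matrix A.
    Builds an orthonormal block Krylov basis Q and returns the Ritz
    values (eigenvalues of the projected matrix Q^T A Q). Full
    reorthogonalization is used for numerical safety."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, block_size)))
    basis = [Q]
    for _ in range(n_steps - 1):
        W = A @ basis[-1]
        for V in basis:                  # orthogonalize against all earlier blocks
            W -= V @ (V.T @ W)
        Q, _ = np.linalg.qr(W)
        basis.append(Q)
    Qfull = np.hstack(basis)
    T = Qfull.T @ A @ Qfull              # projected (block tridiagonal) matrix
    return np.linalg.eigvalsh(T)
```

A block start vector lets the iteration resolve multiple (repeated) eigenvalues, which a single-vector Lanczos run cannot separate; this is the property the abstract highlights.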
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Plane Smoothers for Multiblock Grids: Computational Aspects
NASA Technical Reports Server (NTRS)
Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane
1999-01-01
Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches for obtaining robust methods is the combination of standard coarsening with alternating-direction plane relaxation in three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
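A single block's line relaxation in the strong-coupling direction can be sketched for the model anisotropic problem -eps*u_xx - u_yy = f on a uniform grid; the discretization and helper names are illustrative, not taken from the report:

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system by the Thomas algorithm (n >= 2)."""
    n = len(diag)
    c = np.zeros(n - 1)
    d = np.zeros(n)
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for k in range(1, n):
        m = diag[k] - lower[k - 1] * c[k - 1]
        if k < n - 1:
            c[k] = upper[k] / m
        d[k] = (rhs[k] - lower[k - 1] * d[k - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for k in range(n - 2, -1, -1):
        x[k] = d[k] - c[k] * x[k + 1]
    return x

def y_line_gauss_seidel(u, f, eps, h, sweeps=1):
    """Line relaxation in the strong y-direction for -eps*u_xx - u_yy = f
    on one block: each interior x-column is solved exactly by a
    tridiagonal solve, sweeping Gauss-Seidel over columns. Boundary
    values of u are held fixed."""
    ny, nx = u.shape
    main = (2 * eps + 2) / h**2 * np.ones(ny - 2)
    off = -np.ones(ny - 3) / h**2
    for _ in range(sweeps):
        for i in range(1, nx - 1):
            rhs = f[1:-1, i] + eps / h**2 * (u[1:-1, i - 1] + u[1:-1, i + 1])
            rhs[0] += u[0, i] / h**2     # fixed boundary contributions in y
            rhs[-1] += u[-1, i] / h**2
            u[1:-1, i] = thomas(off, main, off, rhs)
    return u
```

On a multiblock grid, each block would run this smoother on its own portion of the domain, exchanging a small overlap of boundary cells with its neighbors between sweeps.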
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, He-Lou; Li, Xiao; Ren, Jiaxing
Controlling the macroscopic orientation of nanoscale periodic structures of amphiphilic liquid crystalline block copolymers (LC BCPs) is important to a variety of technical applications (e.g., lithium conducting polymer electrolytes). To study LC BCP domain orientation, a series of LC BCPs containing a poly(ethylene oxide) (PEO) block as a conventional hydrophilic coil block and LC blocks containing azobenzene mesogens is designed and synthesized. LC ordering in thin films of the BCP leads to the formation of highly ordered, microphase-separated nanostructures, with hexagonally arranged PEO cylinders. Substitution on the tail of the azobenzene mesogen is shown to control the orientation of the PEO cylinders. When the substitution on the mesogenic tails is an alkyl chain, the PEO cylinders have a perpendicular orientation to the substrate surface, provided the thin film is above a critical thickness value. In contrast, when the substitution on the mesogenic tails has an ether group, the PEO cylinders assemble parallel to the substrate surface regardless of the film thickness.
LUMIS Interactive graphics operating instructions and system specifications
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Yu, T. C.; Landini, A. J.
1976-01-01
The LUMIS program has designed an integrated geographic information system to assist program managers and planning groups in metropolitan regions. The system described can interactively interrogate a data base, graphically display a portion of the region enclosed in the data base, and perform cross tabulations of variables within each city block, block group, or census tract. The system is designed to interface with U.S. Census DIME file technology, but can accept alternative districting conventions. The system is described on three levels: (1) an introduction to the system's concept and potential applications; (2) the method of operating the system on an interactive terminal; and (3) a detailed system specification for computer facility personnel.
Possibilities for LWIR detectors using MBE-grown Si(/Si(1-x)Ge(x) structures
NASA Technical Reports Server (NTRS)
Hauenstein, Robert J.; Miles, Richard H.; Young, Mary H.
1990-01-01
Traditionally, long wavelength infrared (LWIR) detection in Si-based structures has involved either extrinsic Si or Si/metal Schottky barrier devices. Molecular beam epitaxy (MBE) grown Si and Si/Si(1-x)Ge(x) heterostructures offer new possibilities for LWIR detection, including sensors based on intersubband transitions as well as improved conventional devices. The improvement in doping profile control of MBE compared with conventional chemical vapor deposited (CVD) Si films has resulted in the successful growth of extrinsic Si:Ga blocked impurity-band conduction detectors. These structures exhibit a highly abrupt step change in dopant profile between detecting and blocking layers which is extremely difficult or impossible to achieve through conventional epitaxial growth techniques. By alloying Si with Ge, Schottky barrier infrared detectors are possible with barrier heights between those of pure Si or Ge semiconducting materials. For both n-type and p-type structures, strain effects can split the band edges, thereby splitting the Schottky threshold and altering the spectral response. Measurements of the photoresponse of n-type Au/Si(1-x)Ge(x) Schottky barriers demonstrate this effect. For intersubband multiquantum well (MQW) LWIR detection, Si(1-x)Ge(x)/Si detectors grown on Si substrates promise absorption coefficients comparable to those of the Ga(Al)As system while offering the fundamental advantage of response to normally incident light as well as the practical advantage of Si compatibility. The researchers grew Si(1-x)Ge(x)/Si MQW structures aimed at sensitivity to IR in the 8 to 12 micron region and longer, guided by recent theoretical work. Preliminary measurements of n- and p-type Si(1-x)Ge(x)/Si MQW structures are given.
Arafat, Basel; Wojsz, Magdalena; Isreb, Abdullah; Forbes, Robert T; Isreb, Mohammad; Ahmed, Waqar; Arafat, Tawfiq; Alhnan, Mohamed A
2018-06-15
Fused deposition modelling (FDM) 3D printing has shown the most immediate potential for on-demand dose personalisation to suit a particular patient's needs. However, FDM 3D printing often involves employing a relatively large-molecular-weight thermoplastic polymer and results in an extended release pattern. It is therefore essential to fast-track drug release from the 3D printed objects. This work employed an innovative design approach of tablets with unique built-in gaps (Gaplets) with the aim of accelerating drug release. The novel tablet design is composed of 9 repeating units (blocks) connected with 3 bridges to allow the generation of 8 gaps. The impact of the size of the blocks, the number of bridges and the spacing between different blocks was investigated. Increasing the inter-block space reduced the mechanical resistance of the unit; however, the tablets continued to meet pharmacopeial standards for friability. Upon introduction into gastric medium, the gaplet with 1 mm spaces broke into mini-structures within 4 min and met the USP criteria for immediate release products (86.7% drug release at 30 min). Real-time ultraviolet (UV) imaging indicated that the cellulosic matrix expanded due to swelling of hydroxypropyl cellulose (HPC) upon introduction to the dissolution medium. This was followed by a steady erosion of the polymeric matrix at a rate of 8 μm/min. The design approach was more efficient than a conventional comparison formulation approach of adding disintegrants to accelerate tablet disintegration and drug release. This work provides a novel example where computer-aided design was instrumental in modifying the performance of solid dosage forms. It may serve as the foundation for a new generation of dosage forms with complicated geometric structures that achieve functionality usually attained through a sophisticated formulation approach. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Spekreijse, S. P.; Boerstoel, J. W.; Vitagliano, P. L.; Kuyvenhoven, J. L.
1992-01-01
About five years ago, a joint development was started of a flow simulation system for engine-airframe integration studies on propeller as well as jet aircraft. The initial system was based on the Euler equations and made operational for industrial aerodynamic design work. The system consists of three major components: a domain modeller, for the graphical interactive subdivision of flow domains into an unstructured collection of blocks; a grid generator, for the graphical interactive computation of structured grids in blocks; and a flow solver, for the computation of flows on multi-block grids. The industrial partners of the collaboration and NLR have demonstrated that the domain modeller, grid generator and flow solver can be applied to simulate Euler flows around complete aircraft, including propulsion system simulation. Extension to Navier-Stokes flows is in progress. Delft Hydraulics has shown that both the domain modeller and grid generator can also be applied successfully for hydrodynamic configurations. An overview is given about the main aspects of both domain modelling and grid generation.
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
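The core of a vector quantization encoder with an input-dependent weighted distortion can be sketched as follows; the weighting interface is an illustrative assumption, not the thesis's exact formulation:

```python
import numpy as np

def vq_encode(blocks, codebook, weights=None):
    """Encode each image block as the index of the nearest codeword.
    `weights` optionally supplies an input-dependent weighted squared
    distortion; None falls back to plain squared error."""
    indices = []
    for x in blocks:
        w = np.ones_like(x) if weights is None else weights(x)
        d = ((codebook - x) ** 2 * w).sum(axis=1)   # distortion to every codeword
        indices.append(int(np.argmin(d)))
    return indices

# Toy 2-D codebook; real image coding would use, e.g., 4x4 pixel blocks.
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
print(vq_encode([np.array([1.0, 1.0]), np.array([9.0, 8.0])], codebook))
```

The exhaustive nearest-neighbor search shown here is exactly the computational bottleneck the thesis targets; its neural network designs replace this loop with parallel hardware.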
On the use of distributed sensing in control of large flexible spacecraft
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Ghosh, Dave
1990-01-01
Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The required computation of the distributed operators given the conventional Kalman filter and regulator is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.
Markov prior-based block-matching algorithm for superdimension reconstruction of porous media
NASA Astrophysics Data System (ADS)
Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong
2018-04-01
A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regular items are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three different perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
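The final block-selection rule, maximizing the product of the three directional probabilities, can be sketched as follows; the function name and the probability-table format are illustrative assumptions, and the tables themselves would come from the learned dictionary:

```python
def select_block(candidates, p_x, p_y, p_z):
    """Pick the dictionary block whose product of conditional probabilities
    with the already-reconstructed neighbors in the x, y and z directions
    is maximal, as the Markov prior-based matching rule prescribes."""
    best, best_score = None, -1.0
    for block in candidates:
        score = p_x.get(block, 0.0) * p_y.get(block, 0.0) * p_z.get(block, 0.0)
        if score > best_score:
            best, best_score = block, score
    return best
```

In the full algorithm the candidate set would first be narrowed by neighborhood matching against the 2-D training image, with hard data pinning known voxels.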
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
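A minimal sketch of the refinement flag such an algorithm needs, using the local gradient as a cheap stand-in for the modal-density criterion; the gradient test and all names are illustrative, not the paper's actual mesh potential:

```python
import numpy as np

def flag_for_refinement(values, dx, grad_threshold):
    """Flag zones of a 1-D mesh for refinement where the local gradient
    (a proxy for high-k spectral content) exceeds a threshold; zones
    below it are candidates for coarsening."""
    grad = np.abs(np.gradient(values, dx))
    return grad > grad_threshold

# A sharp front: only zones near x = 0.5 should be flagged.
x = np.linspace(0.0, 1.0, 101)
flags = flag_for_refinement(np.tanh((x - 0.5) / 0.05), dx=0.01, grad_threshold=5.0)
print(flags.sum(), "zones flagged")
```

Applied cyclically, such a flag lets the mesh follow a moving front, refining ahead of it and coarsening behind it.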
Blocking the buccal nerve using two methods of inferior alveolar block injection.
Aker, F D
2001-01-01
The anatomic relations of the buccal nerve branch of the mandibular division of the trigeminal nerve were studied to explain the discrepancy in blocking the buccal nerve between two methods of blocking the inferior alveolar nerve, the conventional method and the Gow-Gates method. The conventional method rarely blocks the buccal nerve, while the Gow-Gates method is reported to consistently block it. Eight head and mandibular specimens were dissected to observe the path of the buccal nerve and its relationship to the path of the needles in the conventional and Gow-Gates techniques. The buccal nerve descends on the medial and then anterior aspect of the deep head of the temporalis muscle (Tdh). At the latter position the buccal nerve enters the retromolar fossa and is encased in a fascial sleeve created by a dense fascial band that spans between the temporalis muscle tendons and the buccinator muscle. At the level of the conventional block injection, the buccal nerve was shielded from the path of the needle by the Tdh and the fascial band. In the Gow-Gates block injection, the buccal nerve was exposed on the medial surface of the Tdh, immediately lateral to the path of the needle and proximal to the fascial sleeve. Consequently, the anatomical relations of the buccal nerve in the conventional method essentially shield the nerve from being bathed by anesthetic solution, while in the Gow-Gates method the relations are such that the buccal nerve can be exposed to anesthetic solution and thus blocked, explaining the findings in clinical dentistry. Copyright Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Yan, Lei; Niu, H. J.; Rosseinsky, M. J.
2011-03-01
The (AO)(ABO3)n Ruddlesden-Popper structure is an archetypal complex oxide consisting of two distinct structural units, an (AO) rock salt layer separating an n-octahedra-thick perovskite block. Conventional high-temperature oxide synthesis methods cannot access members with n > 3, but low-temperature layer-by-layer thin film methods allow the preparation of materials with thicker perovskite blocks, exploiting high surface mobility and lattice matching with the substrate. This presentation describes the growth of n = 6 member CaO/(ABO3)n (ABO3: CaMnO3, La0.67Ca0.33MnO3 or Ca0.85Sm0.15MnO3) epitaxial single crystal films on (001) SrTiO3 substrates by pulsed laser deposition with the assistance of reflection high energy electron diffraction (RHEED).
An electrostatic Particle-In-Cell code on multi-block structured meshes
NASA Astrophysics Data System (ADS)
Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David
2017-12-01
We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
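The hybrid mover's core idea, advancing a logical-space position with a physical-space velocity through the local metric, can be sketched in one dimension; the mapping and all names are illustrative assumptions, not CPIC's actual interfaces:

```python
def push_logical(xi, v, dt, dXdxi):
    """Hybrid mover sketch: the particle position is stored in logical
    space (xi in [0, 1] on one block), the velocity in physical space.
    The logical position advances by v*dt divided by the local metric
    dX/dxi (explicit Euler step of d(xi)/dt = v / (dX/dxi))."""
    return xi + v * dt / dXdxi(xi)

# Example block mapping X(xi) = xi**2 + xi (a stretched mesh), metric 2*xi + 1.
dXdxi = lambda xi: 2.0 * xi + 1.0
xi_new = push_logical(0.5, v=1.0, dt=0.1, dXdxi=dXdxi)
print(xi_new)
```

When `xi_new` leaves [0, 1], the particle has crossed into a neighboring block; the asynchronous mover defers and batches such hand-offs to limit the performance cost.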
Morphological studies on block copolymer modified PA 6 blends
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poindl, M., E-mail: marcus.poindl@ikt.uni-stuttgart.de, E-mail: christian.bonten@ikt.uni-stuttgart.de; Bonten, C., E-mail: marcus.poindl@ikt.uni-stuttgart.de, E-mail: christian.bonten@ikt.uni-stuttgart.de
Recent studies show that compounding polyamide 6 (PA 6) with PA 6 polyether block copolymers, made by reaction injection molding (RIM) or by continuous anionic polymerization in a reactive extrusion process (REX), results in blends with high impact strength and high stiffness compared to conventional rubber blends. In this paper, different high-impact PA 6 blends were prepared using a twin screw extruder. The impact modifiers were an ethylene propylene copolymer, a PA 6 polyether block copolymer made by reaction injection molding, and one made by reactive extrusion. To ensure good particle-matrix bonding, the ethylene propylene copolymer was grafted with maleic anhydride (EPR-g-MA). Due to the molecular structure of the two block copolymers, a coupling agent was not necessary. The block copolymers are semi-crystalline and partially cross-linked, in contrast to commonly used amorphous rubbers, which are usually uncured. The combination of different analysis methods such as atomic force microscopy (AFM), transmission electron microscopy (TEM) and scanning electron microscopy (SEM) gave a detailed view of the structure of the blends. Due to the partial cross-linking, the particles of the block copolymers in the blends are not spherical like those of the ethylene propylene copolymer. The differences in molecular structure, miscibility and grafting of the impact modifiers result in different mechanical properties and different blend morphologies.
Yang, Jieping; Liu, Wei; Gao, Qinghong
2013-08-01
To evaluate the anesthetic effects and safety of the Gow-Gates technique of inferior alveolar nerve block in impacted mandibular third molar extraction, a split-mouth study was designed. The bilateral impacted mandibular third molars of 32 participants were randomly assigned to the Gow-Gates technique of inferior alveolar nerve block (Gow-Gates group) or the conventional technique (conventional group), and the third molars were extracted. The anesthetic effects and adverse events were recorded. All the participants completed the study. The anesthetic success rate was 96.9% in the Gow-Gates group and 90.6% in the conventional group, with no statistical difference (P = 0.317); however, when comparing anesthesia grades, the Gow-Gates group had a rate of 96.9% for grades A and B, versus 78.1% in the conventional group (P = 0.034). The Gow-Gates group also had much less bleeding on needle withdrawal than the conventional group (P = 0.025). Neither group had hematoma. The Gow-Gates technique had reliable anesthetic effects and safety in impacted mandibular third molar extraction and could be chosen as an alternative to the conventional inferior alveolar nerve block.
Structure and Dynamics of Ionic Block co-Polymer Melts: A Computational Study
NASA Astrophysics Data System (ADS)
Aryal, Dipak; Perahia, Dvora; Grest, Gary S.
Tethering ionomer blocks into co-polymers enables the engineering of polymeric systems designed to support transport while controlling structure. Here the structure and dynamics of symmetric pentablock copolymer melts are probed by fully atomistic molecular dynamics simulations. The center block consists of randomly sulfonated polystyrene with sulfonation fractions f = 0 to 0.55, tethered to hydrogenated polyisoprene (PI) and end-capped with poly(t-butyl styrene). We find that melts with f = 0.15 and 0.30 consist of isolated ionic clusters, whereas melts with f = 0.55 exhibit a long-range percolating ionic network. As in polystyrene sulfonate, a small number of ionic clusters slows the mobility of the center of mass of the co-polymer; however, formation of the ionic clusters is slower and they are often intertwined with PI segments. Surprisingly, the segmental dynamics of the other blocks are also affected. NSF DMR-1611136; NERSC; Palmetto Cluster, Clemson University; Kraton Polymers US, LLC.
NASA Astrophysics Data System (ADS)
Herrick, Gregory Paul
The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows.
While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have raised the question, "What about the clearance flows?" This research begins to address that question.
Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase
NASA Astrophysics Data System (ADS)
Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten
2016-04-01
Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.
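The decompositions above treat the EEG as a third-order array. As a hedged toy illustration of the canonical polyadic idea only (not the authors' pipeline; the tensor and all names here are invented), a rank-1 CP fit of a small tensor can be computed by alternating least squares:

```python
import numpy as np

def rank1_cp(T, iters=50):
    """Toy rank-1 canonical polyadic fit of a 3-way array via
    alternating least squares (ALS): each factor is updated in turn
    while the other two are held fixed."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    b, c = rng.standard_normal(J), rng.standard_normal(K)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# an exactly rank-1 tensor: outer product of three vectors
a0, b0, c0 = np.array([1., 2.]), np.array([3., 4., 5.]), np.array([1., -1.])
T = np.einsum('i,j,k->ijk', a0, b0, c0)
a, b, c = rank1_cp(T)
approx = np.einsum('i,j,k->ijk', a, b, c)
print(np.allclose(approx, T, atol=1e-6))  # True
```

For an exactly rank-1 tensor a single ALS sweep already recovers the factors up to scaling; real BCI tensors require higher ranks and block terms, for which a dedicated tensor library is the practical choice.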
Magazzeni, Philippe; Jochum, Denis; Iohom, Gabriella; Mekler, Gérard; Albuisson, Eliane; Bouaziz, Hervé
2018-06-13
For superficial surgery of anteromedial and posteromedial surfaces of the upper arm, the medial brachial cutaneous nerve (MBCN) and the intercostobrachial nerve (ICBN) must be selectively blocked, in addition to an axillary brachial plexus block. We compared efficacy of ultrasound-guided (USG) versus conventional block of the MBCN and the ICBN. Eighty-four patients, undergoing upper limb surgery, were randomized to receive either USG (n = 42) or conventional (n = 42) block of the MBCN and the ICBN with 1% mepivacaine. Sensory block was evaluated using light-touch on the upper and lower half of the anteromedial and posteromedial surfaces of the upper arm at 5, 10, 15, 20 minutes after nerve blocks. The primary outcome was the proportion of patients who had no sensation in all 4 regions innervated by the MBCN and the ICBN at 20 minutes. Secondary outcomes were onset time of complete anesthesia, volume of local anesthetic, tourniquet tolerance, and quality of ultrasound images. In the USG group, 37 patients (88%) had no sensation at 20 minutes in any of the 4 areas tested versus 8 patients (19%) in the conventional group (P < 0.001). When complete anesthesia was obtained, it occurred within 10 minutes in more than 90% of patients, in both groups. Mean total volumes of local anesthetic used for blocking the MBCN and the ICBN were similar in the 2 groups. Ultrasound images were of good quality in only 20 (47.6%) of 42 patients. Forty-one patients (97.6%) who received USG block were comfortable with the tourniquet versus 16 patients (38.1%) in the conventional group (P < 0.001). Ultrasound guidance improved the efficacy of the MBCN and ICBN blocks. This study was registered at ClinicalTrials.gov, identifier NCT02940847.
NASA Technical Reports Server (NTRS)
Cannizzaro, Frank E.; Ash, Robert L.
1992-01-01
A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full approximation multi-grid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.
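The full RUMM solver is far richer (flux splitting, multigrid acceleration, multi-block grids), but two of its ingredients, upwind differencing and a modified multistage Runge-Kutta time integration, can be sketched on a toy problem. This is a minimal illustration under stated assumptions (first-order upwind flux, linear advection, illustrative stage coefficients), not the code described in the abstract:

```python
import numpy as np

def residual(u, a, dx):
    # first-order upwind flux for linear advection u_t + a u_x = 0, a > 0,
    # on a periodic 1-D grid
    return -a * (u - np.roll(u, 1)) / dx

def advect(u, a, dx, dt, steps, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    # Jameson-style modified multistage Runge-Kutta:
    # u^(k) = u^n + alpha_k * dt * R(u^(k-1)), last stage with alpha = 1
    for _ in range(steps):
        u0 = u.copy()
        for ak in alphas:
            u = u0 + ak * dt * residual(u, a, dx)
    return u

n, a = 100, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
u = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian pulse
dt = 0.5 * dx / a                      # CFL number 0.5
u_new = advect(u.copy(), a, dx, dt, steps=20)
print(abs(u_new.sum() - u.sum()) < 1e-10)  # True: flux differencing conserves the sum
```

Because the residual is a difference of fluxes on a periodic grid, it sums to zero exactly, so every stage preserves the discrete total of u to roundoff.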
Graded porous inorganic materials derived from self-assembled block copolymer templates.
Gu, Yibei; Werner, Jörg G; Dorin, Rachel M; Robbins, Spencer W; Wiesner, Ulrich
2015-03-19
Graded porous inorganic materials directed by macromolecular self-assembly are expected to offer unique structural platforms relative to conventional porous inorganic materials. Their preparation to date remains a challenge, however, based on the sparsity of viable synthetic self-assembly pathways to control structural asymmetry. Here we demonstrate the fabrication of graded porous carbon, metal, and metal oxide film structures from self-assembled block copolymer templates by using various backfilling techniques in combination with thermal treatments for template removal and chemical transformations. The asymmetric inorganic structures display mesopores in the film top layers and a gradual pore size increase along the film normal in the macroporous sponge-like support structure. Substructure walls between macropores are themselves mesoporous, constituting a structural hierarchy in addition to the pore gradation. Final graded structures can be tailored by tuning casting conditions of self-assembled templates as well as the backfilling processes. We expect that these graded porous inorganic materials may find use in applications including separation, catalysis, biomedical implants, and energy conversion and storage.
Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2013-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.
Development of Alkali Activated Geopolymer Masonry Blocks
NASA Astrophysics Data System (ADS)
Venugopal, K.; Radhakrishna; Sasalatti, Vinod
2016-09-01
Cement masonry units are not considered sustainable, since their production involves consumption of fuel, cement and natural resources; it is therefore essential to find alternatives. This paper reports on the making of geopolymer solid and hollow blocks and masonry prisms using non-conventional materials such as fly ash, ground granulated blast furnace slag (GGBFS) and manufactured sand, cured at ambient temperature. They were tested for water absorption, initial rate of water absorption, dry density, dimensionality, compressive strength, flexural strength, bond strength (with and without lateral confinement), modulus of elasticity, resistance to alternate wetting and drying, and masonry efficiency. The properties of the geopolymer blocks were found to be superior to those of traditional masonry blocks, and masonry efficiency was found to increase with decreasing thickness of the cement mortar joints. There was a marginal difference in strength between rendered and unrendered geopolymer masonry blocks. The percentage weight gain after 7 cycles was less than 6%, and the percentage reductions in strength of geopolymer solid blocks and hollow blocks were 26% and 28%, respectively. Since the properties of geopolymer blocks are comparatively better than those of traditional masonry, they can be strongly recommended for structural masonry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de; Zaragoza, Francisco J.; Sempau, Josep
Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. Radiotherapy planning and optimization require accurate calculation of the dose in these anatomical structures, which are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: Monte Carlo simulations of a Varian Clinac 2100 C/D and a human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose-volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulated results allow choosing the most adequate treatment setup configuration, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and a central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.
Kulhánek, Tomáš; Ježek, Filip; Mateják, Marek; Šilar, Jan; Kofránek, Jiří
2015-08-01
This work introduces experiences of teaching modeling and simulation to graduate students in the field of biomedical engineering. We emphasize the acausal, object-oriented modeling technique and have moved from teaching the block-oriented tool MATLAB Simulink to the acausal, object-oriented Modelica language, which can express the structure of the system rather than a process of computation. However, a block-oriented approach is also possible in Modelica, and students have a tendency to express the process of computation. Using exemplar acausal domains and approaches allows students to understand the modeled problems much more deeply. The causality of the computation is derived automatically by the simulation tool.
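The acausal-versus-block-oriented distinction can be sketched outside Modelica as well. The following Python/SymPy fragment is illustrative only (it is not course material from the abstract): it states Ohm's law as an undirected equation and lets the solver derive the computation order, alongside a block-oriented version whose input-output direction is fixed by the author:

```python
import sympy as sp

# Acausal style: state the physics as an equation with no implied
# computation order; the tool derives the causality.
v1, v2, i, R = sp.symbols('v1 v2 i R')
eq = sp.Eq(v1 - v2, R * i)        # Ohm's law as a relation, not an assignment
i_expr = sp.solve(eq, i)[0]       # the solver isolates i = (v1 - v2)/R
print(i_expr)                      # (v1 - v2)/R

def resistor_block(v1, v2, R):
    # Block-oriented style: the current is always the output, by construction.
    return (v1 - v2) / R
```

The acausal form could just as easily be solved for v1 given i, which is exactly the flexibility the block-oriented function lacks.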
Zhang, Xinghao; Qiu, Xiongying; Kong, Debin; Zhou, Lu; Li, Zihao; Li, Xianglong; Zhi, Linjie
2017-07-25
Nanostructuring is a transformative way to improve the structure stability of high capacity silicon for lithium batteries. Yet, the interface instability issue remains and even propagates in the existing nanostructured silicon building blocks. Here we demonstrate an intrinsically dual stabilized silicon building block, namely silicene flowers, to simultaneously address the structure and interface stability issues. These original Si building blocks as lithium battery anodes exhibit extraordinary combined performance including high gravimetric capacity (2000 mAh g⁻¹ at 800 mA g⁻¹), high volumetric capacity (1799 mAh cm⁻³), remarkable rate capability (950 mAh g⁻¹ at 8 A g⁻¹), and excellent cycling stability (1100 mAh g⁻¹ at 2000 mA g⁻¹ over 600 cycles). Paired with a conventional cathode, the fabricated full cells deliver extraordinarily high specific energy and energy density (543 Wh kg⁻¹ and 1257 Wh L⁻¹, respectively, based on the cathode and anode), which are 152% and 239% of their commercial counterparts using graphite anodes. Coupled with a simple, cost-effective, scalable synthesis approach, this silicon building block offers a horizon for the development of high-performance batteries.
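As a quick arithmetic check of the quoted ratios (a sketch using only the numbers reported in the abstract), dividing the full-cell values by 1.52 and 2.39 gives the implied graphite-anode baselines:

```python
# values quoted in the abstract (cathode + anode basis)
specific_energy, energy_density = 543.0, 1257.0   # Wh/kg, Wh/L
ratio_grav, ratio_vol = 1.52, 2.39                # 152% and 239% of commercial cells

# implied commercial (graphite-anode) baselines
print(round(specific_energy / ratio_grav, 1))  # 357.2 Wh/kg
print(round(energy_density / ratio_vol, 1))    # 525.9 Wh/L
```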
NASA Astrophysics Data System (ADS)
Cho, Doohyung; Sim, Seulgi; Park, Kunsik; Won, Jongil; Kim, Sanggi; Kim, Kwangsoo
2015-12-01
In this paper, a 4H-SiC trench MOS barrier Schottky (TMBS) rectifier with an enhanced sidewall layer (ESL) is proposed. The proposed structure has a high doping concentration at the trench sidewall, which improves both the reverse blocking and forward characteristics. The ESL-TMBS rectifier has a 7.4% lower forward voltage drop and a 24% higher breakdown voltage than a conventional TMBS rectifier. However, it also has a reverse leakage current approximately three times higher, owing to the reduction in energy barrier height. This problem is solved when the ESL is used only partially, which yields a reverse leakage current comparable to that of a conventional TMBS rectifier. Thus, the forward voltage drop and breakdown voltage improve without any loss in static and dynamic characteristics compared with a conventional TMBS rectifier.
A new topological structure for the Langevin-type ultrasonic transducer.
Lu, Xiaolong; Hu, Junhui; Peng, Hanmin; Wang, Yuan
2017-03-01
In this paper, a new topological structure for the Langevin-type ultrasonic transducer is proposed and investigated. The two cylindrical terminal blocks are conically shaped with four supporting plates each, and two cooling fins are disposed at the bottom of terminal blocks, adjacent to the piezoelectric rings. Experimental results show that it has larger vibration velocity, lower temperature rise and higher electroacoustic energy efficiency than the conventional Langevin transducer. The reasons for the phenomena can be well explained by the change of mass, heat dissipation surface and force factor of the transducer. The proposed design may effectively improve the performance of ultrasonic transducers, in terms of the working effect, energy consumption and working life. Copyright © 2016 Elsevier B.V. All rights reserved.
A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices.
Brusco, Michael; Steinley, Douglas
2011-10-01
Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where the goal is to identify partitions for the row and column objects such that the clusters of the row and column objects form blocks that are either complete (all 1s) or null (all 0s) to the greatest extent possible. Multiple restarts of an object relocation heuristic that seeks to minimize the number of inconsistencies (i.e., 1s in null blocks and 0s in complete blocks) with ideal block structure is the predominant approach for tackling this problem. As an alternative, we propose a fast and effective implementation of tabu search. Computational comparisons across a set of 48 large network matrices revealed that the new tabu-search heuristic always provided objective function values that were better than those of the relocation heuristic when the two methods were constrained to the same amount of computation time.
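The objective function being minimized can be stated compactly. This hedged Python fragment (not the authors' code; all names are illustrative) counts inconsistencies by letting each block take whichever ideal type, complete or null, costs less:

```python
import numpy as np

def blockmodel_inconsistencies(A, row_labels, col_labels):
    """Count inconsistencies (1s in null blocks plus 0s in complete
    blocks) for a two-mode binary matrix A under the given row and
    column partitions. Each block is assigned the ideal type that
    minimizes its own inconsistency count."""
    total = 0
    for r in np.unique(row_labels):
        for c in np.unique(col_labels):
            block = A[np.ix_(row_labels == r, col_labels == c)]
            ones = int(block.sum())
            zeros = block.size - ones
            # a null block costs `ones`; a complete block costs `zeros`
            total += min(ones, zeros)
    return total

# toy 4x4 matrix with a clean 2x2 block structure
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
rows = np.array([0, 0, 1, 1])
cols = np.array([0, 0, 1, 1])
print(blockmodel_inconsistencies(A, rows, cols))  # 0 for the ideal partition
```

A relocation or tabu-search heuristic would repeatedly move objects between clusters and keep moves that lower this count, with the tabu list barring recently reversed moves.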
PRSEUS Pressure Cube Test Data and Response
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2013-01-01
NASA's Environmentally Responsible Aviation (ERA) Program is examining the hybrid wing body (HWB) aircraft, among others, in an effort to increase the fuel efficiency of commercial aircraft. The HWB design combines features of a flying wing with features of conventional transport aircraft, and has the advantage of simultaneously increasing both fuel efficiency and payload. Recent years have seen an increased focus on the structural performance of the HWB. The key structural challenge of a HWB airframe is the ability to create a cost- and weight-efficient, non-circular, pressurized shell. Conventional round fuselage sections react cabin pressure by hoop tension. However, the structural configuration of the HWB subjects the majority of the structural panels to bi-axial, in-plane loads in addition to the internal cabin pressure, which requires more thorough examination and analysis than conventional transport aircraft components having traditional and less complex load paths. To address this issue, while keeping structural weights low, extensive use of advanced composite materials is made. This report presents the test data and preliminary conclusions for a pressurized cube test article that utilizes Boeing's Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS), and which is part of the building block approach used for HWB development.
A design philosophy for multi-layer neural networks with applications to robot control
NASA Technical Reports Server (NTRS)
Vadiee, Nader; Jamshidi, MO
1989-01-01
A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates faulty sensory information. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of a pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictive behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mappings from a multitude of input excitatory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of special-purpose neuro-computer hardware for spatio-temporal pattern recognition in such areas as air defense systems (e.g., target tracking and recognition). Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.
Utilization of a CRT display light pen in the design of feedback control systems
NASA Technical Reports Server (NTRS)
Thompson, J. G.; Young, K. R.
1972-01-01
A hierarchical structure of the interlinked programs was developed to provide a flexible computer-aided design tool. A graphical input technique and a data structure are considered which provide the capability of entering the control system model description into the computer in block diagram form. An information storage and retrieval system was developed to keep track of the system description, and analysis and simulation results, and to provide them to the correct routines for further manipulation or display. Error analysis and diagnostic capabilities are discussed, and a technique was developed to reduce a transfer function to a set of nested integrals suitable for digital simulation. A general, automated block diagram reduction procedure was set up to prepare the system description for the analysis routines.
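Reducing a transfer function to a set of nested integrals is essentially realizing it as chained integrators (a controllable canonical form). As an illustrative sketch under that interpretation (not the original routines; names and coefficients are invented), a second-order transfer function can be simulated with two nested integrators and forward-Euler stepping:

```python
import numpy as np

def simulate_tf(b0, a1, a0, u, dt):
    """Simulate y(s)/u(s) = b0 / (s^2 + a1*s + a0) as two nested
    integrators: x1 = y and x2 = y', stepped with forward Euler."""
    x1 = x2 = 0.0
    out = []
    for uk in u:
        dx1 = x2                            # outer integrator: y' feeds y
        dx2 = b0 * uk - a1 * x2 - a0 * x1   # inner integrator: y'' from the ODE
        x1 += dt * dx1
        x2 += dt * dx2
        out.append(x1)
    return np.array(out)

# step response of b0/(s^2 + a1*s + a0) settles to the DC gain b0/a0
dt, T = 1e-3, 20.0
u = np.ones(int(T / dt))
y = simulate_tf(b0=2.0, a1=3.0, a0=2.0, u=u, dt=dt)
print(round(y[-1], 3))  # 1.0 (DC gain 2/2)
```

The same nesting generalizes to order n: n chained integrators with the denominator coefficients fed back and the numerator coefficients feeding the output.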
Jadi, Monika P; Behabadi, Bardia F; Poleg-Polsky, Alon; Schiller, Jackie; Mel, Bartlett W
2014-05-01
In pursuit of the goal to understand and eventually reproduce the diverse functions of the brain, a key challenge lies in reverse engineering the peculiar biology-based "technology" that underlies the brain's remarkable ability to process and store information. The basic building block of the nervous system is the nerve cell, or "neuron," yet after more than 100 years of neurophysiological study and 60 years of modeling, the information processing functions of individual neurons, and the parameters that allow them to engage in so many different types of computation (sensory, motor, mnemonic, executive, etc.) remain poorly understood. In this paper, we review both historical and recent findings that have led to our current understanding of the analog spatial processing capabilities of dendrites, the major input structures of neurons, with a focus on the principal cell type of the neocortex and hippocampus, the pyramidal neuron (PN). We encapsulate our current understanding of PN dendritic integration in an abstract layered model whose spatially sensitive branch-subunits compute multidimensional sigmoidal functions. Unlike the 1-D sigmoids found in conventional neural network models, multidimensional sigmoids allow the cell to implement a rich spectrum of nonlinear modulation effects directly within their dendritic trees.
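The layered model's branch subunits can be caricatured in a few lines. In this hedged sketch (the parameters are invented for illustration, not fitted to data), each subunit applies a two-dimensional sigmoid whose cross-term makes the response to one input depend on the other, which a 1-D sigmoid of a weighted sum cannot do:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pn_output(branch_inputs, weights, interaction=2.0, bias=-3.0):
    """Toy layered pyramidal-neuron model: each branch subunit applies a
    2-D sigmoid to its pair of inputs, with a multiplicative x1*x2 term
    standing in for location-dependent modulation, and the soma sums
    the subunit outputs."""
    outs = []
    for (x1, x2), (w1, w2) in zip(branch_inputs, weights):
        # the x1*x2 cross-term means the gain on x1 depends on x2
        outs.append(sigmoid(w1 * x1 + w2 * x2 + interaction * x1 * x2 + bias))
    return sum(outs)

branches = [(1.0, 0.0), (1.0, 1.0)]   # same driver input, modulator off vs on
weights = [(2.0, 1.0), (2.0, 1.0)]
print(round(pn_output(branches, weights), 4))  # 1.1497
```

With the modulator off the first subunit sits low on its sigmoid (sigmoid(-1)); turning the modulator on pushes the second subunit high (sigmoid(2)), a modulation effect no single 1-D sigmoid of the summed inputs reproduces.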
Computational predictions of zinc oxide hollow structures
NASA Astrophysics Data System (ADS)
Tuoc, Vu Ngoc; Huan, Tran Doan; Thao, Nguyen Thi
2018-03-01
Nanoporous materials are emerging as potential candidates for a wide range of technological applications in the environment, electronics, and optoelectronics, to name just a few. Within this active research area, experimental works predominate, while theoretical/computational prediction and study of these materials face some intrinsic challenges, one of which is how to predict porous structures. We propose a computationally and technically feasible approach for predicting zinc oxide structures with hollows at the nanoscale. The designed zinc oxide hollow structures are studied with computations using the density functional tight binding and conventional density functional theory methods, revealing a variety of promising mechanical and electronic properties, which can potentially find future realistic applications.
1984-05-31
INTERPRETATION OF DIELECTRIC CURE DATA IN ADHESIVES D. I. Day*, T. I. Lewis**, H. L. Leoe, and S. D. Seturia, Department of Electrical Engineering and Computer...are geometry-independent. However, one result of the present paper is to show that the conventional practice of placing a thin release film, which...noted that the smallest of these spacings (achieved by using Kapton® film spacers) is much less than that typically used with parallel plates. No
Lalande, David; Hodd, Jeffrey A; Brousseau, John S; Ramos, Van; Dunham, Daniel; Rueggeberg, Frederick
2017-10-14
Because crowns with open margins are a well-known problem and can lead to complications, it is important to assess the accuracy of margins resulting from the use of a new technique. Currently, data regarding the marginal fit of computer-aided design and computer-aided manufacturing (CAD-CAM) technology used to fabricate a complete gold crown (CGC) from a castable acrylate resin polymer block are lacking. The purpose of this in vitro study was to compare marginal discrepancy widths of CGCs fabricated by using either conventional hand waxing or acrylate resin polymer blocks generated by using CAD-CAM technology. A plastic model of a first mandibular molar was prepared by using a 1-mm rounded chamfer margin on the entire circumference of the tooth. The master die was duplicated 30 times; 15 wax patterns were fabricated by using a manual waxing technique, and 15 were fabricated by using CAD-CAM technology. All patterns were invested and cast, and the resulting CGCs were cemented on their respective dies by using resin-modified glass ionomer cement. The specimens were then embedded in acrylic resin and sectioned buccolingually. The buccal and lingual marginal discrepancies of each sectioned portion were measured by using microscopy at ×50 magnification. Data were subjected to repeated-measures 2-way ANOVA, using the Tukey post hoc pairwise comparison test (α=.05). The factor of "technique" had no significant influence on the marginal discrepancy measurement (P=.431), but a significant effect of "margin location" (P=.019) was noted. The interaction of the two factors showed that marginal discrepancies were significantly lower on the lingual side than on the buccal side for crowns made by using CAD-CAM technology.
The marginal discrepancy of CAD-CAM acrylate resin crowns was not significantly different from that of crowns made with a conventional manual method; however, lingual margin discrepancies of CAD-CAM-prepared crowns were significantly smaller than those measured on the respective buccal surface. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
PETSc Users Manual Revision 3.7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, Satish; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
PETSc Users Manual Revision 3.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
Making Ordered DNA and Protein Structures from Computer-Printed Transparency Film Cut-Outs
ERIC Educational Resources Information Center
Jittivadhna, Karnyupha; Ruenwongsa, Pintip; Panijpan, Bhinyo
2009-01-01
Instructions are given for building physical scale models of ordered structures of B-form DNA, the protein [alpha]-helix, and parallel and antiparallel protein [beta]-pleated sheets, made from colored computer printouts designed for transparency film sheets. Cut-outs from these sheets are easily assembled. Conventional color coding for atoms is used…
Stockert, J C; Del Castillo, P
1990-01-01
On account of the rigidity and compact structure of hyaline cartilage, unfixed or formaldehyde-fixed samples of this tissue can be sectioned directly by using a conventional ultramicrotome and a glass knife. This simple method makes it possible to obtain microscopical sections from unembedded cartilage blocks, which show a well-preserved histological structure and are very suitable for morphological and histochemical studies of chondrocytes and the cartilaginous matrix.
Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Neil P.; Sheffler, William; Sawaya, Michael R.
2015-09-17
We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
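As a rough illustration of the block Rayleigh-Ritz idea (a minimal sketch only, not the authors' preconditioned algorithm, which performs far fewer Rayleigh-Ritz steps and targets much larger problems), the following Python/NumPy code computes the k smallest eigenpairs of a symmetric matrix by block subspace iteration:

```python
import numpy as np

def block_rayleigh_ritz(A, k, iters=200, seed=0):
    """Subspace iteration with a final Rayleigh-Ritz projection for the k
    algebraically smallest eigenvalues of a symmetric matrix A.
    Illustrative only: no preconditioning, dense linear algebra."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    # Shift so the smallest eigenvalues of A become the dominant ones of
    # (sigma*I - A); the matrix 1-norm bounds the spectrum from above.
    sigma = np.linalg.norm(A, 1)
    X = rng.standard_normal((n, k))
    for _ in range(iters):
        # Block power step; the tall-skinny products are BLAS3 operations.
        X, _ = np.linalg.qr(sigma * X - A @ X)
    # Rayleigh-Ritz: project A onto span(X), solve the small k-by-k problem.
    T = X.T @ A @ X
    w, V = np.linalg.eigh(T)
    return w, X @ V
```

The block structure is what enables BLAS3-rich implementations and multiple levels of concurrency, as the abstract notes; a production solver would add a preconditioner and a convergence test rather than a fixed iteration count.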
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
1989-02-01
which capture the knowledge of such experts. These Expert Systems, or Knowledge-Based Systems, differ from the usual computer programming techniques...their applications in the fields of structural design and welding is reviewed. 5.1 Introduction Expert Systems, or KBES, are computer programs using AI...procedurally constructed as conventional computer programs usually are; the knowledge base of such systems is executable, unlike databases
Non-Contact Smartphone-Based Monitoring of Thermally Stressed Structures
Ozturk, Turgut; Mas, David; Rizzo, Piervincenzo
2018-01-01
The in-situ measurement of thermal stress in beams or continuous welded rails may prevent structural anomalies such as buckling. This study proposed a non-contact monitoring/inspection approach based on the use of a smartphone and a computer vision algorithm to estimate the vibrating characteristics of beams subjected to thermal stress. It is hypothesized that the vibration of a beam can be captured using a smartphone operating at frame rates higher than conventional 30 Hz, and the first few natural frequencies of the beam can be extracted using a computer vision algorithm. In this study, the first mode of vibration was considered and compared to the information obtained with a conventional accelerometer attached to the two structures investigated, namely a thin beam and a thick beam. The results show excellent agreement between the conventional contact method and the non-contact sensing approach proposed here. In the future, these findings may be used to develop a monitoring/inspection smartphone application to assess the axial stress of slender structures, to predict the neutral temperature of continuous welded rails, or to prevent thermal buckling. PMID:29670034
Non-Contact Smartphone-Based Monitoring of Thermally Stressed Structures.
Sefa Orak, Mehmet; Nasrollahi, Amir; Ozturk, Turgut; Mas, David; Ferrer, Belen; Rizzo, Piervincenzo
2018-04-18
The in-situ measurement of thermal stress in beams or continuous welded rails may prevent structural anomalies such as buckling. This study proposed a non-contact monitoring/inspection approach based on the use of a smartphone and a computer vision algorithm to estimate the vibrating characteristics of beams subjected to thermal stress. It is hypothesized that the vibration of a beam can be captured using a smartphone operating at frame rates higher than conventional 30 Hz, and the first few natural frequencies of the beam can be extracted using a computer vision algorithm. In this study, the first mode of vibration was considered and compared to the information obtained with a conventional accelerometer attached to the two structures investigated, namely a thin beam and a thick beam. The results show excellent agreement between the conventional contact method and the non-contact sensing approach proposed here. In the future, these findings may be used to develop a monitoring/inspection smartphone application to assess the axial stress of slender structures, to predict the neutral temperature of continuous welded rails, or to prevent thermal buckling.
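The frequency-extraction step described above can be sketched very simply: once the beam's motion has been sampled from video frames, the first natural frequency is the dominant peak of the spectrum. A minimal Python/NumPy illustration on a synthetic signal (the 12 Hz mode and 120 fps frame rate are hypothetical values, not data from the study):

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Estimate the dominant vibration frequency of a time series sampled
    at `fps` frames per second, from the largest FFT magnitude peak."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))  # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spec)]

# Hypothetical example: a 12 Hz first bending mode filmed at 120 fps,
# i.e. well above the conventional 30 Hz frame rate.
fps, f0 = 120.0, 12.0
t = np.arange(0.0, 4.0, 1.0 / fps)
displacement = np.sin(2 * np.pi * f0 * t) * np.exp(-0.3 * t)  # decaying free vibration
```

In the actual approach the displacement series would come from a computer vision algorithm tracking the beam in the smartphone video rather than from a synthetic sine.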
Fuel-Mediated Transient Clustering of Colloidal Building Blocks.
van Ravensteijn, Bas G P; Hendriksen, Wouter E; Eelkema, Rienk; van Esch, Jan H; Kegel, Willem K
2017-07-26
Fuel-driven assembly operates under the continuous influx of energy and results in superstructures that exist out of equilibrium. Such dissipative processes provide a route toward structures and transient behavior unreachable by conventional equilibrium self-assembly. Although perfected in biological systems like microtubules, this class of assembly is only sparsely used in synthetic or colloidal analogues. Here, we present a novel colloidal system that shows transient clustering driven by a chemical fuel. Addition of fuel causes an increase in hydrophobicity of the building blocks by actively removing surface charges, thereby driving their aggregation. Depletion of fuel causes reappearance of the charged moieties and leads to disassembly of the formed clusters. This reassures that the system returns to its initial, equilibrium state. By taking advantage of the cyclic nature of our system, we show that clustering can be induced several times by simple injection of new fuel. The fuel-mediated assembly of colloidal building blocks presented here opens new avenues to the complex landscape of nonequilibrium colloidal structures, guided by biological design principles.
Manipulating the ABCs of self-assembly via low-χ block polymer design
Chang, Alice B.; Lee, Byeongdu; Garland, Carol M.; Jones, Simon C.; Matsen, Mark W.
2017-01-01
Block polymer self-assembly typically translates molecular chain connectivity into mesoscale structure by exploiting incompatible blocks with large interaction parameters (χij). In this article, we demonstrate that the converse approach, encoding low-χ interactions in ABC bottlebrush triblock terpolymers (χAC ≲ 0), promotes organization into a unique mixed-domain lamellar morphology, which we designate LAMP. Transmission electron microscopy indicates that LAMP exhibits ACBC domain connectivity, in contrast to conventional three-domain lamellae (LAM3) with ABCB periods. Complementary small-angle X-ray scattering experiments reveal a strongly decreasing domain spacing with increasing total molar mass. Self-consistent field theory reinforces these observations and predicts that LAMP is thermodynamically stable below a critical χAC, above which LAM3 emerges. Both experiments and theory expose close analogies to ABA′ triblock copolymer phase behavior, collectively suggesting that low-χ interactions between chemically similar or distinct blocks intimately influence self-assembly. These conclusions provide fresh opportunities for block polymer design with potential consequences spanning all self-assembling soft materials. PMID:28588139
Fundamental Flux Equations for Fracture-Matrix Interactions with Linear Diffusion
NASA Astrophysics Data System (ADS)
Oldenburg, C. M.; Zhou, Q.; Rutqvist, J.; Birkholzer, J. T.
2017-12-01
The conventional dual-continuum models are only applicable for late-time behavior of pressure propagation in fractured rock, while discrete-fracture-network models may explicitly deal with matrix blocks at high computational expense. To address these issues, we developed a unified-form diffusive flux equation for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular matrix blocks (squares, cubes, rectangles, and rectangular parallelepipeds) by partitioning the entire dimensionless-time domain (Zhou et al., 2017a, b). For each matrix block, this flux equation consists of the early-time solution up until a switch-over time after which the late-time solution is applied to create continuity from early to late time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the coefficients dependent on dimensionless area-to-volume ratio and aspect ratios for rectangular blocks. For the late-time solutions, one exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic blocks. The time-partitioning method was also used for calculating pressure/concentration/temperature distribution within a matrix block. The approximate solution contains an error-function solution for early times and an exponential solution for late times, with relative errors less than 0.003. These solutions form the kernel of multirate and multidimensional hydraulic, solute and thermal diffusion in fractured reservoirs.
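Read schematically, the time-partitioned flux equation has the following piecewise form (the symbols below are illustrative placeholders consistent with the description above, not the exact expressions of Zhou et al., 2017a, b):

```latex
F_D(t_D) \;\approx\;
\begin{cases}
c_1\, t_D^{1/2} + c_2\, t_D + c_3\, t_D^{3/2}, & t_D \le t_D^{s} \quad \text{(early time)}\\[4pt]
\sum_{j} b_j\, e^{-\lambda_j t_D}, & t_D > t_D^{s} \quad \text{(late time)}
\end{cases}
```

Here t_D^{s} is the switch-over time; the early-time coefficients c_i depend on the dimensionless area-to-volume ratio and, for rectangular blocks, on the aspect ratios, while a single exponential term suffices for isotropic blocks and a few additional terms are needed for highly anisotropic ones.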
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
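To make the "regular trellis" concrete, here is a toy hard-decision Viterbi decoder for the rate-1/2, memory-2 convolutional code with octal generators (7, 5) — a standard textbook example on the conventional trellis, not the minimal-trellis construction developed in the article:

```python
# Rate-1/2, memory-2 convolutional code with generators (7, 5) octal.
# The conventional trellis has 4 states with 2 outgoing edges each:
# 8 edges per section producing 2 encoded bits, i.e. 4 edges per encoded bit.
G = [0b111, 0b101]  # generator polynomials

def encode(bits):
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state                 # [input, u(t-1), u(t-2)]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Minimum-Hamming-distance path search over the regular trellis."""
    n_states, INF = 4, float("inf")
    metric = [0] + [INF] * (n_states - 1)      # start in the zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                   # two edges leave each state
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(out, r))
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
```

Since this code has free distance 5, the decoder recovers the message despite a single channel error; counting the surviving edges per encoded bit is exactly the complexity measure discussed above.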
Software Design Strategies for Multidisciplinary Computational Fluid Dynamics
2012-07-01
on the left-hand-side of Figure 3. The resulting unstructured grid system does a good job of representing the flowfield locally around the solid... Laboratory [16–19]. It uses Cartesian block structured grids, which lead to a substantially more efficient computational execution compared to the...including blade sectional lift and pitching moment. These Helios-computed airloads show good agreement with the experimental data. Many of the
Spectroscopic investigation of some building blocks of organic conductors: A comparative study
NASA Astrophysics Data System (ADS)
Mukherjee, V.; Yadav, T.
2017-04-01
Theoretical molecular structures and IR and Raman spectra of di- and tetramethyl-substituted tetrathiafulvalene and tetraselenafulvalene molecules have been studied. These molecules belong to the organic conductor family and are widely used as building blocks of several organic conducting devices. The Hartree-Fock method and density functional theory with the exchange functional B3LYP have been employed for the computations. We have also performed normal coordinate analysis to scale the theoretical frequencies and to calculate potential energy distributions for unambiguous assignments. Exciting-frequency- and temperature-dependent Raman spectra are also presented. Optimization results reveal that the sulphur derivatives possess a boat shape while the selenium derivatives possess planar structures. Natural bond orbital analysis has also been performed to study second-order interactions between donors and acceptors and to compute molecular orbital occupancies and energies.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
Effects of Chitin and Contact Insecticide Complexes on Rove Beetles in Commercial Orchards
Balog, A.; Ferencz, L.; Hartel, T.
2011-01-01
A five-year research project was performed to explore the potential effects of contact insecticide applications on the change of abundance and species richness of predatory rove beetles (Coleoptera: Staphylinidae) in conventionally managed orchards. Twelve blocks of nine orchards were used for this study in Central Europe. High sensitivity atomic force microscopic examination was carried out for chitin structure analyses as well as computer simulation for steric energy calculation between insecticides and chitin. The species richness of rove beetles in orchards was relatively high after insecticide application. Comparing the mean abundance before and after insecticide application, a higher value was observed before spraying with alphacypermethrin and lambda-cyhalothrin, and a lower value was observed in the cases of diflubenzuron, malathion, lufenuron, and phosalone. The species richness was higher only before chlorpyrifos-methyl application. There was a negative correlation between abundance and stability value of chitin-insecticides, persistence time, and soil absorption coefficients. Positive correlation was observed with lipo- and water solubility. PMID:21870981
ERIC Educational Resources Information Center
Horikoshi, Ryo; Kobayashi, Yoji; Kageyama, Hiroshi
2013-01-01
Catalysis with transition-metal complexes is a part of the inorganic chemistry curriculum and a challenging topic for upper-level undergraduate and graduate students. A hands-on teaching aid has been developed for use during conventional lectures to help students understand these catalytic reactions. A unique method of illustrating the…
The Application of Sheet Technology in Cartilage Tissue Engineering.
Ge, Yang; Gong, Yi Yi; Xu, Zhiwei; Lu, Yanan; Fu, Wei
2016-04-01
Cartilage tissue engineering has begun to act as a promising, even essential, alternative in cartilage repair and regeneration, considering that the avascular structure of adult cartilage gives the tissue very limited self-renewal capacity and that conventional surgical treatment methods face a bottleneck. Recent progress in tissue engineering has enabled more feasible strategies to treat cartilage disorders. Of these strategies, cell sheet technology has shown great clinical potential in regenerative areas such as the cornea and esophagus, and it is increasingly considered a potential way to reconstruct cartilage tissues because it uses no scaffolds and does not destroy the matrix secreted by cultured cells. Acellular matrix sheet technologies used in cartilage tissue engineering, with a sandwich model, can ingeniously overcome the drawback of a conventional acellular block, in which cells are often blocked from migrating because of its non-nanoporous structure. Electrospun sheets with nanostructures that mimic the natural cartilage matrix offer a level of control and manipulation that makes them appealing and widely used in cartilage tissue engineering. In this review, we focus on the use of these novel and promising sheet technologies to construct cartilage tissues with practical and beneficial functions.
Random and Block Sulfonated Polyaramides as Advanced Proton Exchange Membranes
Kinsinger, Corey L.; Liu, Yuan; Liu, Feilong; ...
2015-10-09
We present here the experimental and computational characterization of two novel copolyaramide proton exchange membranes (PEMs) with higher conductivity than Nafion at relatively high temperatures, good mechanical properties, high thermal stability, and the capability to operate in low-humidity conditions. The random and block copolyaramide PEMs are found to possess different ion exchange capacities (IEC) in addition to subtle structural and morphological differences, which impact the stability and conductivity of the membranes. SAXS patterns indicate that the ionomer peak for the dry block copolymer resides at q = 0.1 Å–1; it increases in amplitude when initially hydrated to 25% relative humidity, but then decreases in amplitude with additional hydration. This pattern is hypothesized to signal the transport of water into the polymer matrix, resulting in a reduced degree of phase separation. Coupled to these morphological changes, the enhanced proton transport characteristics and structural/mechanical stability of the block copolymer are hypothesized to be primarily due to the ordered structure of ionic clusters, which creates connected proton transport pathways while reducing swelling upon hydration. Interestingly, the random copolymer did not possess an ionomer peak at any of the hydration levels investigated, indicating a lack of any significant ionomer structure. The random copolymer also demonstrated higher proton conductivity than the block copolymer, which is opposite to the trend normally seen in polymer membranes. However, it has reduced structural/mechanical stability as compared to the block copolymer. This reduction in stability is due to the random morphology formed by entanglements of polymer chains and the adverse swelling characteristics upon hydration.
Therefore, the block copolymer, with its enhanced proton conductivity characteristics as compared to Nafion and favorable structural/mechanical stability as compared to the random copolymer, represents a viable alternative to current proton exchange membranes.
Directed self-assembly of block copolymer films on atomically-thin graphene chemical patterns
Chang, Tzu-Hsuan; Xiong, Shisheng; Jacobberger, Robert M.; ...
2016-08-16
Directed self-assembly of block copolymers is a scalable method to fabricate well-ordered patterns over the wafer scale with feature sizes below the resolution of conventional lithography. Typically, lithographically defined prepatterns with varying chemical contrast are used to rationally guide the assembly of block copolymers. Achieving accurate registration and alignment in directed self-assembly is largely influenced by the assembly kinetics. Furthermore, a considerably broad processing window is favored for industrial manufacturing. Using an atomically thin layer of graphene on germanium, after two simple processing steps, we create a novel chemical pattern to direct the assembly of polystyrene-block-poly(methyl methacrylate). Faster assembly kinetics are observed on graphene/germanium chemical patterns than on conventional chemical patterns based on polymer mats and brushes. This new chemical pattern allows for assembly on a wide range of guiding periods and along designed 90° bending structures. We also achieve density multiplication by a factor of 10, greatly enhancing the pattern resolution. Lastly, the rapid assembly kinetics, minimal topography, and broad processing window demonstrate the advantages of inorganic chemical patterns composed of hard surfaces.
Ground Software Maintenance Facility (GSMF) system manual
NASA Technical Reports Server (NTRS)
Derrig, D.; Griffith, G.
1986-01-01
The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modes of software structure, including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, the Perkin Elmer host, and the MITRA ATE.
Spectral partitioning in equitable graphs.
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
Spectral partitioning in equitable graphs
NASA Astrophysics Data System (ADS)
Barucca, Paolo
2017-06-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
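For intuition, spectral bisection on a toy equitable (block-regular) graph can be sketched in a few lines of Python/NumPy. The graph below (two communities of four nodes, complete within each community, a perfect matching across them) is an illustrative construction, not an example from the paper; the eigenvector of the second-largest adjacency eigenvalue recovers the two communities by sign:

```python
import numpy as np

# Toy equitable graph: every node has 3 within-community neighbours and
# exactly 1 cross-community neighbour, so the block structure is regular.
n = 4
within = np.ones((n, n)) - np.eye(n)
across = np.eye(n)
A = np.block([[within, across], [across, within]])

# Spectral partition: eigenvalues of eigh are ascending, so the second
# column from the right belongs to the second-largest eigenvalue.
w, V = np.linalg.eigh(A)
fiedler = V[:, -2]
labels = (fiedler > 0).astype(int)   # sign pattern splits the two blocks
```

For this graph the adjacency spectrum is {4, 2, 0, -2} with the split vector (+1 on one block, -1 on the other) as the eigenvector at eigenvalue 2, so the sign-based recovery is exact; the paper's analysis characterizes when such recovery is possible in general equitable ensembles.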
Using block pulse functions for seismic vibration semi-active control of structures with MR dampers
NASA Astrophysics Data System (ADS)
Rahimi Gendeshmin, Saeed; Davarnia, Daniel
2018-03-01
This article applies the idea of block pulse (BP) functions to the semi-active control of structures. BP functions provide effective tools for approximating complex problems. The control algorithm applied has a major effect on the performance of the controlled system and on the requirements of the control devices. In control problems, it is important to devise an accurate analytical technique with low computational cost. BP functions have proved to be fundamental tools in approximation problems and have been applied in disparate areas over the last decades. This study focuses on employing BP functions in the control algorithm so as to reduce computational cost. Magneto-rheological (MR) dampers are one of the well-known semi-active tools that can be used to control the response of civil structures during earthquakes. For validation purposes, numerical simulations of a 5-story shear building frame with MR dampers are presented. The results of the suggested method were compared with results obtained by controlling the frame with an optimal control method based on linear quadratic regulator theory. The simulation results show that the suggested method can be helpful in reducing seismic structural responses. Moreover, the method has acceptable accuracy and agrees with the optimal control method at lower computational cost.
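The block pulse expansion underlying such methods is simple to sketch: a signal on [0, 1] is approximated by piecewise-constant pulses whose coefficients are interval means. The Python/NumPy code below is an illustrative sketch of this approximation only, not of the article's control algorithm:

```python
import numpy as np

def block_pulse_coeffs(f, m):
    """Coefficients of the m-term block pulse expansion of f on [0, 1]:
    c_i is the mean of f over the i-th subinterval [i/m, (i+1)/m).
    The mean is approximated on a fine midpoint grid."""
    fine = 1000 * m
    t = np.linspace(0.0, 1.0, fine, endpoint=False) + 0.5 / fine
    return f(t).reshape(m, -1).mean(axis=1)

def block_pulse_eval(coeffs, t):
    """Evaluate the block pulse expansion at points t in [0, 1]."""
    m = len(coeffs)
    idx = np.minimum((t * m).astype(int), m - 1)
    return coeffs[idx]

# Refining the partition shrinks the approximation error.
f = np.sin   # any test signal on [0, 1]
t = np.linspace(0.0, 1.0, 2001)
err8  = np.max(np.abs(f(t) - block_pulse_eval(block_pulse_coeffs(f, 8),  t)))
err64 = np.max(np.abs(f(t) - block_pulse_eval(block_pulse_coeffs(f, 64), t)))
```

Because integration of a block pulse series reduces to sparse matrix operations on the coefficients, expansions like this are what make BP-based control algorithms computationally cheap.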
NASA Astrophysics Data System (ADS)
Elbaz, Reouven; Torres, Lionel; Sassatelli, Gilles; Guillemin, Pierre; Bardouillet, Michel; Martinez, Albert
The bus between the System on Chip (SoC) and the external memory is one of the weakest points of computer systems: an adversary can easily probe this bus in order to read private data (data confidentiality concern) or to inject data (data integrity concern). The conventional way to protect data against such attacks and to ensure data confidentiality and integrity is to implement two dedicated engines: one performing data encryption and another data authentication. This approach, while secure, prevents parallelizability of the underlying computations. In this paper, we introduce the concept of Block-Level Added Redundancy Explicit Authentication (BL-AREA) and we describe a Parallelized Encryption and Integrity Checking Engine (PE-ICE) based on this concept. BL-AREA and PE-ICE have been designed to provide an effective solution to ensure both security services while allowing for full parallelization on processor read and write operations and optimizing the hardware resources. Compared to standard encryption which ensures only confidentiality, we show that PE-ICE additionally guarantees code and data integrity for less than 4% of run-time performance overhead.
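The added-redundancy idea can be illustrated in software (a toy sketch only, not the PE-ICE hardware engine: the 16-byte "block cipher" below is a 4-round Feistel network built on SHA-256, purely for demonstration). A known tag is appended to each plaintext block before encryption and checked after decryption; because a block cipher diffuses every ciphertext bit across the whole block, tampering garbles the tag with overwhelming probability:

```python
import hashlib

TAG = b"\xa5" * 4   # the added redundancy checked on every read

def _round(key, i, half):
    # Round function of the demonstration Feistel cipher.
    return hashlib.sha256(key + bytes([i]) + half).digest()[:8]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(key, block):
    L, R = block[:8], block[8:]
    for i in range(4):
        L, R = R, _xor(L, _round(key, i, R))
    return L + R

def decrypt_block(key, block):
    L, R = block[:8], block[8:]
    for i in reversed(range(4)):
        L, R = _xor(R, _round(key, i, L)), L
    return L + R

def seal(key, payload12):
    """Append the redundancy tag to a 12-byte payload, then encrypt."""
    return encrypt_block(key, payload12 + TAG)

def open_checked(key, ciphertext):
    """Decrypt and verify the tag; raise on any detected tampering."""
    block = decrypt_block(key, ciphertext)
    if block[12:] != TAG:
        raise ValueError("block authentication failed")
    return block[:12]
```

One decryption thus yields both the plaintext and the integrity check, which is the property that lets confidentiality and integrity be computed in parallel rather than by two sequential engines.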
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps of the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.
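The blocking-distortion component of such metrics exploits a simple fact: block-based codecs create luminance jumps aligned with the coding grid. The Python/NumPy sketch below is a crude stand-in for that idea (not the paper's saliency-weighted metric), comparing jumps at 8-pixel block boundaries with jumps elsewhere:

```python
import numpy as np

def blockiness(img, b=8):
    """Crude no-reference blocking measure: mean absolute luminance jump
    across b-pixel column boundaries divided by the mean jump elsewhere.
    Values well above 1 suggest visible blocking artifacts."""
    d = np.abs(np.diff(img.astype(float), axis=1))
    at_boundary = np.zeros(d.shape[1], dtype=bool)
    at_boundary[b - 1::b] = True
    return d[:, at_boundary].mean() / (d[:, ~at_boundary].mean() + 1e-12)

# Synthetic check: an image of constant 8x8 blocks versus a smooth ramp.
rng = np.random.default_rng(1)
blocky = np.kron(rng.uniform(size=(8, 8)), np.ones((8, 8)))
smooth = np.tile(np.arange(64.0), (64, 1))
score_blocky, score_smooth = blockiness(blocky), blockiness(smooth)
```

In the proposed metric, scores of this kind would additionally be weighted by the ROI saliency maps so that only perceptually relevant regions contribute.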
1992-02-01
develops and maintains computer programs for the Department of the Navy. It provides life cycle support for over 50 computer programs installed at over...the computer programs. Table 4 presents a list of possible product or output measures of functionality for ACDS Block 0 programs. Examples of output...were identified as important "causes" of process performance. Functionality of the computer programs was the result or "effect" of the combination of
An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From
NASA Astrophysics Data System (ADS)
Wang, X.; Zhan, Z. Q.; Heipke, C.
2017-05-01
Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by and improves upon methods employing random k-d forests for this task. Specifically, we first derive features from the images and then a random k-d forest is used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps and thus images belonging to a common block are calculated as input to a structure-from-motion (sfm) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing the time efficiency. Orientations and 3D reconstructions were successfully conducted with our overlap graphs by sfm. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPU, respectively.
Robust integer and fractional helical modes in the quantum Hall effect
NASA Astrophysics Data System (ADS)
Ronen, Yuval; Cohen, Yonatan; Banitt, Daniel; Heiblum, Moty; Umansky, Vladimir
2018-04-01
Electronic systems harboring one-dimensional helical modes, where spin and momentum are locked, have lately become an important field of their own. When coupled to a conventional superconductor, such systems are expected to manifest topological superconductivity, a unique phase hosting exotic Majorana zero modes. Even more interesting are fractional helical modes, yet to be observed, which open the route for realizing generalized parafermions. Possessing non-Abelian exchange statistics, these quasiparticles may serve as building blocks in topological quantum computing. Here, we present a new approach to forming protected one-dimensional helical edge modes in the quantum Hall regime. The novel platform is based on a carefully designed double-quantum-well structure in a GaAs-based system hosting two electronic sub-bands, each tuned to the quantum Hall effect regime. By electrostatic gating of different areas of the structure, counter-propagating integer, as well as fractional, edge modes with opposite spins are formed. We demonstrate that, due to spin protection, these helical modes remain ballistic over large distances. In addition to the formation of helical modes, this platform can serve as a rich playground for artificial induction of compounded fractional edge modes, and for the construction of edge-mode-based interferometers.
The Power of the Test for Treatment Effects in Three-Level Block Randomized Designs
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2008-01-01
Experiments that involve nested structures may assign treatment conditions either to subgroups (such as classrooms) or individuals within subgroups (such as students). The design of such experiments requires knowledge of the intraclass correlation structure to compute the sample sizes necessary to achieve adequate power to detect the treatment…
Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph
2008-09-25
Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to leverage this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods to perform predictions of proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites.
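As the abstract notes, HOSE codes achieve good proton-shift predictions with a remarkably simple scheme: average the shifts of training atoms that share the same environment code. A minimal sketch (the code strings here are hypothetical placeholders; real HOSE codes encode spheres of neighbouring atoms around the proton's carbon):

```python
from collections import defaultdict
from statistics import mean

def train_hose_table(codes, shifts):
    """Map each atom-environment code to the mean shift seen in training."""
    table = defaultdict(list)
    for c, s in zip(codes, shifts):
        table[c].append(s)
    return {c: mean(v) for c, v in table.items()}

def predict_shift(table, code, fallback=None):
    """Look up a code; fall back (e.g. to a global mean, or to a prediction
    from a truncated code sphere) for unseen environments."""
    return table.get(code, fallback)
```

In practice the fallback path matters: when the full-sphere code is unseen, HOSE-based predictors retry with progressively shorter codes before resorting to a global average.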
Simpler grammar, larger vocabulary: How population size affects language
2018-01-01
Languages with many speakers tend to be structurally simple while small communities sometimes develop languages with great structural complexity. Paradoxically, the opposite pattern appears to be observed for non-structural properties of language such as vocabulary size. These apparently opposite patterns pose a challenge for theories of language change and evolution. We use computational simulations to show that this inverse pattern can depend on a single factor: ease of diffusion through the population. A population of interacting agents was arranged on a network, passing linguistic conventions to one another along network links. Agents can invent new conventions, or replicate conventions that they have previously generated themselves or learned from other agents. Linguistic conventions are either Easy or Hard to diffuse, depending on how many times an agent needs to encounter a convention to learn it. In large groups, only linguistic conventions that are easy to learn, such as words, tend to proliferate, whereas small groups where everyone talks to everyone else allow for more complex conventions, like grammatical regularities, to be maintained. Our simulations thus suggest that language, and possibly other aspects of culture, may become simpler at the structural level as our world becomes increasingly interconnected. PMID:29367397
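The Easy/Hard diffusion mechanism described above can be sketched with a minimal agent-based simulation (all names, the single-inventor initialization, and the exposure threshold are our simplifications of the paper's model):

```python
import random

def simulate(n_agents, neighbours, steps, hard_threshold=3, seed=0):
    """Agents pass one Easy and one Hard convention along network links.
    Easy is learned after a single exposure; Hard needs `hard_threshold`
    exposures before an agent can reproduce it."""
    rng = random.Random(seed)
    knows_easy = {0}            # agent 0 invents both conventions
    knows_hard = {0}
    exposures = [0] * n_agents  # per-agent Hard-convention exposure counts
    for _ in range(steps):
        speaker = rng.randrange(n_agents)
        listener = rng.choice(neighbours[speaker])
        if speaker in knows_easy:
            knows_easy.add(listener)
        if speaker in knows_hard:
            exposures[listener] += 1
            if exposures[listener] >= hard_threshold:
                knows_hard.add(listener)
    return len(knows_easy), len(knows_hard)
```

Running this on a complete graph versus a sparse ring illustrates the paper's point: repeated exposures accumulate quickly in densely connected small groups, so Hard conventions survive there while only Easy ones keep pace in large, thinly connected populations.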
Distribution majorization of corner points by reinforcement learning for moving object detection
NASA Astrophysics Data System (ADS)
Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang
2018-04-01
Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by the preceding and following frames can also be used. We utilize this information to focus on the more valuable areas and ignore the less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be detected is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks which are separated from the original whole image. Experimentally, we select a conventional method which uses matching and the Random Sample Consensus algorithm to obtain objects as the main framework and utilize our algorithm to improve the result. The comparison between the conventional method and the same method with our algorithm shows that our algorithm reduces false detections by 70%.
Turhan, K S Cakar; Akmese, R; Ozkan, F; Okten, F F
2015-04-01
In the current prospective, randomized study, we aimed to compare the effects of low-dose selective spinal anesthesia with 5 mg of hyperbaric bupivacaine combined with a single-shot femoral nerve block versus conventional-dose selective spinal anesthesia in terms of intraoperative anesthesia characteristics, block recovery characteristics, and postoperative analgesic consumption. After obtaining institutional Ethics Committee approval, 52 ASA I-II patients aged 25-65, undergoing arthroscopic meniscus repair, were randomly assigned to Group S (conventional-dose selective spinal anesthesia with 10 mg bupivacaine) and Group FS (low-dose selective spinal anesthesia with 5 mg bupivacaine + single-shot femoral block with 0.25% bupivacaine). Primary endpoints were time to reach T12 sensory block level, L2 regression, and complete motor block regression. Secondary endpoints were maximum sensory block level (MSBL), time to reach MSBL, time to first urination, time to first analgesic consumption, and pain severity at the time of first mobilization. Demographic characteristics were similar in both groups (p > 0.05). MSBL and time to reach T12 sensory level were similar in both groups (p > 0.05). Time to reach L2 regression, complete motor block regression, and time to first micturition were significantly shorter; time to first analgesic consumption was significantly longer; and total analgesic consumption and severity of pain at the time of first mobilization were significantly lower in Group FS (p < 0.05). The findings of the current study suggest that the addition of a single-shot femoral block to low-dose spinal anesthesia could be an alternative to conventional-dose spinal anesthesia in outpatient arthroscopic meniscus repair. NCT02322372.
Fast Algorithms for Structured Least Squares and Total Least Squares Problems
Kalsi, Anoop; O’Leary, Dianne P.
2006-01-01
We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices. PMID:27274922
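The algebra being accelerated can be shown with a dense sketch: form the normal matrix M^H M for a Toeplitz M and solve via its Cholesky factor. The paper's contribution is computing this factorization fast from the displacement generators, which this sketch makes no attempt at; it only mirrors the underlying least squares solve.

```python
import numpy as np

def toeplitz_matrix(c, r):
    """Dense Toeplitz matrix with first column c and first row r (r[0] == c[0])."""
    c, r = np.asarray(c), np.asarray(r)
    m, n = len(c), len(r)
    M = np.empty((m, n), dtype=np.result_type(c, r))
    for i in range(m):
        for j in range(n):
            M[i, j] = c[i - j] if i >= j else r[j - i]
    return M

def toeplitz_lstsq(c, r, b):
    """Solve min ||M x - b|| via Cholesky of the normal matrix M^H M."""
    M = toeplitz_matrix(c, r)
    G = M.conj().T @ M              # normal matrix M^H M (SPD for full rank M)
    L = np.linalg.cholesky(G)       # G = L L^H
    y = np.linalg.solve(L, M.conj().T @ b)   # solve L y = M^H b
    return np.linalg.solve(L.conj().T, y)    # solve L^H x = y
```

The same two triangular solves appear in the fast algorithm; only the cost of producing L changes, from cubic to roughly linear-times-displacement-rank.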
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
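The tree-of-blocks data structure can be sketched in miniature (a hypothetical Python analogue; PARAMESH itself is Fortran 90 and each of its blocks carries a full logically Cartesian mesh with guard cells, not just its bounds):

```python
class Block:
    """A PARAMESH-style grid block: a node of a quad-tree, here reduced to
    its spatial bounds and refinement level."""
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []

    def refine(self):
        """Split this block into 4 child blocks of half the size."""
        h = self.size / 2
        self.children = [Block(self.x0 + dx * h, self.y0 + dy * h, h,
                               self.level + 1)
                         for dy in (0, 1) for dx in (0, 1)]

    def leaves(self):
        """Leaf blocks are the ones that actually cover the domain."""
        if not self.children:
            return [self]
        return [b for c in self.children for b in c.leaves()]

def refine_where(block, needs_refinement, max_level):
    """Adaptively refine until the predicate is satisfied or max_level is hit."""
    if block.level < max_level and needs_refinement(block):
        block.refine()
        for c in block.children:
            refine_where(c, needs_refinement, max_level)
```

The predicate plays the role of the application's refinement criterion; in three dimensions the same structure becomes an oct-tree with 8 children per node.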
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
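The SK iteration referred to above, a sequentially reweighted least squares fit of a rational transfer function, can be sketched in simplified SISO form (variable names and the monic-denominator convention are our own; the paper treats the full MIMO matrix-fraction case with its sparse block structure):

```python
import numpy as np

def sk_iteration(freqs, H, nb, na, iters=10):
    """Fit H(jw) ~ b(jw)/a(jw), with a monic of degree na, by repeatedly
    solving the linear problem  b(s) - H(s) a(s) = 0  reweighted by the
    previous denominator magnitude |a(s)|."""
    s = 1j * np.asarray(freqs, dtype=float)
    w = np.ones(len(s))
    for _ in range(iters):
        # unknowns: b_0..b_nb and a_0..a_{na-1}  (a_na fixed to 1)
        cols = [s ** k for k in range(nb + 1)]
        cols += [-H * s ** k for k in range(na)]
        A = np.stack(cols, axis=1) / w[:, None]
        rhs = H * s ** na / w
        # real-valued least squares on stacked real/imaginary parts
        Ar = np.vstack([A.real, A.imag])
        rr = np.concatenate([rhs.real, rhs.imag])
        theta = np.linalg.lstsq(Ar, rr, rcond=None)[0]
        b = theta[:nb + 1][::-1]                          # highest power first
        a = np.concatenate(([1.0], theta[nb + 1:][::-1]))
        w = np.abs(np.polyval(a, s))                      # reweight by |a(s)|
    return b, a
```

As in the paper, this iteration serves well as an initializer: its fixed point is not the true 2-norm minimizer, which is why a Gauss-Newton refinement follows it.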
[Axial computer tomography of the neurocranium (author's transl)].
Stöppler, L
1977-05-27
Computer tomography (CT), a new radiographic examination technique, is highly efficient: it has high informative content with little stress for the patient. In contrast to conventional X-ray technology, CT succeeds, by directly depicting soft-tissue structures, in obtaining information which comes close to that of macroscopic neuropathology. The capacity and limitations of the method at its present stage of development are reported. Computer tomography cannot displace conventional neuroradiological methods of investigation, although it rightly serves as a screening method and helps towards their selective use. Indications, technical integration and handling of CT are prerequisites for the full benefit of this excellent new technique.
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors could be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data which consist of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time and space (memory) consuming due to the super large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
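The core idea, a PCG solver whose normal matrix is stored as a dictionary of nonzero dense blocks, can be sketched as follows (a simplification of the BSMC scheme; the block-Jacobi preconditioner is our assumption, not necessarily the paper's choice):

```python
import numpy as np

def block_matvec(blocks, x, bs):
    """y = N x for a normal matrix stored as {(i, j): dense bs-by-bs block};
    only nonzero blocks are kept, which is the point of the compression."""
    y = np.zeros_like(x)
    for (i, j), B in blocks.items():
        y[i * bs:(i + 1) * bs] += B @ x[j * bs:(j + 1) * bs]
    return y

def pcg(blocks, b, bs, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients with a block-Jacobi preconditioner."""
    inv_diag = {i: np.linalg.inv(B) for (i, j), B in blocks.items() if i == j}
    def precond(r):
        z = np.empty_like(r)
        for i, Binv in inv_diag.items():
            z[i * bs:(i + 1) * bs] = Binv @ r[i * bs:(i + 1) * bs]
        return z
    x = np.zeros_like(b)
    r = b - block_matvec(blocks, x, bs)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = block_matvec(blocks, p, bs)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Because only the matvec touches the stored blocks, memory scales with the number of nonzero blocks rather than with the square of the parameter count, which is what makes very large adjustments tractable.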
Galan-Marin, Carmen; Rivera-Gomez, Carlos; Garcia-Martinez, Antonio
2016-06-13
During the last decades, natural polymers have increasingly been used to replace traditional inorganic stabilizers in building materials. The purpose of this research is to establish a comparison between the most conventional building material solutions for load-bearing walls and a type of biomaterial. This comparison will focus on load-bearing walls as used in a widespread type of twentieth century dwelling construction in Europe and still used in developing countries today. To carry out this analysis, the structural and thermal insulation characteristics of different construction solutions are balanced. The tool used for this evaluation is the life cycle assessment throughout the whole lifespan of these buildings. This research aims to examine the environmental performance of each material assessed: fired clay brick masonry walls (BW), concrete block masonry walls (CW), and stabilized soil block masonry walls (SW) stabilized with natural fibers and alginates. These conventional and new materials are evaluated from the point of view of both operational and embodied energy.
Beyond the schools of psychology 2: a digital analysis of psychological review, 1904-1923.
Green, Christopher D; Feinerer, Ingo; Burman, Jeremy T
2014-01-01
In order to better understand the broader trends and points of contention in early American psychology, it is conventional to organize the relevant material in terms of "schools" of psychology: structuralism, functionalism, etc. Although not without value, this scheme marginalizes many otherwise significant figures, and tends to exclude a large number of secondary, but interesting, individuals. In an effort to address these problems, we grouped all the articles that appeared in the second and third decades of Psychological Review into five-year blocks, and then cluster analyzed each block by the articles' verbal similarity to each other. This resulted in a number of significant intellectual "genres" of psychology that are ignored by the usual "schools" taxonomy. It also made "visible" a number of figures who are typically downplayed or ignored in conventional histories of the discipline, and it provided us with an intellectual context in which to understand their contributions. © 2014 Wiley Periodicals, Inc.
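The clustering step can be illustrated with a toy version: bag-of-words cosine similarity plus single-link grouping of articles whose similarity exceeds a threshold (everything here, from the tokenization to the threshold, is a minimal stand-in for the authors' actual text analysis):

```python
import math
from collections import Counter
from itertools import combinations

def cosine(a, b):
    """Cosine similarity of two word-count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_articles(texts, threshold=0.5):
    """Link any two articles whose verbal similarity exceeds the threshold,
    then return the connected components as clusters of article indices."""
    vecs = [Counter(t.lower().split()) for t in texts]
    parent = list(range(len(texts)))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(texts)), 2):
        if cosine(vecs[i], vecs[j]) > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(texts)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Applied to the article corpus of each five-year block, the resulting components play the role of the "genres" the study reports.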
Argueta, Edwin; Shaji, Jeena; Gopalan, Arun; Liao, Peilin; Snurr, Randall Q; Gómez-Gualdrón, Diego A
2018-01-09
Metal-organic frameworks (MOFs) are porous crystalline materials with attractive properties for gas separation and storage. Their remarkable tunability makes it possible to create millions of MOF variations but creates the need for fast material screening to identify promising structures. Computational high-throughput screening (HTS) is a possible solution, but its usefulness is tied to accurate predictions of MOF adsorption properties. Accurate adsorption simulations often require an accurate description of electrostatic interactions, which depend on the electronic charges of the MOF atoms. HTS-compatible methods to assign charges to MOF atoms need to accurately reproduce electrostatic potentials (ESPs) and be computationally affordable, but current methods present an unsatisfactory trade-off between computational cost and accuracy. We illustrate a method to assign charges to MOF atoms based on ab initio calculations on MOF molecular building blocks. A library of building blocks with built-in charges is thus created and used by an automated MOF construction code to create hundreds of MOFs with charges "inherited" from the constituent building blocks. The molecular building block-based (MBBB) charges are similar to REPEAT charges (charges that reproduce ESPs obtained from ab initio calculations on crystallographic unit cells of nanoporous crystals), and thus similar predictions of adsorption loadings, heats of adsorption, and Henry's constants are obtained with either method. The presented results indicate that the MBBB method to assign charges to MOF atoms is suitable for use in computational high-throughput screening of MOFs for applications that involve adsorption of molecules such as carbon dioxide.
Parallel Geospatial Data Management for Multi-Scale Environmental Data Analysis on GPUs
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, J.; Wei, Y.
2013-12-01
As the spatial and temporal resolutions of Earth observatory data and Earth system simulation outputs are getting higher, in-situ and/or post- processing such large amount of geospatial data increasingly becomes a bottleneck in scientific inquires of Earth systems and their human impacts. Existing geospatial techniques that are based on outdated computing models (e.g., serial algorithms and disk-resident systems), as have been implemented in many commercial and open source packages, are incapable of processing large-scale geospatial data and achieve desired level of performance. In this study, we have developed a set of parallel data structures and algorithms that are capable of utilizing massively data parallel computing power available on commodity Graphics Processing Units (GPUs) for a popular geospatial technique called Zonal Statistics. Given two input datasets with one representing measurements (e.g., temperature or precipitation) and the other one represent polygonal zones (e.g., ecological or administrative zones), Zonal Statistics computes major statistics (or complete distribution histograms) of the measurements in all regions. Our technique has four steps and each step can be mapped to GPU hardware by identifying its inherent data parallelisms. First, a raster is divided into blocks and per-block histograms are derived. Second, the Minimum Bounding Boxes (MBRs) of polygons are computed and are spatially matched with raster blocks; matched polygon-block pairs are tested and blocks that are either inside or intersect with polygons are identified. Third, per-block histograms are aggregated to polygons for blocks that are completely within polygons. Finally, for blocks that intersect with polygon boundaries, all the raster cells within the blocks are examined using point-in-polygon-test and cells that are within polygons are used to update corresponding histograms. 
As the task becomes I/O bound after applying spatial indexing and GPU hardware acceleration, we have developed a GPU-based data compression technique by reusing our previous work on Bitplane Quadtree (or BPQ-Tree) based indexing of binary bitmaps. Results have shown that our GPU-based parallel Zonal Statistics technique on 3000+ US counties over 20+ billion NASA SRTM 30 meter resolution Digital Elevation Model (DEM) raster cells has achieved impressive end-to-end runtimes: 101 seconds and 46 seconds on a low-end workstation equipped with an Nvidia GTX Titan GPU using cold and hot cache, respectively; and 60-70 seconds using a single OLCF TITAN computing node and 10-15 seconds using 8 nodes. Our experiment results clearly show the potential of using high-end computing facilities for large-scale geospatial processing.
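The four steps described above can be emulated on the CPU with NumPy, using axis-aligned rectangles as stand-in zones (the real method matches polygon MBRs against blocks and runs GPU point-in-polygon tests; everything below is a didactic sketch):

```python
import numpy as np

def zonal_histograms(raster, zones, block=4, bins=8):
    """Blocked zonal statistics.  Zones are half-open rectangles
    (r0, r1, c0, c1) standing in for polygons."""
    vmax = float(raster.max()) + 1.0
    H, W = raster.shape
    hists = [np.zeros(bins, dtype=int) for _ in zones]
    for r in range(0, H, block):
        for c in range(0, W, block):
            tile = raster[r:r + block, c:c + block]
            # step 1: per-block histogram, computed once and reused
            tile_hist, _ = np.histogram(tile, bins=bins, range=(0.0, vmax))
            for z, (r0, r1, c0, c1) in enumerate(zones):
                # step 2: skip blocks that miss the zone's bounding box
                if (r + tile.shape[0] <= r0 or r >= r1 or
                        c + tile.shape[1] <= c0 or c >= c1):
                    continue
                if (r >= r0 and r + tile.shape[0] <= r1 and
                        c >= c0 and c + tile.shape[1] <= c1):
                    hists[z] += tile_hist     # step 3: block fully inside
                else:                         # step 4: per-cell boundary test
                    rr, cc = np.mgrid[r:r + tile.shape[0],
                                      c:c + tile.shape[1]]
                    keep = (rr >= r0) & (rr < r1) & (cc >= c0) & (cc < c1)
                    h, _ = np.histogram(tile[keep], bins=bins,
                                        range=(0.0, vmax))
                    hists[z] += h
    return hists
```

The GPU version parallelizes each step over blocks (and cells within boundary blocks); the key saving is that interior blocks reuse their precomputed histograms and never touch individual cells.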
Thermal/structural Tailoring of Engine Blades (T/STAEBL). Theoretical Manual
NASA Technical Reports Server (NTRS)
Brown, K. W.; Clevenger, W. B.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.
Enhanced Motor Imagery-Based BCI Performance via Tactile Stimulation on Unilateral Hand.
Shu, Xiaokang; Yao, Lin; Sheng, Xinjun; Zhang, Dingguo; Zhu, Xiangyang
2017-01-01
Brain-computer interface (BCI) has attracted great interest for its effectiveness in assisting disabled people. However, due to poor BCI performance, this technique is still far from daily-life applications. One of the critical issues confronting BCI research is how to enhance BCI performance. This study aimed at improving motor imagery (MI) based BCI accuracy by integrating MI tasks with unilateral tactile stimulation (Uni-TS). The effects were tested on both healthy subjects and stroke patients in a controlled study. Twenty-two healthy subjects and four stroke patients were recruited and randomly divided into a control group and an enhanced group. In the control group, subjects performed two blocks of conventional MI tasks (left hand vs. right hand), with 80 trials in each block. In the enhanced group, subjects also performed two blocks of MI tasks, but constant tactile stimulation was applied on the non-dominant/paretic hand during MI tasks in the second block. We found the Uni-TS significantly enhanced the contralateral cortical activations during MI of the stimulated hand, whereas it had no influence on activation patterns during MI of the non-stimulated hand. The two-class BCI decoding accuracy was significantly increased from 72.5% (MI without Uni-TS) to 84.7% (MI with Uni-TS) in the enhanced group (p < 0.001, paired t-test). Moreover, stroke patients in the enhanced group achieved an accuracy >80% during MI with Uni-TS. This novel approach complements conventional methods for BCI enhancement without increasing source information or the complexity of signal processing. This enhancement via Uni-TS may facilitate clinical applications of MI-BCI.
The Ettention software package.
Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp
2016-02-01
We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building blocks for tomographic reconstruction algorithms. The well-known block-iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and the eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily and thus makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Copyright © 2015 Elsevier B.V. All rights reserved.
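The Kaczmarz method underlying the block-iterative reconstruction can be sketched in its basic row-action form (Ettention's actual implementation is GPU-accelerated and processes projections in blocks; this NumPy version only shows the core projection step):

```python
import numpy as np

def kaczmarz(A, b, sweeps=100, relax=1.0):
    """Row-action Kaczmarz sketch for A x = b: cycle through the rows,
    projecting the iterate onto each row's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            # move x onto the hyperplane {y : A[i] @ y = b[i]}
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

In tomography, each row of A encodes one ray through the volume, so a sweep corresponds to one pass over the measured projections; the relaxation factor damps noise amplification in the inconsistent case.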
Kim, Yong Bok; Lee, Hyeongjin; Kim, Geun Hyung
2016-11-30
Recently, three-dimensional (3D) bioprinting processes for obtaining cell-laden structures have been widely applied because of their ability to fabricate biomimetic complex structures embedded with and without cells. To successfully obtain a cell-laden porous block, the cell-delivering vehicle, the bioink, is one of the most significant factors. Until now, various biocompatible hydrogels (synthetic and natural biopolymers) have been utilized in the cell-printing process, but a bioink satisfying both the biocompatibility and printability requirements needed to achieve a porous structure with reasonable mechanical strength has not yet been established. Here, we propose a printing strategy with optimal conditions, including a safe cross-linking procedure, for obtaining a 3D porous cell block composed of a biocompatible collagen bioink and genipin, a cross-linking agent. To obtain the optimal processing conditions, we modified the 3D printing machine and selected an optimal cross-linking condition (∼1 mM and 1 h) of genipin solution. To show the feasibility of the process, 3D pore-interconnected cell-laden constructs were manufactured using osteoblast-like cells (MG63) and human adipose stem cells (hASCs). Under these processing conditions, a macroscale 3D collagen-based cell block of 21 × 21 × 12 mm³ with over 95% cell viability was obtained. In vitro biological testing of the cell-laden 3D porous structure showed that the embedded cells were sufficiently viable and their proliferation was significantly higher; the cells also exhibited increased osteogenic activities compared to the conventional alginate-based bioink (control). The results indicated that the fabrication process using the collagen bioink would be an innovative platform for designing highly biocompatible and mechanically stable cell blocks.
Piezosurgical osteotomy for harvesting intraoral block bone graft
Lakshmiganthan, Mahalingam; Gokulanathan, Subramanium; Shanmugasundaram, Natarajan; Daniel, Rajkumar; Ramesh, Sadashiva B.
2012-01-01
The use of ultrasonic vibrations for the cutting of bone was first introduced two decades ago. Piezoelectric surgery is a minimally invasive technique that lessens the risk of damage to surrounding soft tissues and important structures such as nerves, vessels, and mucosa. It also reduces damage to osteocytes and permits good survival of bony cells during the harvesting of bone. Grafting with intraoral bone blocks is a good way to reconstruct severe horizontal and vertical bone resorption at future implant sites. The piezosurgery system creates an effective osteotomy with minimal or no trauma to soft tissue, in contrast to conventional surgical burs or saws, and minimizes a patient's psychological stress and fear during osteotomy under local anesthesia. The purpose of this article is to describe the harvesting of intraoral bone blocks using the piezoelectric surgery device. PMID:23066242
Computational predictions of the new Gallium nitride nanoporous structures
NASA Astrophysics Data System (ADS)
Lien, Le Thi Hong; Tuoc, Vu Ngoc; Duong, Do Thi; Thu Huyen, Nguyen
2018-05-01
Nanoporous structure prediction is an emerging area of research because of the advantages such structures offer for a wide range of materials science and technology applications in opto-electronics, environment, sensors, shape-selective catalysis and bio-catalysis, to name just a few. We propose a computationally and technically feasible approach for predicting Gallium nitride nanoporous structures with hollows at the nano scale. The designed porous structures are studied with computations using the density functional tight binding (DFTB) and conventional density functional theory methods, revealing a variety of promising mechanical and electronic properties which can potentially find realistic applications in the future. Their stability is discussed by means of the free energy computed within the lattice-dynamics approach. Our calculations also indicate that all the reported hollow structures are wide band gap semiconductors, in the same fashion as their parent bulk stable phase. The electronic band structures of these nanoporous structures are finally examined in detail.
JADI, MONIKA P.; BEHABADI, BARDIA F.; POLEG-POLSKY, ALON; SCHILLER, JACKIE; MEL, BARTLETT W.
2014-01-01
In pursuit of the goal to understand and eventually reproduce the diverse functions of the brain, a key challenge lies in reverse engineering the peculiar biology-based “technology” that underlies the brain’s remarkable ability to process and store information. The basic building block of the nervous system is the nerve cell, or “neuron,” yet after more than 100 years of neurophysiological study and 60 years of modeling, the information processing functions of individual neurons, and the parameters that allow them to engage in so many different types of computation (sensory, motor, mnemonic, executive, etc.) remain poorly understood. In this paper, we review both historical and recent findings that have led to our current understanding of the analog spatial processing capabilities of dendrites, the major input structures of neurons, with a focus on the principal cell type of the neocortex and hippocampus, the pyramidal neuron (PN). We encapsulate our current understanding of PN dendritic integration in an abstract layered model whose spatially sensitive branch-subunits compute multidimensional sigmoidal functions. Unlike the 1-D sigmoids found in conventional neural network models, multidimensional sigmoids allow the cell to implement a rich spectrum of nonlinear modulation effects directly within their dendritic trees. PMID:25554708
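The layered model can be caricatured in a few lines: branch subunits compute two-dimensional sigmoids, whose summed outputs pass through a somatic output nonlinearity (all gains, thresholds, and the shift-based modulation rule below are illustrative assumptions, not the paper's fitted parameters):

```python
import math

def sigmoid(x, gain=1.0, thresh=0.0):
    return 1.0 / (1.0 + math.exp(-gain * (x - thresh)))

def branch_output(driver, modulator, gain=2.0, thresh=1.0, mod_shift=0.8):
    """A 2-D sigmoidal branch subunit: the modulatory input lowers the
    driver sigmoid's threshold, multiplicatively boosting the driver's
    effect rather than simply adding to it."""
    return sigmoid(driver, gain=gain, thresh=thresh - mod_shift * modulator)

def pyramidal_response(branch_inputs, soma_gain=1.5, soma_thresh=1.0):
    """Two-layer abstraction: sum the sigmoidal branch subunit outputs,
    then apply the somatic output nonlinearity."""
    total = sum(branch_output(d, m) for d, m in branch_inputs)
    return sigmoid(total, gain=soma_gain, thresh=soma_thresh)
```

The contrast with a conventional 1-D unit is visible in `branch_output`: for the same driver input, a nonzero modulator changes the gain of the branch's response curve, the kind of within-dendrite nonlinear modulation the review describes.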
Chen, Rong; Chung, Shin-Ho
2013-01-01
The discovery of new drugs that selectively block or modulate ion channels has great potential to provide new treatments for a host of conditions. One promising avenue revolves around modifying or mimicking certain naturally occurring ion channel modulator toxins. This strategy appears to offer the prospect of designing drugs that are both potent and specific. The use of computational modeling is crucial to this endeavor, as it has the potential to provide lower cost alternatives for exploring the effects of new compounds on ion channels. In addition, computational modeling can provide structural information and theoretical understanding that is not easily derivable from experimental results. In this review, we look at the theory and computational methods that are applicable to the study of ion channel modulators. The first section provides an introduction to various theoretical concepts, including force-fields and the statistical mechanics of binding. We then look at various computational techniques available to the researcher, including molecular dynamics, Brownian dynamics, and molecular docking systems. The latter section of the review explores applications of these techniques, concentrating on pore blocker and gating modifier toxins of potassium and sodium channels. After first discussing the structural features of these channels, and their modes of block, we provide an in-depth review of past computational work that has been carried out. Finally, we discuss prospects for future developments in the field. PMID:23589832
Poggio, Claudio; Pigozzo, Marco; Ceci, Matteo; Scribante, Andrea; Beltrami, Riccardo; Chiesa, Marco
2016-01-01
Background: The purpose of this study was to evaluate the influence of three different luting protocols on shear bond strength of computer aided design/computer aided manufacturing (CAD/CAM) resin nanoceramic (RNC) material to dentin. Materials and Methods: In this in vitro study, 30 disks were milled from RNC blocks (Lava Ultimate/3M ESPE) with CAD/CAM technology. The disks were subsequently cemented to the exposed dentin of 30 recently extracted bovine permanent mandibular incisors. The specimens were randomly assigned into 3 groups of 10 teeth each. In Group 1, disks were cemented using a total-etch protocol (Scotchbond™ Universal Etchant phosphoric acid + Scotchbond Universal Adhesive + RelyX™ Ultimate conventional resin cement); in Group 2, disks were cemented using a self-etch protocol (Scotchbond Universal Adhesive + RelyX™ Ultimate conventional resin cement); in Group 3, disks were cemented using a self-adhesive protocol (RelyX™ Unicem 2 Automix self-adhesive resin cement). All cemented specimens were placed in a universal testing machine (Instron Universal Testing Machine 3343) and submitted to a shear bond strength test to check the strength of adhesion between the two substrates, dentin, and RNC disks. Specimens were stressed at a crosshead speed of 1 mm/min. Data were analyzed with analysis of variance and post-hoc Tukey's test at a level of significance of 0.05. Results: Post-hoc Tukey testing showed that the highest shear strength values (P < 0.001) were reported in Group 2. The lowest data (P < 0.001) were recorded in Group 3. Conclusion: Within the limitations of this in vitro study, conventional resin cements (coupled with etch and rinse or self-etch adhesives) showed better shear strength values compared to self-adhesive resin cements. Furthermore, conventional resin cements used together with a self-etch adhesive reported the highest values of adhesion. PMID:27076822
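The statistical treatment described (one-way ANOVA on shear bond strength by luting group) can be reproduced with a short computation. The group values below are made-up placeholders to exercise the arithmetic, not the study's measurements.

```python
def one_way_anova(groups):
    """Compute the one-way ANOVA F statistic for a list of groups
    (each a list of measurements)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical shear-bond-strength values (MPa) for three luting groups;
# NOT the study's data, just to exercise the computation.
total_etch    = [18.0, 19.0, 20.0]
self_etch     = [19.0, 20.0, 21.0]
self_adhesive = [20.0, 21.0, 22.0]
F = one_way_anova([total_etch, self_etch, self_adhesive])
```

The F statistic would then be compared against the F(df_between, df_within) distribution at the 0.05 significance level, with Tukey's test applied post hoc for pairwise group comparisons.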
SYSTID - A flexible tool for the analysis of communication systems.
NASA Technical Reports Server (NTRS)
Dawson, C. T.; Tranter, W. H.
1972-01-01
Description of the System Time Domain Simulation (SYSTID) computer-aided analysis program which is specifically structured for communication systems analysis. The SYSTID program is user oriented so that very little knowledge of computer techniques and very little programming ability are required for proper application. The program is designed so that the user can go from a system block diagram to an accurate simulation by simply programming a single English language statement for each block in the system. The mathematical and functional models available in the SYSTID library are presented. An example problem is given which illustrates the ease of modeling communication systems. Examples of the outputs available are presented, and proposed improvements are summarized.
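The "one statement per block" style of system description can be illustrated with a hypothetical mini-interpreter. SYSTID itself is not available here; the block names and library below are invented to show the idea of programming a simulation directly from a block diagram.

```python
# A hypothetical mini "block library" in the spirit of SYSTID: each entry
# maps an input sample to an output sample.
BLOCKS = {
    "GAIN(2.0)":    lambda x: 2.0 * x,
    "LIMITER(1.0)": lambda x: max(-1.0, min(1.0, x)),
    "SQUARER":      lambda x: x * x,
}

def simulate(diagram, samples):
    """Run samples through the blocks named in `diagram`, in order,
    mimicking a one-statement-per-block system description."""
    out = []
    for x in samples:
        for name in diagram:
            x = BLOCKS[name](x)
        out.append(x)
    return out

# "Program" the block diagram as a list of English-like statements.
diagram = ["GAIN(2.0)", "LIMITER(1.0)", "SQUARER"]
result = simulate(diagram, [0.2, 0.6, -0.9])
```

The user-facing appeal is the same as described in the abstract: the system is specified block by block, and the simulation engine handles the composition.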
Initial Mechanical Testing of Superalloy Lattice Block Structures Conducted
NASA Technical Reports Server (NTRS)
Krause, David L.; Whittenberger, J. Daniel
2002-01-01
The first mechanical tests of superalloy lattice block structures produced promising results for this exciting new lightweight material system. The testing was performed in-house at NASA Glenn Research Center's Structural Benchmark Test Facility, where small subelement-sized compression and beam specimens were loaded to observe elastic and plastic behavior, component strength levels, and fatigue resistance for hundreds of thousands of load cycles. Current lattice block construction produces a flat panel composed of thin ligaments arranged in a three-dimensional triangulated trusslike structure. Investment casting of lattice block panels has been developed and greatly expands opportunities for using this unique architecture in today's high-performance structures. In addition, advances made in NASA's Ultra-Efficient Engine Technology Program have extended the lattice block concept to superalloy materials. After a series of casting iterations, the nickel-based superalloy Inconel 718 (IN 718, Inco Alloys International, Inc., Huntington, WV) was successfully cast into lattice block panels; this combination offers light weight combined with high strength, high stiffness, and elevated-temperature durability. For tests to evaluate casting quality and configuration merit, small structural compression and bend test specimens were machined from the 5- by 12- by 0.5-in. panels. Linear elastic finite element analyses were completed for several specimen layouts to predict material stresses and deflections under proposed test conditions. The structural specimens were then subjected to room-temperature static and cyclic loads in Glenn's Life Prediction Branch's material test machine. Surprisingly, the test results exceeded analytical predictions: plastic strains greater than 5 percent were obtained, and fatigue lives did not depreciate relative to the base material. 
These benefits were due to the formation of plastic hinges and the redundancies inherent in lattice block construction, which were not considered in the simplified computer models. The fatigue testing proved the value of redundancy, since specimen strength was maintained even after the fracture of one or two ligaments. This ongoing test program is planned to continue through high-temperature testing. Also scheduled for testing are IN 718 lattice block panels with integral face sheets, as well as specimens cast from a higher temperature alloy. The initial testing suggests the value of this technology for large panels under low and moderate pressure loadings and for high-risk, damage-tolerant structures. Potential aeropropulsion uses for lattice blocks include turbine-engine actuated panels, exhaust nozzle flaps, and side panel structures.
Computing Aerodynamic Performance of a 2D Iced Airfoil: Blocking Topology and Grid Generation
NASA Technical Reports Server (NTRS)
Chi, X.; Zhu, B.; Shih, T. I.-P.; Slater, J. W.; Addy, H. E.; Choo, Yung K.; Lee, Chi-Ming (Technical Monitor)
2002-01-01
The ice accreted on airfoils can have enormously complicated shapes with multiple protruded horns and feathers. In this paper, several blocking topologies are proposed and evaluated on their ability to produce high-quality structured multi-block grid systems. A transition-layer grid is introduced to ensure that jaggedness in the ice-surface geometry does not propagate into the domain. This is important for grid-generation methods based on hyperbolic PDEs (partial differential equations) and algebraic transfinite interpolation. A 'thick' wrap-around grid is introduced to ensure that grid lines clustered next to solid walls do not propagate as streaks of tightly packed grid lines into the interior of the domain along block boundaries. For ice shapes that are not too complicated, a method is presented for generating high-quality single-block grids. To demonstrate the usefulness of the methods developed, grids and CFD solutions were generated for two iced airfoils: the NLF0414 airfoil with and without the 623-ice shape and the B575/767 airfoil with and without the 145m-ice shape. To validate the computations, the computed lift coefficients as a function of angle of attack were compared with available experimental data. The ice shapes and the blocking topologies were prepared by NASA Glenn's SmaggIce software. The grid systems were generated by using a four-boundary method based on Hermite interpolation with controls on clustering, orthogonality next to walls, and C continuity across block boundaries. The flow was modeled by the ensemble-averaged compressible Navier-Stokes equations, closed by the shear-stress transport turbulence model in which the integration is to the wall. All solutions were generated by using the NPARC WIND code.
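The algebraic transfinite interpolation mentioned above can be sketched in its simplest (linear, Coons-patch) form; the paper's actual four-boundary method uses Hermite interpolation with clustering and orthogonality controls, which this sketch omits. Given four discretized boundary curves of a block, interior grid points are a boolean-sum blend of the boundaries minus the bilinear corner contribution.

```python
def tfi_grid(bottom, top, left, right):
    """Linear transfinite interpolation (Coons patch): build interior grid
    points from four boundary point lists. `bottom`/`top` have ni points,
    `left`/`right` have nj points; the four corners must agree."""
    ni, nj = len(bottom), len(left)

    def blend(u, v, i, j, c):
        # Boolean-sum blend of the four boundary curves, per coordinate c,
        # minus the doubly counted bilinear corner term.
        return ((1 - v) * bottom[i][c] + v * top[i][c]
                + (1 - u) * left[j][c] + u * right[j][c]
                - ((1 - u) * (1 - v) * bottom[0][c] + u * (1 - v) * bottom[-1][c]
                   + (1 - u) * v * top[0][c] + u * v * top[-1][c]))

    grid = []
    for j in range(nj):
        v = j / (nj - 1)
        row = []
        for i in range(ni):
            u = i / (ni - 1)
            row.append((blend(u, v, i, j, 0), blend(u, v, i, j, 1)))
        grid.append(row)
    return grid

# 3x3 grid on the unit square (boundaries listed as (x, y) points)
bottom = [(0, 0), (0.5, 0), (1, 0)]
top    = [(0, 1), (0.5, 1), (1, 1)]
left   = [(0, 0), (0, 0.5), (0, 1)]
right  = [(1, 0), (1, 0.5), (1, 1)]
g = tfi_grid(bottom, top, left, right)
```

For curved or jagged boundaries (such as an iced airfoil surface), the same blend applies, which is exactly why the transition-layer grid matters: any jaggedness in a boundary curve is otherwise blended into the interior.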
Investigation of Kevlar fabric-based materials for use with inflatable structures
NASA Technical Reports Server (NTRS)
Niccum, R. J.; Munson, J. B.; Rueter, L. L.
1977-01-01
Design, manufacture and testing of laminated and coated composite materials incorporating a structural matrix of Kevlar are reported. The practicality of using Kevlar in aerostat materials is demonstrated, and data are provided on practical weaves, lamination and coating particulars, rigidity, strength, weight, elastic coefficients, abrasion resistance, crease effects, peel strength, blocking tendencies, helium permeability, and fabrication techniques. Properties of the Kevlar-based materials are compared with conventional Dacron-reinforced counterparts. A comprehensive test and qualification program is discussed, and considerable quantitative biaxial tensile and shear test data are provided.
Fully automated three-dimensional microscopy system
NASA Astrophysics Data System (ADS)
Kerschmann, Russell L.
2000-04-01
Tissue-scale structures such as vessel networks are imaged at micron resolution with the Virtual Tissue System (VT System). VT System imaging of cubic millimeters of tissue and other material extends the capabilities of conventional volumetric techniques such as confocal microscopy, and allows for the first time the integrated 2D and 3D analysis of important tissue structural relationships. The VT System eliminates the need for glass slide-mounted tissue sections and instead captures images directly from the surface of a block containing a sample. Tissues are en bloc stained with fluorochrome compounds, embedded in an optically conditioned polymer that suppresses image signals from deep within the block, and serially sectioned for imaging. Thousands of fully registered 2D images are automatically captured digitally to completely convert tissue samples into blocks of high-resolution information. The resulting multi-gigabyte data sets constitute the raw material for precision visualization and analysis. Cellular function may be seen in a larger anatomical context. VT System technology makes tissue metrics, accurate cell enumeration, and cell-cycle analyses possible while preserving the full histologic setting.
Manipulating the ABCs of self-assembly via low-χ block polymer design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Alice B.; Bates, Christopher M.; Lee, Byeongdu
2017-06-06
Block polymer self-assembly typically translates molecular chain connectivity into mesoscale structure by exploiting incompatible blocks with large interaction parameters (χij). In this report, we demonstrate that the converse approach, encoding low-χ interactions in ABC bottlebrush triblock terpolymers (χAC ≲ 0), promotes organization into a unique mixed-domain lamellar morphology which we designate LAMP. Transmission electron microscopy indicates that LAMP exhibits ACBC domain connectivity, in contrast to conventional three-domain lamellae (LAM3) with ABCB periods. Complementary small-angle X-ray scattering experiments reveal a strongly decreasing domain spacing with increasing total molar mass. Self-consistent field theory reinforces these observations and predicts that LAMP is thermodynamically stable below a critical χAC, above which LAM3 emerges. Both experiments and theory expose close analogies to ABA triblock copolymer phase behavior, collectively suggesting that low-χ interactions between chemically similar or distinct blocks intimately influence self-assembly. Furthermore, these conclusions provide new opportunities in block polymer design with potential consequences spanning all self-assembling soft materials.
Rheological Design of Sustainable Block Copolymers
NASA Astrophysics Data System (ADS)
Mannion, Alexander M.
Block copolymers are extremely versatile materials that microphase separate to give rise to a rich array of complex behavior, making them the ideal platform for the development of rheologically sophisticated soft matter. In line with growing environmental concerns of conventional plastics from petroleum feedstocks, this work focuses on the rheological design of sustainable block copolymers--those derived from renewable sources and are degradable--based on poly(lactide). Although commercially viable, poly(lactide) has a number of inherent deficiencies that result in a host of challenges that require both creative and practical solutions that are cost-effective and amenable to large-scale production. Specifically, this dissertation looks at applications in which both shear and extensional rheology dictate performance attributes, namely chewing gum, pressure-sensitive adhesives, and polymers for blown film extrusion. Structure-property relationships in the context of block polymer architecture, polymer composition, morphology, and branching are explored in depth. The basic principles and fundamental findings presented in this thesis are applicable to a broader range of substances that incorporate block copolymers for which rheology plays a pivotal role.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.
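Tornado-family codes are built from XOR parity constraints over data packets. The sketch below shows only that shared building block, single-erasure recovery from one XOR parity packet; it is not the multi-level tTN construction or the authors' cTN code.

```python
def xor_parity(packets):
    """Parity packet: bytewise XOR of equal-length data packets."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

def recover(received, parity):
    """Recover one erased packet (marked None) from the XOR parity:
    XOR-ing all surviving packets with the parity cancels everything
    except the missing packet."""
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) == 1, "plain XOR parity corrects exactly one erasure"
    survivors = [p for p in received if p is not None]
    return xor_parity(survivors + [parity])

data = [b"ab", b"cd", b"ef"]
parity = xor_parity(data)
lost = [b"ab", None, b"ef"]       # middle packet erased in transit
restored = recover(lost, parity)
```

Tornado codes layer many such sparse parity constraints in levels so that iterative decoding can recover multiple erasures; the cTN proposal replaces that multi-level structure with a convolutional one.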
Topics in the optimization of millimeter-wave mixers
NASA Technical Reports Server (NTRS)
Siegel, P. H.; Kerr, A. R.; Hwang, W.
1984-01-01
A user oriented computer program for the analysis of single-ended Schottky diode mixers is described. The program is used to compute the performance of a 140 to 220 GHz mixer and excellent agreement with measurements at 150 and 180 GHz is obtained. A sensitivity analysis indicates the importance of various diode and mount characteristics on the mixer performance. A computer program for the analysis of varactor diode multipliers is described. The diode operates in either the reverse biased varactor mode or with substantial forward current flow where the conversion mechanism is predominantly resistive. A description and analysis of a new H-plane rectangular waveguide transformer is reported. The transformer is made quickly and easily in split-block waveguide using a standard slitting saw. It is particularly suited for use in the millimeter-wave band, replacing conventional electroformed stepped transformers. A theoretical analysis of the transformer is given and good agreement is obtained with measurements made at X-band.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral-imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found efficient under severe noise conditions.
Resonant soft x-ray GISAXS on block copolymer films
NASA Astrophysics Data System (ADS)
Wang, Cheng; Araki, T.; Watts, B.; Ade, H.; Hexemer, A.; Park, S.; Russell, T. P.; Schlotter, W. F.; Stein, G. E.; Tang, C.; Kramer, E. J.
2008-03-01
Ordered block copolymer thin films may have important applications in modern device fabrication. Current characterization methods such as conventional GISAXS have fixed electron density contrast that can be overwhelmed by surface scattering. However, soft x-rays have longer wavelength, energy-dependent contrast, and tunable penetration, making resonant GISAXS a very promising tool for probing nanostructured polymer thin films. Our preliminary investigation was performed using PS-b-P2VP block copolymer films on beamline 5-2 at SSRL and beamline 6.3.2 at the ALS, LBNL. The contrast/sensitivity of the scattering pattern varies significantly with photon energy close to the C K-edge (˜290 eV). Also, higher-order peaks are readily observed, indicating a hexagonally packed structure in the sample. Compared to the hard x-ray GISAXS data for the same system, it is clear that resonant GISAXS yields richer data and better resolution. Beyond the results on the A-B diblock copolymers, results on ABC block copolymers are especially interesting.
Experimental Investigations on Axially and Eccentrically Loaded Masonry Walls
NASA Astrophysics Data System (ADS)
Keshava, Mangala; Raghunath, Seshagiri Rao
2017-12-01
In India, un-reinforced masonry walls are often used as main structural components in load-bearing structures. The Indian code on masonry accounts for the reduction in strength of walls by using stress reduction factors in its design philosophy. This code was introduced in 1987 and reaffirmed in 1995. The present study investigates the use of these factors for south Indian masonry. Also, with the gaining popularity of block work construction, the aim of this study was to find out the suitability of the factors given in the Indian code for block work masonry. Normally, the load carrying capacity of masonry walls can be assessed in three ways, namely, (1) tests on masonry constituents, (2) tests on masonry prisms, and (3) tests on full-scale wall specimens. Tests on bricks/blocks, cement-sand mortar, brick/block masonry prisms, and 14 full-scale brick/block masonry walls formed the experimental investigation. The behavior of the walls was investigated under varying slenderness and eccentricity ratios. Hollow concrete blocks, normally used as in-fill masonry, can be considered as load-bearing elements, as their load carrying capacity was found to be high when compared to conventional brick masonry. Higher slenderness and eccentricity ratios drastically reduced the strength capacity of south Indian brick masonry walls. The reduction in strength due to slenderness and eccentricity is presented in the form of stress reduction factors in the Indian code. The factors obtained through experiments on eccentrically loaded brick masonry walls were lower, while those for brick/block masonry under axial loads were higher, than the values indicated in the Indian code. Also, the reduction in strength is different for brick and block work masonry, thus indicating the need for separate stress reduction factors for these two masonry materials.
Analysis of Helical Waveguide.
1985-12-23
Keywords: tube efficiency, helix structure, backward wave oscillation, gain. ... ANALYSIS OF HELICAL WAVEGUIDE. I. INTRODUCTION. High power (~10 kW) and broadband ...systems. The frequency range of interest is 60-100 GHz. In this frequency range, the conventional slow wave circuits such as klystrons and TWTs have
A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems
NASA Astrophysics Data System (ADS)
Gambolati, Giuseppe; Teatini, Pietro
1996-01-01
A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
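The block Gauss-Seidel / block overrelaxation strategy described above can be sketched on a small linear system. This is a generic illustration of the numerical scheme (with made-up matrix data), not the paper's aquifer model: each sweep solves one block's diagonal sub-system exactly while freezing the other blocks at their latest values, then overrelaxes the update.

```python
def solve_dense(M, r):
    """Tiny Gaussian elimination with partial pivoting, to solve each
    block's diagonal sub-system exactly."""
    n = len(r)
    M = [row[:] for row in M]
    r = r[:]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        r[c], r[p] = r[p], r[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            for j in range(c, n):
                M[k][j] -= f * M[c][j]
            r[k] -= f * r[c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def block_sor(A, b, blocks, omega=1.2, sweeps=200):
    """Block Gauss-Seidel with overrelaxation: `blocks` is a list of index
    lists partitioning the unknowns (in the paper, one block per aquifer).
    omega = 1 recovers plain block Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for blk in blocks:
            # Right-hand side for this block with the other blocks frozen
            rhs = [b[i] - sum(A[i][j] * x[j] for j in range(n) if j not in blk)
                   for i in blk]
            sub = [[A[i][j] for j in blk] for i in blk]
            y = solve_dense(sub, rhs)
            for k, i in enumerate(blk):
                x[i] += omega * (y[k] - x[i])   # overrelaxed update
    return x

# Small symmetric positive-definite test system with exact solution (1,1,1,1)
A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
b = [5.0, 6.0, 6.0, 5.0]
x = block_sor(A, b, blocks=[[0, 1], [2, 3]])
```

The block partition mirrors the geological layering: coupling between blocks enters only through the frozen terms in each block's right-hand side, which is why the geology "naturally suggests" the iteration.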
Cloud computing-based TagSNP selection algorithm for human genome data.
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-05
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used.
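The core idea of haplotype-block identification, extending a block of SNP columns while the samples exhibit only a few distinct haplotypes, can be sketched with a hypothetical greedy scheme. This is not the paper's combinatorial or MapReduce algorithm, just an illustration of the block criterion.

```python
def haplotype_blocks(samples, max_haps=3):
    """Greedily partition SNP columns into blocks: extend the current block
    while the set of distinct haplotypes it induces across all samples
    stays small (<= max_haps). `samples` is a list of equal-length allele
    strings (assumes each single column has <= max_haps alleles).
    Returns half-open (start, end) column ranges."""
    n_sites = len(samples[0])
    blocks, start = [], 0
    for end in range(1, n_sites + 1):
        distinct = {s[start:end] for s in samples}
        if len(distinct) > max_haps:
            blocks.append((start, end - 1))  # close block before violation
            start = end - 1
    blocks.append((start, n_sites))
    return blocks

# Four sample haplotypes over 4 SNP sites: the last column breaks the block
blocks = haplotype_blocks(["0011", "0010", "1111", "0000"])
```

In the MapReduce-parallelized version described, such per-region computations are distributed across mappers over chromosome segments, which is why the speedup scales with the number of processors.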
NASA Astrophysics Data System (ADS)
Khan, Akhtar Nawaz
2017-11-01
Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities due to the wavelength conversion at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities due to the wavelength continuity constraint and unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks in which the path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum-hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignments. Finally, numerical results are computed in different network scenarios which are approximately equal to the values obtained from simulations.
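The last-fit-first idea under the wavelength continuity constraint can be sketched directly. The data structures below (capacity and in-use maps keyed by link name) are invented for the illustration; the abstract only specifies the assignment rule itself.

```python
def last_fit_first(path_links, capacities, in_use):
    """Last-fit-first wavelength assignment under the wavelength continuity
    constraint: a lightpath must use the SAME wavelength index on every
    link of its route, and link l only offers indexes 0..capacities[l]-1.
    Pick the free common wavelength with the HIGHEST index, so that low
    indexes stay available for longer (multi-hop) routes."""
    common = min(capacities[l] for l in path_links)   # bottleneck link
    for w in range(common - 1, -1, -1):               # highest index first
        if all(w not in in_use[l] for l in path_links):
            for l in path_links:
                in_use[l].add(w)
            return w
    return None   # no common free wavelength: request is blocked

# Heterogeneous capacities: link AB has 4 wavelengths, BC only 2
caps = {"AB": 4, "BC": 2}
use = {"AB": set(), "BC": set()}
w1 = last_fit_first(["AB", "BC"], caps, use)   # bottleneck BC limits to {0,1}
w2 = last_fit_first(["AB", "BC"], caps, use)
w3 = last_fit_first(["AB", "BC"], caps, use)   # BC exhausted: blocked
```

Note how the two-hop route never touches wavelengths 2 and 3 on link AB, which remain free for single-hop requests, the behavior the scheme exploits to lower blocking in heterogeneous networks.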
2014-05-01
fusion, space and astrophysical plasmas, but still the general picture can be presented quite well with the fluid approach [6, 7]. The microscopic...purpose computing CPU for algorithms where processing of large blocks of data is done in parallel. The reason for that is the GPU's highly effective...parallel structure. Most of the image and video processing computations involve heavy matrix and vector operations over large amounts of data and
Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas
2016-04-01
Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods as well as a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within a parallel supercomputing environment. The Message Passing Interface (MPI) is used to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. mpiWrapper can be used to launch conventional Linux applications without the need to modify their original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
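The manager/worker pattern described, a pool of subtasks dispatched to workers that pull the next item when free, can be sketched without MPI. The version below substitutes a thread-safe queue for MPI message passing and is only a stand-in for the pattern, not the mpiWrapper tool itself (which runs across supercomputer nodes and handles resubmission on node failure).

```python
import queue
import threading

def run_task_farm(tasks, worker_fn, n_workers=4):
    """Simplified manager/worker task farm: a shared queue stands in for
    MPI point-to-point messaging; each worker repeatedly pulls a subtask
    and records its result until the queue is drained."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = {}, threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return                      # no more subtasks: worker exits
            r = worker_fn(t)
            with lock:
                results[t] = r              # report result back to "manager"

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Ten independent subtasks processed by three workers
out = run_task_farm(range(10), lambda x: x * x, n_workers=3)
```

The essential property is the same as in the text: subtasks are independent, so throughput scales with the number of workers, and a dedicated management path (here the queue; in mpiWrapper, a communication thread per node) keeps dispatch from blocking computation.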
Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next generation computational techniques. With evident limitations of conventional grid based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention by researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various types of structure mechanics and fracture mechanics applications like bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis and some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of integrals in the domain, meshless methods are computationally expensive as compared to conventional mesh based methods. Some improved versions of original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
A project management system for the X-29A flight test program
NASA Technical Reports Server (NTRS)
Stewart, J. F.; Bauer, C. A.
1983-01-01
The project-management system developed for NASA's participation in the X-29A aircraft development program is characterized from a theoretical perspective, as an example of a system appropriate to advanced, highly integrated technology projects. System-control theory is applied to the analysis of classical project-management techniques and structures, which are found to be of closed-loop multivariable type; and the effects of increasing project complexity and integration are evaluated. The importance of information flow, sampling frequency, information holding, and delays is stressed. The X-29A system is developed in four stages: establishment of overall objectives and requirements, determination of information processes (block diagrams) definition of personnel functional roles and relationships, and development of a detailed work-breakdown structure. The resulting system is shown to require a greater information flow to management than conventional methods. Sample block diagrams are provided.
Monodisperse Block Copolymer Particles with Controllable Size, Shape, and Nanostructure
NASA Astrophysics Data System (ADS)
Shin, Jae Man; Kim, Yongjoo; Kim, Bumjoon; PNEL Team
Shape-anisotropic particles are an important class of novel colloidal building blocks because their functionality is more strongly governed by their shape, size, and nanostructure than that of conventional spherical particles. Recently, a facile strategy for producing non-spherical polymeric particles by interfacial engineering has received significant attention. However, a uniform size distribution of particles together with controlled shape and nanostructure has not yet been achieved. Here, we introduce a versatile system for producing monodisperse BCP particles with controlled size, shape, and morphology. Polystyrene-b-polybutadiene (PS-b-PB) self-assembled into either onion-like or striped ellipsoid particles, where the final structure is governed by the amount of sodium dodecyl sulfate (SDS) surfactant adsorbed at the particle/surrounding interface. Further control of molecular weight and particle size enabled fine-tuning of the aspect ratio of the ellipsoid particles. The underlying physics of the free energy of morphology formation and the entropic penalty associated with bending BCP chains strongly affect particle structure.
Fault-tolerant computer study. [logic designs for building block circuits
NASA Technical Reports Server (NTRS)
Rennels, D. A.; Avizienis, A. A.; Ercegovac, M. D.
1981-01-01
A set of building block circuits is described which can be used with commercially available microprocessors and memories to implement fault tolerant distributed computer systems. Each building block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self checking computer module with self contained input output and interfaces to redundant communications buses. Fault tolerance is achieved by connecting self checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology which led to the definition of the building block circuits are discussed.
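The fault-masking idea behind connecting self-checking modules into a redundant network can be illustrated with a minimal majority voter over redundant module outputs; this is a hypothetical sketch, not the actual logic of the building-block circuits:

```python
from collections import Counter

def vote(outputs):
    """Majority vote over redundant module outputs. A tie or empty
    input is treated as an uncorrectable fault (returns None)."""
    if not outputs:
        return None
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

# One failed module is outvoted by the two healthy ones.
print(vote([42, 42, 7]))   # 42
print(vote([1, 2, 3]))     # None (no majority, so a fault is flagged)
```

In the paper's scheme the analogous decision is made in hardware, with backup buses and spare modules switched in when a self-checking module flags itself as failed.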
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
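The claimed method can be simulated in miniature: each node lowers its own power when it begins the blocking operation, and restores it once all nodes have begun. Below, a thread barrier stands in for the blocking collective and a dictionary stands in for hardware power states; everything here is illustrative:

```python
import threading

N = 4
power = {i: "full" for i in range(N)}
lock = threading.Lock()

# The barrier releases every node once all N have begun the blocking op.
barrier = threading.Barrier(N)

def node(rank):
    # Entering the blocking operation: reduce this node's hardware power.
    with lock:
        power[rank] = "reduced"
    barrier.wait()          # blocking collective; all nodes must arrive
    # All nodes have now begun the operation, so power can be restored.
    with lock:
        power[rank] = "full"

threads = [threading.Thread(target=node, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(power)  # every node is back at "full" power
```

The point of the patent is that a node sitting in a blocking call does no useful work, so the wait itself is an opportunity to cut power.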
NASA Technical Reports Server (NTRS)
Mangalgiri, P. D.; Prabhakaran, R.
1986-01-01
An algorithm for vectorized computation of stiffness matrices of an 8-noded isoparametric hexahedron element for geometric nonlinear analysis was developed. This was used in conjunction with the earlier 2-D program GAMNAS to develop the new program NAS3D for geometric nonlinear analysis. A conventional, modified Newton-Raphson process is used for the nonlinear analysis. New schemes for the computation of stiffness and strain energy release rates are presented. The organization of the program is explained and some results on four sample problems are given. The study of CPU times showed that savings by a factor of 11 to 13 were achieved when vectorized computation was used for the stiffness instead of the conventional scalar one. Finally, the scheme of inputting data is explained.
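The payoff of vectorizing element stiffness computation can be sketched by forming many element matrices at once instead of one at a time. The matrices below are random stand-ins, not the actual 8-noded hexahedron formulation; a real B matrix would come from shape-function derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elem, n_dof = 200, 24          # e.g. 8-node hexahedron, 3 DOF per node
B = rng.standard_normal((n_elem, 6, n_dof))   # strain-displacement matrices
D = rng.standard_normal((6, 6))
D = D @ D.T                       # symmetric material matrix (stand-in)

# Scalar style: one element stiffness K_e = B_e^T D B_e at a time.
K_loop = np.empty((n_elem, n_dof, n_dof))
for e in range(n_elem):
    K_loop[e] = B[e].T @ D @ B[e]

# Vectorized: all element stiffness matrices in one contraction.
K_vec = np.einsum('eia,ij,ejb->eab', B, D, B)

print(np.allclose(K_loop, K_vec))  # True
```

The single contraction exposes the whole workload to the hardware at once, which is the same idea the paper exploits on a vector machine.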
Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core
NASA Astrophysics Data System (ADS)
Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.
2009-12-01
One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order in a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparently with respect to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model’s dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
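In a toy setting, the vertical scan reduces to ordering blocks by their physical bottom coordinate and carrying a running prefix sum across them, regardless of the order in which the blocks sit in memory. This is a sketch of the idea only, not ABLCarT's implementation:

```python
def vertical_scan(blocks):
    """Each "block" owns a slice of one vertical column as a
    (bottom_coordinate, cell_values) pair. Accumulate bottom-to-top
    in physical order, whatever the storage order of the blocks."""
    total, out = 0.0, {}
    for bottom, cells in sorted(blocks, key=lambda b: b[0]):
        partial = []
        for v in cells:
            total += v           # running integral along the column
            partial.append(total)
        out[bottom] = partial
    return out

# Blocks arrive out of order, as they might from different processors.
blocks = [(10.0, [3.0, 4.0]), (0.0, [1.0, 2.0])]
print(vertical_scan(blocks))  # {0.0: [1.0, 3.0], 10.0: [6.0, 10.0]}
```

The hard part the paper addresses, and this sketch ignores, is doing the same thing when consecutive blocks live on different processors and at different refinement levels.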
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 × 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves a significant reduction in computation, but the benefit is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm reduces total encoding time by an average of 27% compared to the original zero-block decision algorithm.
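The quantity being counted, 4 × 4 blocks whose transformed and quantized residual is entirely zero, looks roughly like the sketch below. A floating-point DCT stands in for H.264's exact integer transform, and the quantization step is illustrative; the paper's contribution is precisely to avoid this direct DCT/Q computation:

```python
import numpy as np

def dct4_matrix():
    # Orthonormal 4-point DCT-II basis (the real H.264 codec uses an
    # exact integer approximation; this float version is a stand-in).
    k = np.arange(4)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / 8)
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / 4)

def count_zero_blocks(residual, qstep):
    """Count 4x4 blocks whose quantized DCT coefficients are all zero."""
    C = dct4_matrix()
    h, w = residual.shape
    zeros = 0
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            coeff = C @ residual[y:y+4, x:x+4] @ C.T
            if np.all(np.round(coeff / qstep) == 0):
                zeros += 1
    return zeros

flat = np.zeros((8, 8))     # a flat residual quantizes to all-zero blocks
flat[0, 0] = 100.0          # one strong sample survives quantization
print(count_zero_blocks(flat, qstep=8.0))  # 3
```

Many zero-blocks indicate a well-predicted (semi-stationary) macroblock, which is what lets the mode decision be cut short.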
Ring system-based chemical graph generation for de novo molecular design
NASA Astrophysics Data System (ADS)
Miyao, Tomoyuki; Kaneko, Hiromasa; Funatsu, Kimito
2016-05-01
Generating chemical graphs in silico by combining building blocks is important and fundamental in virtual combinatorial chemistry. A premise in this area is that generated structures should be irredundant as well as exhaustive. In this study, we develop structure generation algorithms regarding combining ring systems as well as atom fragments. The proposed algorithms consist of three parts. First, chemical structures are generated through a canonical construction path. During structure generation, ring systems can be treated as reduced graphs having fewer vertices than those in the original ones. Second, diversified structures are generated by a simple rule-based generation algorithm. Third, the number of structures to be generated can be estimated with adequate accuracy without actual exhaustive generation. The proposed algorithms were implemented in structure generator Molgilla. As a practical application, Molgilla generated chemical structures mimicking rosiglitazone in terms of a two dimensional pharmacophore pattern. The strength of the algorithms lies in simplicity and flexibility. Therefore, they may be applied to various computer programs regarding structure generation by combining building blocks.
Bodine, M.W.
1987-01-01
The FORTRAN 77 computer program CLAYFORM apportions the constituents of a conventional chemical analysis of a silicate mineral into a user-selected structure formula. If requested, such as for a clay mineral or other phyllosilicate, the program distributes the structural formula components into appropriate default or user-specified structural sites (tetrahedral, octahedral, interlayer, hydroxyl, and molecular water sites), and for phyllosilicates calculates the layer (tetrahedral, octahedral, and interlayer) charge distribution. The program also creates data files of entered analyses for subsequent reuse. © 1987.
NASA Technical Reports Server (NTRS)
1978-01-01
The antenna shown is the new, multiple-beam, Unattended Earth Terminal, located at COMSAT Laboratories in Clarksburg, Maryland. Seemingly simple, it is actually a complex structure capable of maintaining contact with several satellites simultaneously (conventional Earth station antennas communicate with only one satellite at a time). In developing the antenna, COMSAT Laboratories used NASTRAN, NASA's structural analysis computer program, together with BANDIT, a companion program. The computer programs were used to model several structural configurations and determine the most suitable. The speed and accuracy of the computerized design analysis afforded appreciable savings in time and money.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
Multigrid Methods for the Computation of Propagators in Gauge Fields
NASA Astrophysics Data System (ADS)
Kalkreuter, Thomas
Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18^4, conjugate gradient is superior.
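The multigrid principle at work here, smoothing plus a coarse-grid correction, can be shown on the free (no gauge field) 1-D Laplace problem. The paper's gauge-covariant, ground-state-projection kernels generalize the plain full-weighting restriction used in this sketch:

```python
import numpy as np

def laplacian(n):
    # 1-D Dirichlet Laplacian stencil [-1, 2, -1] on n interior points.
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def two_grid(f, u, sweeps=3):
    """One two-grid cycle: pre-smooth, restrict the residual, solve
    the coarse problem exactly, interpolate the correction, post-smooth."""
    n = len(f)
    A = laplacian(n)
    for _ in range(sweeps):                      # weighted-Jacobi smoothing
        u = u + (2 / 3) * 0.5 * (f - A @ u)
    r = f - A @ u
    nc = (n - 1) // 2
    # Full-weighting restriction of the residual to the coarse grid.
    rc = np.array([(r[2*i] + 2*r[2*i+1] + r[2*i+2]) / 4 for i in range(nc)])
    ec = np.linalg.solve(laplacian(nc) / 4, rc)  # coarse grid has 2h spacing
    e = np.zeros(n)                              # linear interpolation back
    for i in range(nc):
        e[2*i] += ec[i] / 2
        e[2*i+1] += ec[i]
        e[2*i+2] += ec[i] / 2
    u = u + e
    for _ in range(sweeps):                      # post-smoothing
        u = u + (2 / 3) * 0.5 * (f - A @ u)
    return u

n = 15
u_true = np.random.default_rng(0).standard_normal(n)
u1 = two_grid(laplacian(n) @ u_true, np.zeros(n))
print(np.linalg.norm(u1 - u_true) < 0.5 * np.linalg.norm(u_true))  # True
```

The smoother kills high-frequency error and the coarse solve kills smooth error; in gauge fields the catch, which the paper addresses, is that "smooth" must be defined covariantly.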
Aeroelasticity of wing and wing-body configurations on parallel computers
NASA Technical Reports Server (NTRS)
Byun, Chansup
1995-01-01
The objective of this research is to develop computationally efficient methods for solving aeroelasticity problems on parallel computers. Both uncoupled and coupled methods are studied in this research. For the uncoupled approach, the conventional U-g method is used to determine the flutter boundary. The generalized aerodynamic forces required are obtained by the pulse transfer-function analysis method. For the coupled approach, the fluid-structure interaction is obtained by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures. This capability will significantly impact many aerospace projects of national importance such as Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical at the transonic region. This research effort will have direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.
InChIKey collision resistance: an experimental testing
2012-01-01
InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (collision). The appearance of collision itself does not compromise the signature as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, an exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations. PMID:23256896
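The theoretical estimates referred to are birthday-bound collision counts: among k uniformly random hash values drawn from a space of size N, the expected number of colliding pairs is approximately k(k-1)/2N. A minimal sketch (the hash-space size below is illustrative, not InChIKey's actual block encoding):

```python
def expected_collisions(k, space):
    """Birthday estimate: expected number of colliding pairs among k
    uniformly random hash values drawn from `space` possibilities."""
    return k * (k - 1) / (2 * space)

# 2**26 items (the 67,108,864 Spongistatin I stereoisomers) hashed into
# an illustrative space of 2**75 values: collisions are very unlikely.
print(expected_collisions(2 ** 26, 2 ** 75))  # ~6e-08 expected pairs
```

Comparing observed collision counts against this kind of estimate is exactly the test the paper performs, per block, on real datasets.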
InChIKey collision resistance: an experimental testing.
Pletnev, Igor; Erin, Andrey; McNaught, Alan; Blinov, Kirill; Tchekhovskoi, Dmitrii; Heller, Steve
2012-12-20
InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (collision). The appearance of collision itself does not compromise the signature as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, an exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations.
Community Seismic Network (CSN)
NASA Astrophysics Data System (ADS)
Clayton, R. W.; Heaton, T. H.; Kohler, M. D.; Chandy, M.; Krause, A.
2010-12-01
In collaboration with computer science and earthquake engineering, we are developing a dense network of low-cost accelerometers that send their data via the Internet to a cloud-based center. The goal is to make block-by-block measurements of ground shaking in urban areas, which will provide emergency response information in the case of large earthquakes, and an unprecedented high-frequency seismic array to study structure and the earthquake process with moderate shaking. When deployed in high-rise buildings they can be used to monitor the state of health of the structure. The sensors are capable of a resolution of approximately 80 micro-g, connect via USB ports to desktop computers, and cost about $100 each. The network will adapt to its environment by using network-wide machine learning to adjust the picking sensitivity. We are also looking into using other motion sensing devices such as cell phones. For a pilot project, we plan to deploy more than 1000 sensors in the greater Pasadena area. The system is easily adaptable to other seismically vulnerable urban areas.
Recent enhancements to the GRIDGEN structured grid generation system
NASA Technical Reports Server (NTRS)
Steinbrenner, John P.; Chawner, John R.
1992-01-01
Significant enhancements are being implemented into the GRIDGEN3D, multiple block, structured grid generation software. Automatic, point-to-point, interblock connectivity will be possible through the addition of the domain entity to GRIDBLOCK's block construction process. Also, the unification of GRIDGEN2D and GRIDBLOCK has begun with the addition of edge grid point distribution capability to GRIDBLOCK. The geometric accuracy of surface grids and the ease with which databases may be obtained is being improved by adding support for standard computer-aided design formats (e.g., PATRAN Neutral and IGES files). Finally, volume grid quality was improved through addition of new SOR algorithm features and the new hybrid control function type to GRIDGEN3D.
CFD Methods and Tools for Multi-Element Airfoil Analysis
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; George, Michael W. (Technical Monitor)
1995-01-01
This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, shortcomings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary layer coupling methods, and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined, one which uses multi-block patched grids, the other uses overset chimera grids. Turbulence and transition modeling will be discussed.
High quality tissue miniarray technique using a conventional TV/radio telescopic antenna.
Elkablawy, Mohamed A; Albasri, Abdulkader M
2015-01-01
The tissue microarray (TMA) is widely accepted as a fast and cost-effective research tool for in situ tissue analysis in modern pathology. However, the current automated and manual TMA techniques have some drawbacks restricting their productivity. Our study aimed to introduce an improved manual tissue miniarray (TmA) technique that is simple and readily applicable to a broad range of tissue samples. In this study, a conventional TV/radio telescopic antenna was used to punch tissue cores manually from donor paraffin-embedded tissue blocks which were pre-incubated at 40 °C. The cores were manually transferred, organized, and attached to a standard block mould, and filled with liquid paraffin to construct TmA blocks without any use of recipient paraffin blocks. By using a conventional TV/radio antenna, it was possible to construct TmA paraffin blocks with variable formats of array size and number (2-mm x 42, 2.5-mm x 30, 3-mm x 24, 4-mm x 20 and 5-mm x 12 cores). Up to 2-mm x 84 cores could be mounted and stained on a standard microscopic slide by cutting two sections from two different blocks and mounting them beside each other. The technique was simple and caused minimal damage to the donor blocks. H and E and immunostained slides showed well-defined tissue morphology and array configuration. This technique is easy to reproduce, quick, inexpensive, and creates uniform blocks with abundant tissue without specialized equipment. It was found to improve the stability of the cores within the paraffin block and to prevent losses during cutting and immunostaining.
Space Shuttle Communications Coverage Analysis for Thermal Tile Inspection
NASA Technical Reports Server (NTRS)
Kroll, Quin D.; Hwu, Shian U.; Upanavage, Matthew; Boster, John P.; Chavez, Mark A.
2009-01-01
The space shuttle ultra-high frequency Space-to-Space Communication System has to provide adequate communication coverage for astronauts who are performing thermal tile inspection and repair on the underside of the space shuttle orbiter (SSO). Careful planning and quantitative assessment are necessary to ensure successful system operations and mission safety in this work environment. This study assesses communication systems performance for astronauts who are working in the underside, non-line-of-sight shadow region on the space shuttle. All of the space shuttle and International Space Station (ISS) transmitting antennas are blocked by the SSO structure. To ensure communication coverage at planned inspection worksites, the signal strength and link margin between the SSO/ISS antennas and the extravehicular activity astronauts, whose line-of-sight is blocked by vehicle structure, was analyzed. Investigations were performed using rigorous computational electromagnetic modeling techniques. Signal strength was obtained by computing the reflected and diffracted fields along the signal propagation paths between transmitting and receiving antennas. Radio frequency (RF) coverage was determined for thermal tile inspection and repair missions using the results of this computation. Analysis results from this paper are important in formulating the limits on reliable communication range and RF coverage at planned underside inspection and repair worksites.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with a conventional CPU, even for a native GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
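The FDTD update is a nearest-neighbor stencil applied to every cell at every time step, which is why it maps so well to GPU threads. A bare-bones 1-D version shows the structure (a CPU sketch in NumPy, not the paper's CUDA implementation; grid size, source, and step count are illustrative):

```python
import numpy as np

def fdtd_1d(steps=200, n=400):
    """Minimal 1-D FDTD (Yee) update for Ez/Hy in vacuum at the
    "magic" time step c*dt = dx, driven by a soft Gaussian source."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]      # update H from the curl of E
        ez[1:] += hy[1:] - hy[:-1]       # update E from the curl of H
        ez[n // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft source
    return ez

ez = fdtd_1d()
print(np.isfinite(ez).all())  # True (stable at the magic time step)
```

On a GPU, each thread would own one (or a few) cells and apply exactly these two update lines, with thread block size chosen to match the hardware, which is the dependence the paper measures.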
Choi, Y I; Jakhongir, M; Choi, S J; Kim, L; Park, I S; Han, J Y; Kim, J M; Chu, Y C
2016-12-01
Immunocytochemistry (ICC) on formalin-fixed paraffin embedded cell blocks is an ancillary tool commonly recruited for differential diagnoses of fine needle aspiration cytology (FNAC) samples. However, the quality of conventional cell blocks in terms of adequate cellularity and evenness of distribution of cytologic material is not always satisfactory for ICC. We introduce a modified agarose-based cytoscrape cell block (CCB) technique that can be effectively used for the preparation of cell blocks from scrapings of conventional FNAC slides. A decoverslipped FNAC slide was mounted with a small amount of water. The cytological material was scraped off the slide into a tissue mold with a cell scraper. The cytoscrape material was pelleted by centrifugation and pre-embedded in ultra-low gelling temperature agarose and then re-embedded in conventional agarose. The final agarose gel disk was processed and embedded in paraffin. The quality of the ICC on the CCB sections was identical to that of the immunohistochemical stains on histological sections. By scraping and harvesting the entirety of the cytological material off the cytology slide into a compact agarose cell button, we could avoid the risk of losing diagnostic material during the CCB preparation. This modified CCB technique enables concentration and focusing of minute material while maintaining the entire amount of the cytoscrape material on the viewing spot of the CCB sections. We believe this technique can be effectively used to improve the level of confidence in diagnosis of FNAC, especially when the FNAC slides are the only sample available.
Dynamically reassigning a connected node to a block of compute nodes for re-launching a failed job
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budnik, Thomas A; Knudson, Brant L; Megerian, Mark G
Methods, systems, and products for dynamically reassigning a connected node to a block of compute nodes for re-launching a failed job that include: identifying that a job failed to execute on the block of compute nodes because connectivity failed between a compute node assigned as at least one of the connected nodes for the block of compute nodes and its supporting I/O node; and re-launching the job, including selecting an alternative connected node that is actively coupled for data communications with an active I/O node; and assigning the alternative connected node as the connected node for the block of compute nodes running the re-launched job.
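The reassignment step described above reduces to a simple selection: drop the connected node whose I/O link failed and promote an alternative that still has an active I/O node. A minimal sketch, with an entirely hypothetical data layout (the patent does not specify one):

```python
def reassign_connected_node(block, failed_node_id):
    """Sketch of the re-launch step: replace the connected node whose I/O
    link failed with an alternative node that is actively coupled to an
    active I/O node.  The dict layout here is hypothetical."""
    candidates = [n for n in block["nodes"]
                  if n["id"] != failed_node_id and n["io_link_active"]]
    if not candidates:
        raise RuntimeError("no alternative connected node available")
    block["connected_node"] = candidates[0]["id"]
    return block["connected_node"]
```

The job is then re-launched on the same block of compute nodes with the newly assigned connected node handling I/O traffic.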
NASA Astrophysics Data System (ADS)
Dimova, Dilyana; Bajorath, Jürgen
2017-07-01
Computational scaffold hopping aims to identify core structure replacements in active compounds. To evaluate scaffold hopping potential from a principal point of view, regardless of the computational methods that are applied, a global analysis of conventional scaffolds in analog series from compound activity classes was carried out. The majority of analog series was found to contain multiple scaffolds, thus enabling the detection of intra-series scaffold hops among closely related compounds. More than 1000 activity classes were found to contain increasing proportions of multi-scaffold analog series. Thus, using such activity classes for scaffold hopping analysis is likely to overestimate the scaffold hopping (core structure replacement) potential of computational methods, due to an abundance of artificial scaffold hops that are possible within analog series.
Vidor, Michele Machado; Liedke, Gabriela Salatino; Fontana, Mathias Pante; da Silveira, Heraldo Luis Dias; Arus, Nadia Assein; Lemos, André; Vizzotto, Mariana Boessio
2017-11-01
The aim of this study was to evaluate the accuracy of cone beam computed tomography (CBCT) for evaluation of the bone-implant interface in comparison with periapical radiography. Titanium implants were inserted in 74 bovine rib blocks in intimate contact with bone walls and with a gap of 0.125 mm (simulating failure in the osseointegration process). Periapical radiographs were taken with conventional film, and CBCT scans were acquired with i-CAT (0.2 mm and 0.125 mm voxel) and Kodak (0.2 mm and 0.076 mm voxel) units. Three examiners evaluated the images using a 5-point scale. Diagnostic accuracy was analyzed through sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC) with 95% confidence intervals (CIs). Intra- and interexaminer agreements were analyzed through Kendall's concordance test. Intra- and interexaminer agreements showed satisfactory results. The greatest accuracy was observed with conventional radiography (AUC = 0.963; CI 95% = 0.891-0.993). I-CAT 0.125-mm images showed good accuracy (AUC = 0.885; CI 95% = 0.790-0.947), with no significant difference compared with conventional radiography. Kodak images had high specificity and low sensitivity, presenting more false-negative results. Conventional radiography showed the highest accuracy for assessment of the bone-implant interface. However, CBCT (i-CAT; 0.125-mm voxel), if available or if performed for preoperative assessment of another implant site, may provide similar accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
A two-dimensional DNA lattice implanted polymer solar cell.
Lee, Keun Woo; Kim, Kyung Min; Lee, Junwye; Amin, Rashid; Kim, Byeonghoon; Park, Sung Kye; Lee, Seok Kiu; Park, Sung Ha; Kim, Hyun Jae
2011-09-16
A double-crossover-tile-based artificial two-dimensional (2D) DNA lattice was fabricated, and the dry-wet method was introduced to recover the original DNA lattice structure in order to deposit DNA lattices safely on the organic layer without damaging it. The DNA lattice was then employed as an electron blocking layer in a polymer solar cell, increasing the power conversion efficiency by amounts ranging from about 10% up to 160%. Consequently, the resulting solar cell, which had an artificial 2D DNA blocking layer, showed a significant enhancement in power conversion efficiency compared to conventional polymer solar cells. It should be clear that the artificial DNA nanostructure holds unique physical properties that are extremely attractive for various energy-related and photonic applications.
Description of the F-16XL Geometry and Computational Grids Used in CAWAPI
NASA Technical Reports Server (NTRS)
Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.
2009-01-01
The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated through cooperation among the CAWAPI partners. Based on this geometry description, both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids were generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids have been generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI are discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sercombe, W.J.; Smith, G.W.; Morse, J.D.
1996-01-01
The October field, a sub-salt giant in the extensional Gulf of Suez (Egypt) has been structurally reinterpreted for new reserve opportunities. Quantitative SCAT analyses of the wellbore dip data have been integrated with 3D seismic by using dip isogons to construct local structural sections. SCAT dip analysis was critical to the reinterpretation because SCAT revealed important structural information that previously was unresolvable using conventional tadpole plots. In gross aspect, the October Field is a homocline that trends NW-SE, dips to the NE, and is closed on the SW (updip) by the major Clysmic Normal Fault. SCAT accurately calculated the overall trend of the field, but also identified important structural anomalies near the Clysmic fault and in the northwest and southeast plunge ends. In the northwest plunge end, SCAT has identified new, south dipping blocks that are transitional to the structurally-higher North October field. The southeast plunge end has been reinterpreted with correct azimuthal trends and new fault-block prospects. These new SCAT results have successfully improved the 3D seismic interpretation by providing a foundation of accurate in-situ structural control in an area of poor-to-fair seismic quality below the Miocene salt package.
Evolution-Inspired Computational Design of Symmetric Proteins.
Voet, Arnout R D; Simoncini, David; Tame, Jeremy R H; Zhang, Kam Y J
2017-01-01
Monomeric proteins with a number of identical repeats creating symmetrical structures are potentially very valuable building blocks with a variety of bionanotechnological applications. As such proteins do not occur naturally, the emerging field of computational protein design serves as an excellent tool to create them from nonsymmetrical templates. Existing pseudo-symmetrical proteins are believed to have evolved from oligomeric precursors by duplication and fusion of identical repeats. Here we describe a computational workflow to reverse-engineer this evolutionary process in order to create stable proteins consisting of identical sequence repeats.
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Illuminati, Fabrizio
2008-10-01
We investigate the structural aspects of genuine multipartite entanglement in Gaussian states of continuous variable systems. Generalizing the results of Adesso and Illuminati [Phys. Rev. Lett. 99, 150501 (2007)], we analyze whether the entanglement shared by blocks of modes distributes according to a strong monogamy law. This property, once established, allows us to quantify the genuine N-partite entanglement not encoded into 2-, …, K-, …, (N-1)-partite quantum correlations. Strong monogamy is numerically verified, and the explicit expression of the measure of residual genuine multipartite entanglement is analytically derived, by a recursive formula, for a subclass of Gaussian states. These are fully symmetric (permutation-invariant) states that are multipartitioned into blocks, each consisting of an arbitrarily assigned number of modes. We compute the genuine multipartite entanglement shared by the blocks of modes and investigate its scaling properties with the number and size of the blocks, the total number of modes, the global mixedness of the state, and the squeezed resources needed for state engineering. To achieve the exact computation of the block entanglement, we introduce and prove a general result of symplectic analysis: correlations among K blocks in N-mode multisymmetric and multipartite Gaussian states, which are locally invariant under permutation of modes within each block, can be transformed by a local (with respect to the partition) unitary operation into correlations shared by K single modes, one per block, in effective nonsymmetric states where N-K modes are completely uncorrelated. Due to this theorem, the above results, such as the derivation of the explicit expression for the residual multipartite entanglement, its nonnegativity, and its scaling properties, extend to the subclass of nonsymmetric Gaussian states that are obtained by the unitary localization of the multipartite entanglement of symmetric states.
These findings provide strong numerical evidence that the distributed Gaussian entanglement is strongly monogamous under and possibly beyond specific symmetry constraints, and that the residual continuous-variable tangle is a proper measure of genuine multipartite entanglement for permutation-invariant Gaussian states under any multipartition of the modes.
Gonzato, Carlo; Semsarilar, Mona; Jones, Elizabeth R; Li, Feng; Krooshof, Gerard J P; Wyman, Paul; Mykhaylyk, Oleksandr O; Tuinier, Remco; Armes, Steven P
2014-08-06
Block copolymer self-assembly is normally conducted via post-polymerization processing at high dilution. In the case of block copolymer vesicles (or "polymersomes"), this approach normally leads to relatively broad size distributions, which is problematic for many potential applications. Herein we report the rational synthesis of low-polydispersity diblock copolymer vesicles in concentrated solution via polymerization-induced self-assembly using reversible addition-fragmentation chain transfer (RAFT) polymerization of benzyl methacrylate. Our strategy utilizes a binary mixture of a relatively long and a relatively short poly(methacrylic acid) stabilizer block, which become preferentially expressed at the outer and inner poly(benzyl methacrylate) membrane surface, respectively. Dynamic light scattering was utilized to construct phase diagrams to identify suitable conditions for the synthesis of relatively small, low-polydispersity vesicles. Small-angle X-ray scattering (SAXS) was used to verify that this binary mixture approach produced vesicles with significantly narrower size distributions compared to conventional vesicles prepared using a single (short) stabilizer block. Calculations performed using self-consistent mean field theory (SCMFT) account for the preferred self-assembled structures of the block copolymer binary mixtures and are in reasonable agreement with experiment. Finally, both SAXS and SCMFT indicate a significant degree of solvent plasticization for the membrane-forming poly(benzyl methacrylate) chains.
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1990-01-01
Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
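The element-blocking idea described above, grouping elements so that the work within each block runs in vector mode while distinct blocks form independent tasks for separate processors, can be illustrated with a deliberately simplified force evaluation. NumPy stands in for vector hardware, and the linear force law and block size are illustrative, not the shell-element computation of the original program:

```python
import numpy as np

def internal_force_blocked(strain, stiffness, block_size=64):
    """Evaluate element internal forces block by block: within a block the
    evaluation is a pure vector operation, and different blocks touch
    disjoint data, so they could be assigned to separate processors."""
    n = strain.size
    force = np.empty(n)
    for start in range(0, n, block_size):
        blk = slice(start, min(start + block_size, n))
        force[blk] = stiffness[blk] * strain[blk]  # vector mode within a block
    return force
```

Choosing the block (vector) length is the tuning knob the abstract mentions: too short wastes vector startup cost, too long may exceed vector registers or cache.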
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis, and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.
Chen, Zhenhua; Corminboeuf, Clémence; Mo, Yirong
2014-08-07
Following the computational strategy proposed by Mulliken in 1939 (J. Chem. Phys. 1939, 7 (5), 339-352), when the concept of hyperconjugation was coined, we evaluated the hyperconjugative stabilization energy in 1,1,1-trihaloethane using the block-localized wave function (BLW) method. The BLW method is the simplest and most efficient variant of ab initio valence bond (VB) theory and can derive the strictly electron-localized state wave function self-consistently. The latter serves as a reference for the quantification of the electron delocalization effect in terms of resonance theory. Computations show that the overall hyperconjugative interactions in 1,1,1-trihaloethane, dominated by σ(CH) → σ*(CX) with a minor contribution from σ(CX) → σ*(CH), range from 9.59 to 7.25 kcal/mol in the staggered structures and decrease in the order Br > Cl > F. This is in accord with the ¹H NMR spectra of CH3CX3. Notably, the hyperconjugation effect accounts for 35-40% of the rotation barriers in these molecules, which are dominated by conventional steric repulsion. This is consistent with the recent findings for 1,2-difluoroethane (Freitas, Bühl, and O'Hagan. Chem. Comm. 2012, 48, 2433-2435) that the variation of ¹J(CF) with the FCCF torsional angle cannot be well explained by the hyperconjugation model.
Large Composite Structures Processing Technologies for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.
2001-01-01
Significant efforts have been devoted to establishing the technology foundation needed to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second-generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable the envisioned second-generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.
NASA Astrophysics Data System (ADS)
Agarwal, Shikha; Agarwal, Dinesh Kr.; Kalal, Priyanka; Gandhi, Divyani
2018-05-01
Multicomponent reactions (MCRs) have been discovered as a powerful method for the synthesis of organic molecules, since the products are formed in a single step and building blocks with a diverse range of complexity can be obtained from easily available precursors. This strategy has become important in drug design and discovery in the context of the synthesis of biologically active compounds. In today's scenario, MCRs under greener conditions are a powerful alternative to conventional synthesis. In the last few years, a number of scientific publications have appeared in the literature depicting the synthesis of pyrimidobenzothiazoles via greener routes, which clearly demonstrates their importance in pharmaceutical chemistry for drug development. Our article describes the synthesis of substituted pyrimidobenzothiazoles via one-pot multicomponent reactions with structural diversity through conventional and greener pathways using different catalysts, ionic liquids, agar, resins, etc.
Modeling Equity for Alternative Water Rate Structures
NASA Astrophysics Data System (ADS)
Griffin, R.; Mjelde, J.
2011-12-01
The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted to uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility we compute the effect of switching rate structures on the welfares of households of differing types. Some of the empirical findings are as follows. Under conditions of absent water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. 
This changes as water scarcity rises and as water's opportunity costs are allowed to influence uniform rates. The welfare results of these exercises indicate that popular conceptions about increasing block rates may be incorrect insofar as the scarcity-endogenous uniform rate favors low-income households. That is, under scarcity conditions a switch from increasing block rates to full-price uniform rates redistributes welfare so as to place more of the welfare burden of conservation on high-income households. Similarly, any household characteristic that tends to accompany low water use (e.g. low property value) generates the same rate-structure preference. These results are an intriguing addition to existing knowledge pertaining to the properties of increasing block rates and uniform rates with respect to criteria such as efficiency, simplicity, effectiveness, and (now) equity.
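The contrast between the two rate structures simulated above comes down to how a household's monthly bill is computed. A minimal sketch follows; the tier volumes and prices are invented for illustration and are not the Texas rates used in the study:

```python
def bill_increasing_block(usage, tiers):
    """Bill under increasing block rates.  `tiers` is a list of
    (block_volume, price_per_unit); a block_volume of None means
    'all remaining use' and should appear only in the last tier."""
    bill, remaining = 0.0, usage
    for volume, price in tiers:
        q = remaining if volume is None else min(remaining, volume)
        bill += q * price
        remaining -= q
        if remaining <= 0:
            break
    return bill

def bill_uniform(usage, price):
    """Bill under a uniform volumetric rate."""
    return usage * price
```

For example, 25 units under tiers [(10, 2.0), (10, 3.0), (None, 5.0)] cost 10·2 + 10·3 + 5·5 = 75, the same as a uniform rate of 3.0 per unit; which households gain from a switch depends on where their use falls relative to the tier breakpoints, which is exactly the equity question the simulation explores.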
On topological RNA interaction structures.
Qin, Jing; Reidys, Christian M
2013-07-01
Recently a folding algorithm for topological RNA pseudoknot structures was presented in Reidys et al. (2011). This algorithm folds single-stranded γ-structures, that is, RNA structures composed of distinct motifs of bounded topological genus. In this article, we set the theoretical foundations for the folding of the two-backbone analogues of γ-structures: the RNA γ-interaction structures. These are RNA-RNA interaction structures that are constructed by a finite number of building blocks over two backbones having genus at most γ. Combinatorial properties of γ-interaction structures are of practical interest since they have direct implications for the folding of topological interaction structures. We compute the generating function of γ-interaction structures and show that it is algebraic, which implies that the numbers of interaction structures can be computed recursively. We obtain simple asymptotic formulas for 0- and 1-interaction structures. The simplest class of interaction structures are the 0-interaction structures, which represent the two-backbone analogues of secondary structures.
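The recursive computability mentioned above can be illustrated in the simplest single-backbone setting: the classic recursion for counting RNA secondary structures, where the last base is either unpaired or paired with an earlier base, splitting the count multiplicatively. This is the well-known one-backbone case only, not the paper's two-backbone interaction recursion:

```python
from functools import lru_cache

def count_structures(n, min_loop=3):
    """Count secondary structures on n bases, assuming any base may pair
    with any other and hairpin loops contain at least `min_loop` unpaired
    bases.  Base n is either unpaired, or paired with some base j."""
    @lru_cache(maxsize=None)
    def S(k):
        if k <= min_loop + 1:
            return 1  # too short to contain any pair
        total = S(k - 1)  # base k unpaired
        for j in range(1, k - min_loop):  # base k pairs with base j
            total += S(j - 1) * S(k - j - 1)
        return total
    return S(n)
```

The algebraic generating functions derived in the article play the same role for γ-interaction structures: they certify that such a recursion exists for the two-backbone counts.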
Program Aids Specification Of Multiple-Block Grids
NASA Technical Reports Server (NTRS)
Sorenson, R. L.; Mccann, K. M.
1993-01-01
3DPREP computer program aids specification of multiple-block computational grids. Highly interactive graphical preprocessing program designed for use on powerful graphical scientific computer workstation. Divided into three main parts, each corresponding to principal graphical-and-alphanumerical display. Relieves user of some burden of collecting and formatting many data needed to specify blocks and grids, and prepares input data for NASA's 3DGRAPE grid-generating computer program.
Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S
2018-04-01
One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced. These new approaches include e-learning modules and SkillsLab stations; one way to objectify the resultant learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station in both a computer-based and a conventional format, with half starting with the computer-based station and the other half with the conventional one in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction of the repeated measures effect, the results of both examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g. point deductions for assistance from the medical examiner were possible only at the conventional station).
Thus, we demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised format for examination that yields results comparable to those from a conventional format exam. Moreover, the examination results also indicate the need to optimize both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Fernandez-Gonzalez, Rodrigo; Deschamps, Thomas; Idica, Adam; Malladi, Ravikanth; Ortiz de Solorzano, Carlos
2003-07-01
In this paper we present a scheme for real time segmentation of histological structures in microscopic images of normal and neoplastic mammary gland sections. Paraffin embedded or frozen tissue blocks are sliced, and sections are stained with hematoxylin and eosin (H&E). The sections are then imaged using conventional bright field microscopy. The background of the images is corrected by arithmetic manipulation using a "phantom." Then we use the fast marching method with a speed function that depends on the brightness gradient of the image to obtain a preliminary approximation to the boundaries of the structures of interest within a region of interest (ROI) of the entire section manually selected by the user. We use the result of the fast marching method as the initial condition for the level set motion equation. We run this last method for a few steps and obtain the final result of the segmentation. These results can be connected from section to section to build a three-dimensional reconstruction of the entire tissue block that we are studying.
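The brightness-gradient speed function used in the fast marching step above can be sketched as follows. The form F = 1/(1 + α|∇I|) is a common choice for edge-stopping speed functions; α is a tunable contrast parameter introduced here for illustration, not a value from the paper:

```python
import numpy as np

def gradient_speed(image, alpha=1.0):
    """Front speed that is near 1 in flat image regions and drops toward 0
    at strong brightness edges, so a fast marching front sweeps quickly
    across homogeneous tissue and slows at structure boundaries."""
    gy, gx = np.gradient(image.astype(float))  # per-axis brightness gradients
    return 1.0 / (1.0 + alpha * np.hypot(gx, gy))
```

The zero level set extracted after the fast marching pass then serves as the initial condition for the level set evolution, as described in the abstract.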
Lin, Mouhong; Huang, Haoliang; Liu, Zuotao; Liu, Yingju; Ge, Junbin; Fang, Yueping
2013-12-10
Magnetic nanoparticle clusters (MNCs) are a class of secondary structural materials that comprise chemically defined nanoparticles assembled into clusters of defined size. Herein, MNCs are fabricated through a one-pot solvothermal reaction featuring self-limiting assembly of building blocks and a controlled reorganization process. Such a growth-dissolution-regrowth fabrication mechanism overcomes some limitations of conventional solvothermal fabrication methods with regard to the restricted available feature size and structural complexity, and it can be extended to other oxides (as long as the metal can be chelated by EDTA-2Na). Based on this method, the nanoparticle size of MNCs is tuned between 6.8 and 31.2 nm at a fixed cluster diameter of 120 nm, wherein the critical size for the superparamagnetic-ferromagnetic transition is estimated at 13.5 to 15.7 nm. Control over the nature and secondary structure of MNCs gives an excellent model system for understanding the nanoparticle size-dependent magnetic properties of MNCs. MNCs have potential applications in many different areas; as an initial application study, this work evaluates their cytotoxicity and Pb(2+) adsorption capacity.
Guidelines for development structured FORTRAN programs
NASA Technical Reports Server (NTRS)
Earnest, B. M.
1984-01-01
Computer programming and coding standards were compiled to serve as guidelines for the uniform writing of FORTRAN 77 programs at NASA Langley. Software development philosophy, documentation, general coding conventions, and specific FORTRAN coding constraints are discussed.
Low-loss terahertz ribbon waveguides.
Yeh, Cavour; Shimabukuro, Fred; Siegel, Peter H
2005-10-01
The submillimeter wave or terahertz (THz) band (1 mm to 100 μm) is one of the last unexplored frontiers in the electromagnetic spectrum. A major stumbling block hampering instrument deployment in this frequency regime is the lack of a low-loss guiding structure equivalent to the optical fiber that is so prevalent at visible wavelengths. The presence of strong inherent vibrational absorption bands in solids and the high skin-depth losses of conductors make the traditional microstripline circuits, conventional dielectric lines, or metallic waveguides, which are common at microwave frequencies, much too lossy to be used in the THz bands. Even modern surface plasmon polariton waveguides are much too lossy for long-distance transmission in the THz bands. We describe a concept for overcoming this drawback and present a new family of ultra-low-loss ribbon-based guide structures and matching components for propagating single-mode THz signals. For straight runs this ribbon-based waveguide can provide an attenuation constant more than 100 times lower than that of a conventional dielectric or metallic waveguide. Problems dealing with efficient coupling of power into and out of the ribbon guide, achieving low-loss bends and branches, and forming THz circuit elements are discussed in detail. One notes that active circuit elements can be integrated directly onto the ribbon structure (when it is made with semiconductor material) and that the absence of metallic structures in the ribbon guide provides the possibility of high power-carrying capability. It thus appears that this ribbon-based dielectric waveguide and associated components can be used as fundamental building blocks for a new generation of ultra-high-speed electronic integrated circuits or THz interconnects.
NASA Astrophysics Data System (ADS)
Bonduà, Stefano; Battistelli, Alfredo; Berry, Paolo; Bortolotti, Villiam; Consonni, Alberto; Cormio, Carlo; Geloni, Claudio; Vasini, Ester Maria
2017-11-01
As is known, a full three-dimensional (3D) unstructured grid permits a great degree of flexibility when performing accurate numerical reservoir simulations. However, when the Integral Finite Difference Method (IFDM) is used for spatial discretization, constraints (arising from the required orthogonality between the segment connecting the block nodes and the interface area between blocks) pose difficulties in the creation of grids with irregularly shaped blocks. The full 3D Voronoi approach guarantees respect of the IFDM constraints and allows the generation of grids that conform to geological formations and structural objects while providing higher grid resolution in volumes of interest. In this work, we present dedicated pre- and post-processing gridding software tools for the TOUGH family of numerical reservoir simulators, developed by the Geothermal Research Group of the DICAM Department, University of Bologna. VORO2MESH is a new software tool coded in C++, based on the voro++ library, that computes the 3D Voronoi tessellation for a given domain and creates a ready-to-use TOUGH2 MESH file. If a set of geological surfaces is available, the software can directly generate the set of Voronoi seed points used for the tessellation. In order to reduce the number of connections and thus decrease computation time, VORO2MESH can produce a mixed grid with regular blocks (orthogonal prisms) and irregular blocks (polyhedral Voronoi blocks) at the contact between different geological formations. In order to visualize 3D Voronoi grids together with the results of numerical simulations, the functionality of the TOUGH2Viewer post-processor has been extended. We describe an application of VORO2MESH and TOUGH2Viewer to validate the two tools. The case study deals with the simulation of the migration of gases in deep layered sedimentary formations at basin scale using TOUGH2-TMGAS. A comparison between the simulation performance of unstructured and structured grids is presented.
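The IFDM orthogonality constraint described above can be stated compactly: the segment connecting two block nodes must be perpendicular to their shared interface, a property Voronoi grids satisfy by construction, since each interface lies on the perpendicular bisector of the two seed points. A minimal 2D sketch of that check (function name, tolerance, and coordinates are illustrative assumptions, not part of VORO2MESH):

```python
import numpy as np

def ifdm_orthogonal(node_a, node_b, iface_p, iface_q, tol=1e-9):
    """Check the IFDM constraint: the segment connecting two block
    nodes must be orthogonal to the interface between the blocks."""
    conn = np.asarray(node_b, float) - np.asarray(node_a, float)
    iface = np.asarray(iface_q, float) - np.asarray(iface_p, float)
    # Orthogonal when the dot product of the two directions vanishes.
    return abs(np.dot(conn, iface)) < tol * np.linalg.norm(conn) * np.linalg.norm(iface)

# Voronoi cells satisfy the constraint by construction: the interface
# between two cells lies on the perpendicular bisector of their seeds.
a, b = np.array([0.0, 0.0]), np.array([2.0, 0.0])
mid = 0.5 * (a + b)
print(ifdm_orthogonal(a, b, mid + [0, -1], mid + [0, 1]))  # True
# A skewed interface violates it:
print(ifdm_orthogonal(a, b, [1.0, -1.0], [1.5, 1.0]))      # False
```
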
Ostras, Konstantin S; Gorobets, Nikolay Yu; Desenko, Sergey M; Musatov, Vladimir I
2006-08-01
A new one-stage fast multicomponent synthesis of the title compounds leads to products in 21-55% isolated yields under both conventional and microwave conditions. The primary amino group in the building blocks can be easily acylated by various common electrophilic agents, which can be utilized in the synthesis of libraries of diverse heterocyclic compounds.
Boonsiriseth, K; Sirintawat, N; Arunakul, K; Wongsirichat, N
2013-07-01
This study aimed to evaluate the efficacy of anesthesia obtained with a novel injection approach for inferior alveolar nerve block compared with the conventional injection approach. 40 patients in good health randomly received each of the two injection approaches of local anesthetic, one on each side of the mandible, at two separate appointments. A sharp probe and an electric pulp tester were used to test anesthesia before injection, after injection when the patients' sensation changed, and 5 min after injection. Positive aspiration indicating intravascular injection occurred in 5% and neurovascular bundle injection in 7.5% of conventional inferior alveolar nerve blocks, but neither occurred with the novel injection approach. A visual analog scale (VAS) pain assessment was used during injection and surgery. The significance level used in the statistical analysis was p<0.05. Comparing the novel injection approach with the conventional injection approach, no significant difference was found in subjective onset, objective onset, operation time, duration of anesthesia, or VAS pain score during the operation, but the VAS pain score during injection was significantly different. The inferior alveolar nerve block by the novel injection approach provided adequate anesthesia and caused less pain and greater safety during injection. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Electrotransfection of Polyamine Folded DNA Origami Structures.
Chopra, Aradhana; Krishnan, Swati; Simmel, Friedrich C
2016-10-12
DNA origami structures are artificial molecular nanostructures in which DNA double helices are forced into a closely packed configuration by a multitude of DNA strand crossovers. We show that three different types of origami structures (a flat sheet, a hollow tube, and a compact origami block) can be formed in magnesium-free buffer solutions containing low (<1 mM) concentrations of the condensing agent spermidine. Much like in DNA condensation, the amount of spermidine required for origami folding is proportional to the DNA concentration. At excessive amounts, the structures aggregate and precipitate. In contrast to origami structures formed in conventional buffers, the resulting structures are stable in the presence of high electric field pulses, such as those commonly used for electrotransfection experiments. We demonstrate that spermidine-stabilized structures are stable in cell lysate and can be delivered into mammalian cells via electroporation.
Djumas, Lee; Molotnikov, Andrey; Simon, George P; Estrin, Yuri
2016-05-24
Structural composites inspired by nacre have emerged as prime exemplars for guiding materials design of fracture-resistant, rigid hybrid materials. The intricate microstructure of nacre, which combines a hard majority phase with a small fraction of a soft phase, achieves superior mechanical properties compared to its constituents and has generated much interest. However, replicating the hierarchical microstructure of nacre is very challenging, not to mention improving it. In this article, we propose to alter the geometry of the hard building blocks by introducing the concept of topological interlocking. This design principle has previously been shown to provide an inherently brittle material with a remarkable flexural compliance. We now demonstrate that by combining the basic architecture of nacre with topological interlocking of discrete hard building blocks, hybrid materials of a new type can be produced. By adding a soft phase at the interfaces between topologically interlocked blocks in a single-build additive manufacturing process, further improvement of mechanical properties is achieved. The design of these fabricated hybrid structures has been guided by computational work elucidating the effect of various geometries. To our knowledge, this is the first reported study that combines the advantages of nacre-inspired structures with the benefits of topological interlocking.
Scaffolds for Bone Tissue Engineering: State of the art and new perspectives.
Roseti, Livia; Parisi, Valentina; Petretta, Mauro; Cavallo, Carola; Desando, Giovanna; Bartolotti, Isabella; Grigolo, Brunella
2017-09-01
This review is intended to give a state-of-the-art description of scaffold-based strategies utilized in Bone Tissue Engineering. Numerous scaffolds have been tested in the orthopedic field with the aim of improving cell viability, attachment, proliferation and homing, osteogenic differentiation, vascularization, host integration and load bearing. The main traits that characterize a scaffold suitable for bone regeneration concerning its biological requirements, structural features, composition, and types of fabrication are described in detail. Attention is then focused on conventional and Rapid Prototyping scaffold manufacturing techniques. Conventional manufacturing approaches are subtractive methods where parts of the material are removed from an initial block to achieve the desired shape. Rapid Prototyping techniques, introduced to overcome the limitations of standard techniques, are additive fabrication processes that manufacture the final three-dimensional object via deposition of overlying layers. An important improvement is the possibility to create custom-made products by means of computer-assisted technologies, starting from the patient's medical images. In conclusion, it is highlighted that, despite encouraging results, clinical application of Bone Tissue Engineering has not yet taken place on a large scale, due to the need for more in-depth studies, its high manufacturing costs and the difficulty of obtaining regulatory approval. PUBMED search terms utilized to write this review were: "Bone Tissue Engineering", "regenerative medicine", "bioactive scaffolds", "biomimetic scaffolds", "3D printing", "3D bioprinting", "vascularization" and "dentistry". Copyright © 2017 Elsevier B.V. All rights reserved.
On the numerical treatment of nonlinear source terms in reaction-convection equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustion, and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers much of the nonlinear phenomena that linearized analysis is not capable of predicting in a model reaction-convection equation.
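Why the treatment of a stiff source term matters can be illustrated with a minimal sketch that is not taken from this paper: for a stiff linear relaxation source S(u) = -mu(u - 1), explicit Euler is unstable once mu*dt exceeds 2, while an implicit treatment of the source relaxes unconditionally to the correct steady state. Function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Stiff relaxation source S(u) = -mu*(u - 1); the exact solution decays to u = 1.
mu, dt, steps = 100.0, 0.1, 50   # mu*dt = 10 >> 2: explicit Euler is unstable

def explicit_euler(u):
    # Explicit treatment of the source term: amplification factor (1 - mu*dt).
    for _ in range(steps):
        u = u + dt * (-mu * (u - 1.0))
    return u

def implicit_euler(u):
    # Implicit treatment: solve (1 + mu*dt) u_{n+1} = u_n + mu*dt for u_{n+1}.
    for _ in range(steps):
        u = (u + mu * dt) / (1.0 + mu * dt)
    return u

print(abs(explicit_euler(0.0)))        # grows without bound
print(abs(implicit_euler(0.0) - 1.0))  # converges to the steady state u = 1
```

The same trade-off is what makes the choice of source-term treatment delicate for genuinely nonlinear S(u), where the linearized amplification factor can mispredict stability.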
Real-time algorithm for acoustic imaging with a microphone array.
Huang, Xun
2009-05-01
Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
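For orientation, the conventional frequency-domain (delay-and-sum) beamforming that the abstract contrasts against can be sketched as follows. This is the classical off-line formulation, not the proposed observer-based algorithm; the array geometry, frequency, and scan grid are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
c, f = 343.0, 2000.0                 # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c                # wavenumber
mics = rng.uniform(-0.5, 0.5, size=(32, 2))        # random planar array
mics3 = np.column_stack([mics, np.zeros(32)])
src = np.array([0.1, -0.2, 1.0])                   # true source position

# Monochromatic pressures at the mics (free-field monopole model).
r = np.linalg.norm(mics3 - src, axis=1)
p = np.exp(-1j * k * r) / r
csm = np.outer(p, p.conj())                        # cross-spectral matrix

# Scan a grid in the source plane with the conventional beamformer.
xs = np.linspace(-0.4, 0.4, 17)
power = np.zeros((17, 17))
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        g_r = np.linalg.norm(mics3 - np.array([x, y, 1.0]), axis=1)
        g = np.exp(-1j * k * g_r) / g_r            # steering vector
        g /= np.linalg.norm(g)
        power[i, j] = np.real(g.conj() @ csm @ g)  # beamformer output

i, j = np.unravel_index(power.argmax(), power.shape)
print(xs[i], xs[j])   # the peak falls on the true source (0.1, -0.2)
```

The double loop over the scan grid for every frequency and data block is exactly the cost that motivates a recursive, real-time alternative.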
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
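The core observation of the scheme, that affine similarity between two blocks is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them, can be sketched directly. A minimal illustration (function and variable names are assumptions):

```python
import numpy as np

def apcc(a, b):
    """Absolute value of Pearson's correlation coefficient between
    two image blocks (flattened)."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else abs(np.dot(a, b)) / denom

rng = np.random.default_rng(1)
domain = rng.integers(0, 256, size=(8, 8))
# A range block that is an affine map s*D + o of a domain block is a
# perfect fractal match, and its APCC with that block equals 1.
range_block = 0.5 * domain + 30
other = rng.integers(0, 256, size=(8, 8))
print(apcc(range_block, domain))       # 1.0 (up to rounding)
print(apcc(other, domain))             # well below 1 for an unrelated block
```

Sorting domain blocks by their APCC against a preset block, as the paper proposes, then narrows the search to candidates whose APCC is close to that of the range block.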
Pathways to Mesoporous Resin/Carbon Thin Films with Alternating Gyroid Morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qi; Matsuoka, Fumiaki; Suh, Hyo Seon
Three-dimensional (3D) mesoporous thin films with sub-100 nm periodic lattices are of increasing interest as templates for a number of nanotechnology applications, yet are hard to achieve with conventional top-down fabrication methods. Block copolymer self-assembly derived mesoscale structures provide a toolbox for such 3D template formation. In this work, single (alternating) gyroidal and double gyroidal mesoporous thin-film structures are achieved via solvent vapor annealing assisted co-assembly of poly(isoprene-block-styrene-block-ethylene oxide) (PI-b-PS-b-PEO, ISO) and resorcinol/phenol formaldehyde resols. In particular, the alternating gyroid thin-film morphology is highly desirable for potential template backfilling processes as a result of the large pore volume fraction. In situ grazing-incidence small-angle X-ray scattering during solvent annealing is employed as a tool to elucidate and navigate the pathway complexity of the structure formation processes. The resulting network structures are resistant to high temperatures provided an inert atmosphere. The thin films have tunable hydrophilicity from pyrolysis at different temperatures, while pore sizes can be tailored by varying ISO molar mass. A transfer technique between substrates is demonstrated for alternating gyroidal mesoporous thin films, circumventing the need to re-optimize film formation protocols for different substrates. Increased conductivity after pyrolysis at high temperatures demonstrates that these gyroidal mesoporous resin/carbon thin films have potential as functional 3D templates for a number of nanomaterials applications.
Semiclassical Virasoro blocks from AdS3 gravity
Hijano, Eliot; Kraus, Per; Perlmutter, Eric; ...
2015-12-14
We present a unified framework for the holographic computation of Virasoro conformal blocks at large central charge. In particular, we provide bulk constructions that correctly reproduce all semiclassical Virasoro blocks that are known explicitly from conformal field theory computations. The results revolve around the use of geodesic Witten diagrams, recently introduced in [1], evaluated in locally AdS3 geometries generated by backreaction of heavy operators. We also provide an alternative computation of the heavy-light semiclassical block — in which two external operators become parametrically heavy — as a certain scattering process involving higher spin gauge fields in AdS3; this approach highlights the chiral nature of Virasoro blocks. Finally, these techniques may be systematically extended to compute corrections to these blocks and to interpolate amongst the different semiclassical regimes.
Thermal/structural Tailoring of Engine Blades (T/STAEBL) User's Manual
NASA Technical Reports Server (NTRS)
Brown, K. W.; Clevenger, W. B.; Arel, J. D.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual contains an overview of the system, fundamentals of the data block structure, and detailed descriptions of the inputs required by the optimizer. Additionally, the thermal analysis input requirements are described as well as the inputs required to perform a finite element blade vibrations analysis.
Ramkiran, Seshadri; Jacob, Mathews; Honwad, Manish; Vivekanand, Desiraju; Krishnakumar, Mathangi; Patrikar, Seema
2018-01-01
Background: Pain associated with laparoscopic cholecystectomy is most severe during the first 24 h, and the port sites are the most painful. Recent multimodal approaches target incisional pain instead of visceral pain, which has led to the emergence of abdominal fascial plane blocks. This study embraces a novel combination of two independently effective fascial plane blocks, namely rectus sheath block and subcostal transversus abdominis plane (TAP) block, to alleviate postoperative pain. Study Objective: The aim is to evaluate the effectiveness of the combination of rectus sheath block and subcostal TAP block, and to compare its efficacy with that of subcostal TAP block alone and with conventional port site infiltration (PSI), in alleviating postoperative pain in patients undergoing laparoscopic cholecystectomy. Methodology: This prospective, randomized control, pilot study included 61 patients scheduled for elective laparoscopic cholecystectomy, distributed among three groups, namely Group 1: Combined subcostal TAP block with rectus sheath block (n = 20); Group 2: Oblique subcostal TAP block alone (n = 21); and Group 3: PSI group as an active control (n = 20). Results: The combined group had significantly lower pain scores, higher satisfaction scores, and reduced rescue analgesia both in early and late postoperative periods compared to the conventional PSI group. Conclusion: Ultrasound-guided combined fascial plane blocks are a novel intervention in the pain management of patients undergoing laparoscopic cholecystectomy and should become the standard of care. PMID:29628547
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K.
2018-05-01
Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion, while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and concentration within a block is not resolved, and the combined spreading effect is approximated using resolved quantities and macroscopic parameters. This applies whether the formation is modeled as homogeneous or discretized into homogeneous blocks, though the emphasis here is on the latter. The process of dispersion is typically described through the Fickian model, i.e., the dispersive flux is proportional to the gradient of the resolved concentration, commonly with the Scheidegger parameterization, which is a particular way to compute the dispersion coefficients utilizing dispersivity coefficients. Although such parameterization is by far the most commonly used in solute transport applications, its validity has been questioned. Here, our goal is to investigate the effects of heterogeneity and mass transfer limitations on block-scale longitudinal dispersion and to evaluate under which conditions the Scheidegger parameterization is valid. We compute the relaxation time, or memory, of the system; changes in time with periods larger than the relaxation time gradually lead to a condition of local equilibrium under which dispersion is Fickian. The method we use requires the solution of a steady-state advection-dispersion equation, and thus is computationally efficient and applicable to any heterogeneous hydraulic conductivity K field without requiring statistical or structural assumptions. The method was validated by comparison with other approaches such as moment analysis and the first-order perturbation method. We investigate the impact of heterogeneity, both in degree and structure, on the longitudinal dispersion coefficient and then discuss the role of local dispersion and mass transfer limitations, i.e., the exchange of mass between the permeable matrix and the low-permeability inclusions. We illustrate the physical meaning of the method and show how the block longitudinal dispersivity approaches, under certain conditions, the Scheidegger limit at large Péclet numbers. Lastly, we discuss the potential and limitations of the method to accurately describe dispersion in solute transport applications in heterogeneous aquifers.
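The Scheidegger parameterization discussed above has a standard closed form for the dispersion tensor, D_ij = (Dm + alpha_T |v|) delta_ij + (alpha_L - alpha_T) v_i v_j / |v|. A minimal sketch of that formula (the function name and parameter values are illustrative assumptions):

```python
import numpy as np

def scheidegger_dispersion(v, alpha_L, alpha_T, Dm=0.0):
    """Scheidegger parameterization of the dispersion tensor:
    D_ij = (Dm + alpha_T*|v|) delta_ij + (alpha_L - alpha_T) v_i v_j / |v|,
    with longitudinal/transverse dispersivities alpha_L, alpha_T and
    molecular diffusion Dm."""
    v = np.asarray(v, float)
    speed = np.linalg.norm(v)
    return (Dm + alpha_T * speed) * np.eye(len(v)) + \
           (alpha_L - alpha_T) * np.outer(v, v) / speed

# Flow aligned with x: the longitudinal coefficient is alpha_L*|v| and
# the transverse coefficient is alpha_T*|v|.
D = scheidegger_dispersion([1e-5, 0.0, 0.0], alpha_L=1.0, alpha_T=0.1)
print(D[0, 0])   # alpha_L * |v|
print(D[1, 1])   # alpha_T * |v|
```

The paper's question is precisely when a block-scale longitudinal coefficient of this linear-in-velocity form is an adequate description.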
A basic review on the inferior alveolar nerve block techniques.
Khalil, Hesham
2014-01-01
The inferior alveolar nerve block is the most common injection technique used in dentistry, and many modifications of the conventional nerve block have recently been described in the literature. The dentist's or surgeon's selection of the best technique depends on many factors, including the success rate and the complications related to the selected technique. Dentists should be aware of the available current modifications of the inferior alveolar nerve block techniques in order to choose effectively between them. Some operators may encounter difficulty in identifying the anatomical landmarks which are useful in applying the inferior alveolar nerve block and rely instead on assumptions as to where the needle should be positioned. Such assumptions can lead to failure, and the failure rate of the inferior alveolar nerve block has been reported to be 20-25%, which is considered very high. In this basic review, the anatomical details of the inferior alveolar nerve are given together with a description of both its conventional and modified blocking techniques; in addition, an overview of the complications which may result from the application of this important technique is provided.
Structural model of homogeneous As–S glasses derived from Raman spectroscopy and high-resolution XPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golovchak, R.; Shpotyuk, O.; Mccloy, J. S.
2010-11-28
The structure of homogeneous bulk AsxS100-x (25 ≤ x ≤ 42) glasses, prepared by the conventional rocking–melting–quenching method, was investigated using high-resolution X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy. It is shown that the main building blocks of their glass networks are regular AsS3/2 pyramids and sulfur chains. In the S-rich domain, the existence of quasi-tetrahedral (QT) S=As(S1/2)3 units is deduced from XPS data, but with a concentration not exceeding ~3–5% of total atomic sites. Therefore, QT units do not appear as primary building blocks of the glass backbone in these materials, and an optimally-constrained network may not be an appropriate description for glasses when x < 40. Finally, it is shown that, in contrast to Se-based glasses, the 'chain-crossing' model is only partially applicable to sulfide glasses.
NASA Astrophysics Data System (ADS)
Ferrer, Gabriel; Sáez, Esteban; Ledezma, Christian
2018-01-01
Copper production is an essential component of the Chilean economy. During the extraction process of copper, large quantities of waste materials (tailings) are produced, which are typically stored in large tailings ponds. Thickened Tailings Disposal (TTD) is an alternative to conventional tailings ponds. In TTD, a considerable amount of water is extracted from the tailings before their deposition. Once a thickened tailings layer is deposited, it loses water and shrinks, forming a relatively regular structure of tailings blocks with vertical cracks in between, which are then filled with "fresh" tailings once the new upper layer is deposited. The dynamic response of a representative column of this complex structure, made of tailings blocks with softer material in between, was analyzed using a periodic half-space finite element model. The tailings' behavior was modeled using an elasto-plastic multi-yielding constitutive model, and Chilean earthquake records were used for the seismic analyses. Special attention was given to the liquefaction potential evaluation of TTD.
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
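The block-skyline storage mentioned above is a variant of skyline (variable-band) storage: for each column of the symmetric stiffness matrix, only the entries from the first nonzero row down to the diagonal are kept. A minimal single-block sketch of plain skyline packing and lookup (the function names and array layout are illustrative assumptions; MPFEA's actual data structures are not described in the abstract):

```python
import numpy as np

def to_skyline(A):
    """Pack a symmetric matrix into skyline (variable-band) storage:
    for each column j, keep entries from the first nonzero row down to
    the diagonal. Returns the packed values and column pointers."""
    n = A.shape[0]
    vals, ptr = [], [0]
    for j in range(n):
        rows = np.nonzero(A[:j + 1, j])[0]
        first = rows[0] if rows.size else j
        vals.extend(A[first:j + 1, j])
        ptr.append(len(vals))
    return np.array(vals), np.array(ptr)

def skyline_get(vals, ptr, i, j):
    """Read entry (i, j) back from skyline storage (symmetric)."""
    if i > j:
        i, j = j, i                    # symmetry: use the upper triangle
    height = ptr[j + 1] - ptr[j]       # stored column height
    first = j - height + 1
    return 0.0 if i < first else vals[ptr[j] + (i - first)]

A = np.array([[4., 1., 0.],
              [1., 5., 2.],
              [0., 2., 6.]])
vals, ptr = to_skyline(A)
print(vals)                           # packed columns: [4. 1. 5. 2. 6.]
print(skyline_get(vals, ptr, 0, 2))   # 0.0 (outside the skyline)
```

Skyline storage keeps factorization fill-in within each column's profile, which is why it pairs naturally with the vector-unrolling solver techniques the abstract describes.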
Teachers' Organization of Participation Structures for Teaching Science with Computer Technology
NASA Astrophysics Data System (ADS)
Subramaniam, Karthigeyan
2016-08-01
This paper describes a qualitative study that investigated the nature of the participation structures and how the participation structures were organized by four science teachers when they constructed and communicated science content in their classrooms with computer technology. Participation structures focus on the activity structures and processes in social settings like classrooms, thereby providing glimpses into the complex dynamics of teacher-student interactions, configurations, and conventions during collective meaning making and knowledge creation. Data included observations, interviews, and focus group interviews. Analysis revealed that the dominant participation structure evident within participants' instruction with computer technology was a (Teacher) initiation-(Student and Teacher) response sequences-(Teacher) evaluation participation structure. Three key events characterized how participants organized this participation structure in their classrooms: setting the stage for interactive instruction, the joint activity, and maintaining accountability. Implications include the following: (1) teacher educators need to tap into the knowledge base that underscores science teachers' learning-to-teach philosophies when computer technology is used in instruction. (2) Teacher educators need to emphasize the essential idea that learning and cognition are not situated within the computer technology but within the pedagogical practices, specifically the participation structures. (3) The pedagogical practices developed with the integration or use of computer technology, underscored by the teachers' own knowledge of classroom contexts and curriculum, need to be the focus for how students learn science content with computer technology, instead of focusing on how computer technology solely supports students' learning of science content.
Simulation studies of the application of SEASAT data in weather and state of sea forecasting models
NASA Technical Reports Server (NTRS)
Cardone, V. J.; Greenwood, J. A.
1979-01-01
The design and analysis of SEASAT simulation studies, in which the error structure of conventional analyses and forecasts is modeled realistically, are presented. The development and computer implementation of a global spectral ocean wave model is described. The design of algorithms for the assimilation of theoretical wind data into computer models, and for the utilization of real wind and wave height data in a coupled computer system, is presented.
Accuracy and Transferability of Ab Initio Electronic Band Structure Calculations for Doped BiFeO3
NASA Astrophysics Data System (ADS)
Gebhardt, Julian; Rappe, Andrew M.
2017-11-01
BiFeO3 is a multiferroic material and, therefore, highly interesting with respect to future oxide electronics. In order to realize such devices, pn junctions need to be fabricated, which is currently impeded by the lack of successful p-type doping in this material. In order to guide the numerous research efforts in this field, we recently finished a comprehensive computational study investigating the influence of many dopants on the electronic structure of BiFeO3. In order to allow for this large-scale ab initio study, the computational setup had to be accurate and efficient. Here we discuss the details of this assessment, showing that standard density-functional theory (DFT) yields good structural properties. The obtained electronic structure, however, suffers from well-known shortcomings. By comparing the conventional DFT results for alkali and alkaline-earth metal doping with more accurate hybrid-DFT calculations, we show that, in this case, the problems of standard DFT go beyond a simple systematic error. Conventional DFT shows poor transferability, and the more reliable hybrid-DFT has to be chosen for a qualitatively correct prediction of doping-induced changes in the electronic structure of BiFeO3.
Comparative study between manual injection intraosseous anesthesia and conventional oral anesthesia
Ata-Ali, Javier; Oltra-Moscardó, María J.; Peñarrocha-Diago, María; Peñarrocha, Miguel
2012-01-01
Objective: To compare intraosseous anesthesia (IA) with the conventional oral anesthesia techniques. Materials and methods: A simple-blind, prospective clinical study was carried out. Each patient underwent two anesthetic techniques: conventional (local infiltration and locoregional anesthetic block) and intraosseous, for respective dental operations. In order to allow comparison of IA versus conventional anesthesia, the two operations were similar and affected the same two teeth in opposite quadrants. Results: A total of 200 oral anesthetic procedures were carried out in 100 patients. The mean patient age was 28.6±9.92 years. Fifty-five vestibular infiltrations and 45 mandibular blocks were performed. All patients were also subjected to IA. The type of intervention (conservative or endodontic) exerted no significant influence (p=0.58 and p=0.62, respectively). The latency period was 8.52±2.44 minutes for the conventional techniques and 0.89±0.73 minutes for IA – the difference being statistically significant (p<0.05). Regarding patient anesthesia sensation, the infiltrative techniques lasted a maximum of one hour, the inferior alveolar nerve blocks lasted between 1-3 hours, and IA lasted only 2.5 minutes – the differences being statistically significant (p≤0.0000, Φ=0.29). Anesthetic success was recorded in 89% of the conventional procedures and in 78% of the IA. Most patients preferred IA (61%) (p=0.0032). Conclusions: The two anesthetic procedures have been compared for latency, duration of anesthetic effect, anesthetic success rate and patient preference. Intraosseous anesthesia has been shown to be a technique to be taken into account when planning conservative and endodontic treatments. Key words: Anesthesia, intraosseous, oral anesthesia, Stabident®, infiltrative, mandibular block. PMID:22143700
Vibration analysis in reciprocating compressors
NASA Astrophysics Data System (ADS)
Kacani, V.
2017-08-01
This paper presents the influence of modelling on the mechanical natural frequencies, the effect of inertia loads on structure vibration, and the impact of crank gear damping on speed fluctuation, with the aim of ensuring safe operation and increasing the reliability of reciprocating compressors. It is shown that the conventional way of modelling is not sufficient; for best results, the whole system (bare block, frame, coupling, main driver, vessels, pipe work, etc.) must be included in the model (see results in Table 1).
3D hollow nanostructures as building blocks for multifunctional plasmonics.
De Angelis, Francesco; Malerba, Mario; Patrini, Maddalena; Miele, Ermanno; Das, Gobind; Toma, Andrea; Zaccaria, Remo Proietti; Di Fabrizio, Enzo
2013-08-14
We present an advanced and robust technology to realize 3D hollow plasmonic nanostructures which are tunable in size, shape, and layout. The presented architectures offer new and unconventional properties such as the realization of 3D plasmonic hollow nanocavities with high electric field confinement and enhancement, finely structured extinction profiles, and broad band optical absorption. The 3D nature of the devices can overcome intrinsic difficulties related to conventional architectures in a wide range of multidisciplinary applications.
Protecting quantum memories using coherent parity check codes
NASA Astrophysics Data System (ADS)
Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv
2018-07-01
Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2]] code.
Development and Evaluation of Elastomeric Materials for Geothermal Applications
NASA Technical Reports Server (NTRS)
Mueller, W. A.; Kalfayan, S. H.; Reilly, W. W.; Yavrouian, A. H.; Mosesman, I. D.; Ingham, J. D.
1979-01-01
A material was formulated having about 250-350 psi tensile strength and 30-80 percent elongation at 260 C for at least 24 hours in simulated brine. The relationship between these laboratory test results and sealing performance in actual or simulated test conditions is not entirely clear; however, it is believed that no conventional formation or casing packer design is likely to perform well using these materials. The synthetic effort focused on high temperature block copolymers and development of curable polystyrene. Procedures were worked out for synthesizing these new materials. Initial results with heat-cured unfilled polystyrene 'gum' at 260 C indicate a tensile strength of about 50 psi. Cast films of the first sample of polyphenyl quinoxaline-polystyrene block copolymer, which has 'graft-block' structure consisting of a polystyrene chain with pendant polyphenyl quinoxaline groups, show elastomeric behavior in the required temperature range. Its tensile strength and elongation at 260 C were 220-350 psi and 18-36 percent, respectively. All of these materials also showed satisfactory hydrolytic stability.
The computation of 15 deg and 10 deg equal area block terrestrial free air gravity anomalies
NASA Technical Reports Server (NTRS)
Hajela, D. P.
1973-01-01
Starting with the set of 23,355 1 deg x 1 deg mean free air gravity anomalies used in Rapp (1972) to form a 5 deg equal area block terrestrial gravity field, the computation of 15 deg equal area block mean free air gravity anomalies is described along with estimates of their standard deviations. A new scheme is used in which a 15 deg block is divided integrally into nine component 300 n.mi. blocks, each 300 n.mi. block being subdivided into 25 blocks of 60 n.mi. This ensures that there is no loss in accuracy, which would have resulted if proportional values according to area were taken of the 5 deg equal area anomalies to form the 15 deg block anomalies. A similar scheme is used for the computation of 10 deg equal area block mean free air gravity anomalies with estimates of their standard deviations. The scheme is general enough to be used for a 30 deg equal area block terrestrial gravity field.
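The aggregation of whole sub-block anomalies into a larger block mean can be sketched as an area-weighted average (an illustrative scheme with a simple uncorrelated-error assumption, not the exact error propagation used in the paper):

```python
import numpy as np

def block_mean_anomaly(sub_means, sub_sigmas, sub_areas):
    """Area-weighted mean anomaly for a large block assembled from whole
    sub-blocks, with a standard deviation of the mean computed under the
    (illustrative) assumption of uncorrelated sub-block errors.
    """
    w = np.asarray(sub_areas, dtype=float)
    w = w / w.sum()                     # normalized area weights
    mean = float(np.sum(w * sub_means))
    sigma = float(np.sqrt(np.sum((w * np.asarray(sub_sigmas)) ** 2)))
    return mean, sigma
```

Because every sub-block lies wholly inside the larger block, no proportional-area splitting of anomalies is needed, which is the accuracy point the abstract makes.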
Yu, Shuzhi; Hao, Fanchang; Leong, Hon Wai
2016-02-01
We consider the problem of sorting signed permutations by reversals, transpositions, transreversals, and block-interchanges. The problem arises in the study of species evolution via large-scale genome rearrangement operations. Recently, Hao et al. gave a 2-approximation scheme called genome sorting by bridges (GSB) for solving this problem. Their result extended and unified the results of (i) He and Chen - a 2-approximation algorithm allowing reversals, transpositions, and block-interchanges (by also allowing transreversals) and (ii) Hartman and Sharan - a 1.5-approximation algorithm allowing reversals, transpositions, and transreversals (by also allowing block-interchanges). The GSB result is based on the introduction of three bridge structures in the breakpoint graph, the L-bridge, T-bridge, and X-bridge, that model a good reversal, a transposition/transreversal, and a block-interchange, respectively. However, the paper by Hao et al. focused on proving the 2-approximation GSB scheme and only mentioned a straightforward [Formula: see text] algorithm. In this paper, we give an [Formula: see text] algorithm for implementing the GSB scheme. The key idea behind our faster GSB algorithm is to represent cycles in the breakpoint graph by their canonical sequences, which greatly simplifies the search for these bridge structures. We also give some comparison results (running time and computed distances) against the original GSB implementation.
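The breakpoint graph on which the GSB scheme operates can be built with the standard encoding of a signed permutation; a minimal sketch that constructs the graph and counts its cycles follows (the bridge detection itself is beyond this sketch):

```python
def breakpoint_cycles(perm):
    """Number of cycles in the breakpoint graph of a signed permutation.

    Each element +x maps to the endpoint pair (2x-1, 2x) and -x to
    (2x, 2x-1); the frame is closed with 0 and 2n+1. Black edges join
    the adjacent endpoints of consecutive elements, gray edges join
    value 2i to 2i+1. Every vertex has degree two (one black, one gray
    edge), so the graph decomposes into alternating cycles.
    """
    n = len(perm)
    seq = [0]
    for x in perm:
        seq += [2 * x - 1, 2 * x] if x > 0 else [-2 * x, -2 * x - 1]
    seq.append(2 * n + 1)
    black, gray = {}, {}
    for i in range(0, len(seq), 2):           # n + 1 black edges
        a, b = seq[i], seq[i + 1]
        black[a], black[b] = b, a
    for i in range(n + 1):                    # n + 1 gray edges
        gray[2 * i], gray[2 * i + 1] = 2 * i + 1, 2 * i
    seen, cycles = set(), 0
    for v in black:
        if v in seen:
            continue
        cycles += 1
        w, use_black = v, True
        while w not in seen:                  # walk the alternating cycle
            seen.add(w)
            w = black[w] if use_black else gray[w]
            use_black = not use_black
    return cycles
```

The identity permutation attains the maximum of n+1 cycles; rearrangement distances in this family of results are bounded in terms of how far the cycle count falls below that maximum.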
The computational structural mechanics testbed generic structural-element processor manual
NASA Technical Reports Server (NTRS)
Stanley, Gary M.; Nour-Omid, Shahram
1990-01-01
The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).
NASA Astrophysics Data System (ADS)
Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu
2016-12-01
Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from highly rarefied free-molecule, through transition, to continuum regimes, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with the whole range of Knudsen numbers from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder, and are carefully compared with each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence properties. The flow problems including two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the present computed results are found in good agreement with the related DSMC simulation and theoretical analysis solutions, which verifies the good accuracy and reliability of the present method. It is observed that the smaller the spacing of the multi-body, the greater the throat obstruction between the cylinders, the more clearly asymmetric the flow field around each single body, and the bigger the normal force coefficient.
In the near-continuum transitional flow regime typical of near-space flight conditions, once the spacing of the multi-body increases to six times the diameter of a single body, the interference effects between the bodies become negligible. The computing practice has confirmed that the present method is feasible for computing the aerodynamics and revealing the flow mechanism around complex multi-body vehicles covering all flow regimes, from the gas-kinetic standpoint of solving the unified Boltzmann model velocity distribution function equation.
STBC AF relay for unmanned aircraft system
NASA Astrophysics Data System (ADS)
Adachi, Fumiyuki; Miyazaki, Hiroyuki; Endo, Chikara
2015-01-01
If a large-scale disaster similar to the Great East Japan Earthquake of 2011 happens, some areas may be isolated from the communications network. Recently, unmanned aircraft system (UAS) based wireless relay communication has been attracting much attention since it is able to quickly re-establish the connection between isolated areas and the network. However, the channel between the ground station (GS) and an unmanned aircraft (UA) is unreliable due to the UA's swing motion and, as a consequence, the relay communication quality degrades. In this paper, we introduce space-time block coded (STBC) amplify-and-forward (AF) relaying for UAS based wireless relay communication to improve the relay communication quality. A group of UAs forms a single frequency network (SFN) to perform STBC-AF cooperative relaying. In STBC-AF relaying, only conjugate operations, block exchange, and amplification are required at the UAs. Therefore, STBC-AF relaying improves the relay communication quality while alleviating the complexity problem at the UAs. It is shown by computer simulation that STBC-AF relaying can achieve better throughput performance than conventional AF relaying.
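The space-time block code underlying STBC-AF relaying can be illustrated with the classic two-branch Alamouti scheme (a generic sketch of STBC encoding and linear combining over a flat-fading channel, not the cooperative AF protocol itself):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Rows are time slots, columns are transmit branches:
    slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining; estimates come out scaled by |h1|^2 + |h2|^2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Noiseless demonstration over a random complex channel
rng = np.random.default_rng(1)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
s1, s2 = 1 + 1j, -1 + 1j
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in slot 2
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat, s2_hat = alamouti_decode(r1, r2, h1, h2)
```

The combining step requires only conjugation and multiply-adds, which is consistent with the abstract's point that the relays need little more than conjugate operations and amplification.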
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Adamczyk, John J.; Miller, Christopher J.; Arnone, Andrea; Swanson, Charles
1993-01-01
The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This report is intended to serve as a computer program user's manual for the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
Erovic, Boban M; Chan, Harley H L; Daly, Michael J; Pothier, David D; Yu, Eugene; Coulson, Chris; Lai, Philip; Irish, Jonathan C
2014-01-01
Conventional computed tomography (CT) imaging is the standard imaging technique for temporal bone diseases, whereas cone-beam CT (CBCT) imaging is a very fast imaging tool with a significantly lower radiation dose than conventional CT. We hypothesize that a system for intraoperative cone-beam CT provides comparable image quality to diagnostic CT for identifying temporal bone anatomical landmarks in cadaveric specimens. Cross-sectional study. University tertiary care facility. Twenty cadaveric temporal bones were affixed into a head phantom and scanned with both a prototype cone-beam CT C-arm and multislice helical CT. Imaging performance was evaluated by 3 otologic surgeons and 1 head and neck radiologist. Participants were presented images in a randomized order and completed landmark identification questionnaires covering 21 structures. CBCT and multislice CT have comparable performance in identifying temporal structures. Three otologic surgeons indicated that CBCT provided statistically equivalent performance for 19 of 21 landmarks, with CBCT superior to CT for the chorda tympani and inferior for the crura of the stapes. Subgroup analysis showed that CBCT performed superiorly for temporal bone structures compared with CT. The radiologist rated CBCT and CT as statistically equivalent for 18 of 21 landmarks, with CT superior to CBCT for the crura of stapes, chorda tympani, and sigmoid sinus. CBCT provides comparable image quality to conventional CT for temporal bone anatomical sites in cadaveric specimens. Clinical applications of low-dose CBCT imaging in surgical planning, intraoperative guidance, and postoperative assessment are promising but require further investigation.
Statistical molecular design of balanced compound libraries for QSAR modeling.
Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K
2010-01-01
A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1993-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1992-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
Parallel software support for computational structural mechanics
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1987-01-01
A study of the application of the parallel programming methodology known as the Force was conducted. Two application issues were addressed. The first involves the efficiency of the implementation and its completeness in terms of satisfying the needs of other researchers implementing parallel algorithms. Support for, and interaction with, other Computational Structural Mechanics (CSM) researchers using the Force was the main issue, but some independent investigation of the Barrier construct, which is extremely important to overall performance, was also undertaken. Another efficiency issue which was addressed was that of relaxing the strong synchronization condition imposed on the self-scheduled parallel DO loop. The Force was extended by the addition of logical conditions to the cases of a parallel case construct and by the inclusion of a self-scheduled version of this construct. The second issue involved applying the Force to the parallelization of finite element codes such as those found in the NICE/SPAR testbed system. One of the more difficult problems encountered is the determination of what information in COMMON blocks is actually used outside of a subroutine and when a subroutine uses a COMMON block merely as scratch storage for internal temporary results.
Improving Communication of Diagnostic Radiology Findings through Structured Reporting
Panicek, David M.; Berk, Alexandra R.; Li, Yuelin; Hricak, Hedvig
2011-01-01
Purpose: To compare the content, clarity, and clinical usefulness of conventional (ie, free-form) and structured radiology reports of body computed tomographic (CT) scans, as evaluated by referring physicians, attending radiologists, and radiology fellows at a tertiary care cancer center. Materials and Methods: The institutional review board approved the study as a quality improvement initiative; no written consent was required. Three radiologists, three radiology fellows, three surgeons, and two medical oncologists evaluated 330 randomly selected conventional and structured radiology reports of body CT scans. For nonradiologists, reports were randomly selected from patients with diagnoses relevant to the physician’s area of specialization. Each physician read 15 reports in each format and rated both the content and clarity of each report from 1 (very dissatisfied or very confusing) to 10 (very satisfied or very clear). By using a previously published radiology report grading scale, physicians graded each report’s effectiveness in advancing the patient’s position on the clinical spectrum. Mixed-effects models were used to test differences between report types. Results: Mean content satisfaction ratings were 7.61 (95% confidence interval [CI]: 7.12, 8.16) for conventional reports and 8.33 (95% CI: 7.82, 8.86) for structured reports, and the difference was significant (P < .0001). Mean clarity satisfaction ratings were 7.45 (95% CI: 6.89, 8.02) for conventional reports and 8.25 (95% CI: 7.68, 8.82) for structured reports, and the difference was significant (P < .0001). Grade ratings did not differ significantly between conventional and structured reports. Conclusion: Referring clinicians and radiologists found that structured reports had better content and greater clarity than conventional reports. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101913/-/DC1 PMID:21518775
Johnson, Thomas M; Badovinac, Rachel; Shaefer, Jeffry
2007-09-01
Surveys were sent to Harvard School of Dental Medicine students and graduates from the classes of 2000 through 2006 to determine their current primary means of achieving mandibular anesthesia. Orthodontists and orthodontic residents were excluded. All subjects received clinical training in the conventional inferior alveolar nerve block and two alternative techniques (the Akinosi mandibular block and the Gow-Gates mandibular block) during their predoctoral dental education. This study tests the hypothesis that students and graduates who received training in the conventional inferior alveolar nerve block, the Akinosi mandibular block, and the Gow-Gates mandibular block will report more frequent current utilization of alternatives to the conventional inferior alveolar nerve block than clinicians trained in the conventional technique only. At the 95 percent confidence level, we estimated that between 3.7 percent and 16.1 percent (mean=8.5 percent) of clinicians trained in using the Gow-Gates technique use this injection technique primarily, and between 35.4 percent and 56.3 percent (mean=47.5 percent) of those trained in the Gow-Gates method never use this technique. At the same confidence level, between 0.0 percent and 3.8 percent (mean=0.0 percent) of clinicians trained in using the Akinosi technique use this injection technique primarily, and between 62.2 percent and 81.1 percent (mean=72.3 percent) of those trained in the Akinosi method never use this technique. No control group that was completely untrained in the Gow-Gates or Akinosi techniques was available for comparison. However, we presume that zero percent of clinicians who have not been trained in a given technique will use the technique in clinical practice. The confidence interval for the Gow-Gates method excludes this value, while the confidence interval for the Akinosi technique includes zero percent.
We conclude that, in the study population, formal clinical training in the Gow-Gates and Akinosi injection techniques led to a small but significant increase in current primary utilization of the Gow-Gates technique. No significant increase in current primary utilization of the Akinosi technique was found.
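Confidence intervals of the kind reported above can be sketched with the normal-approximation (Wald) interval for a proportion; this is a minimal illustration under the assumption that a normal approximation was used (the survey does not state its exact method):

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a
    proportion, truncated to [0, 1]. z = 1.96 for 95% coverage."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```

Note that the Wald interval collapses to a point when the observed proportion is 0 or 1, which is one reason intervals near 0 percent (as for the Akinosi technique) warrant careful interpretation.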
A structure preserving Lanczos algorithm for computing the optical absorption spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Meiyue; Jornada, Felipe H. da; Lin, Lin
2016-11-16
We present a new structure preserving Lanczos algorithm for approximating the optical absorption spectrum in the context of solving the full Bethe-Salpeter equation without the Tamm-Dancoff approximation. The new algorithm is based on a structure preserving Lanczos procedure, which exploits the special block structure of Bethe-Salpeter Hamiltonian matrices. A recently developed technique of generalized averaged Gauss quadrature is incorporated to accelerate the convergence. We also establish the connection between our structure preserving Lanczos procedure and several existing Lanczos procedures developed in different contexts. Numerical examples are presented to demonstrate the effectiveness of our Lanczos algorithm.
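The generic symmetric Lanczos procedure that such structure-preserving variants build on can be sketched as follows (plain Lanczos tridiagonalization, without the Bethe-Salpeter block structure or the quadrature acceleration of the paper):

```python
import numpy as np

def lanczos(A, v0, k):
    """k-step Lanczos tridiagonalization of a symmetric matrix A.
    Returns the diagonal (alphas) and off-diagonal (betas) of the
    tridiagonal matrix T; the eigenvalues of T (Ritz values)
    approximate the extreme spectrum of A.
    """
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros_like(q)
    b = 0.0
    alphas, betas = [], []
    for j in range(k):
        w = A @ q - b * q_prev          # three-term recurrence
        a = q @ w
        w = w - a * q
        alphas.append(a)
        b = np.linalg.norm(w)
        if j == k - 1 or b < 1e-12:     # done, or exact invariant subspace
            break
        betas.append(b)
        q_prev, q = q, w / b
    return np.array(alphas), np.array(betas)

rng = np.random.default_rng(0)
n = 40
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                        # symmetrize
alphas, betas = lanczos(A, rng.normal(size=n), 8)
T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
ritz = np.linalg.eigvalsh(T)
```

By theory, the Ritz values always lie inside the spectral interval of A, with the extreme ones converging first; the structure-preserving variant enforces this behavior on the non-symmetric Bethe-Salpeter Hamiltonian by exploiting its block form.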
The CSM testbed matrix processors internal logic and dataflow descriptions
NASA Technical Reports Server (NTRS)
Regelbrugge, Marc E.; Wright, Mary A.
1988-01-01
This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional cantilever and joined-wing configurations.
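The fully stressed design (FSD) pass behind the rapid weight estimate can be sketched for the statically determinate case, where member forces do not change with sizing (a generic FSD resizing rule, not the program's implementation):

```python
def fully_stressed_design(forces, areas, sigma_allow, tol=1e-8, max_iter=50):
    """FSD resizing: scale each member area by |stress| / allowable
    stress. forces are member axial forces (assumed fixed, i.e. a
    statically determinate structure), areas are initial cross sections.
    """
    for _ in range(max_iter):
        stresses = [F / A for F, A in zip(forces, areas)]
        new = [A * abs(s) / sigma_allow for A, s in zip(areas, stresses)]
        if max(abs(a - b) for a, b in zip(new, areas)) < tol:
            return new
        areas = new
    return areas
```

Since A * |F/A| / sigma_allow = |F| / sigma_allow, the determinate case converges in a single pass; for redundant structures the forces redistribute and the loop iterates, which is why FSD is only an estimate of the optimum.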
Application of the implicit MacCormack scheme to the PNS equations
NASA Technical Reports Server (NTRS)
Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.
1983-01-01
The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.
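The explicit MacCormack predictor-corrector that the implicit variant extends can be sketched for 1-D linear advection with periodic boundaries (illustrative only; the paper's implicit PNS formulation is considerably more involved):

```python
import numpy as np

def maccormack_step(u, nu):
    """One explicit MacCormack step for u_t + a u_x = 0 on a periodic
    grid, with Courant number nu = a * dt / dx. The predictor uses
    forward differences, the corrector backward differences.
    """
    up = u - nu * (np.roll(u, -1) - u)                    # predictor
    return 0.5 * (u + up - nu * (up - np.roll(up, 1)))    # corrector

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)
u1 = maccormack_step(u0, 1.0)
```

At a Courant number of one the scheme reduces to an exact one-cell shift, which makes a convenient correctness check for the implementation.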
Final report for “Extreme-scale Algorithms and Solver Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2017-06-30
This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
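The reproducible dot-product idea, separating the reduction order from the data decomposition, can be sketched with a fixed global reduction tree (an illustrative scheme, not the method of the report):

```python
import numpy as np

def fixed_order_dot(x, y, lo=0, hi=None):
    """Dot product reduced over a fixed binary tree on global indices.
    Processes may own different subtrees, but every evaluation performs
    the same additions in the same order, so the result is bitwise
    reproducible regardless of how the vectors are decomposed.
    """
    if hi is None:
        hi = len(x)
    if hi - lo == 1:
        return x[lo] * y[lo]
    mid = (lo + hi) // 2
    return fixed_order_dot(x, y, lo, mid) + fixed_order_dot(x, y, mid, hi)

rng = np.random.default_rng(2)
x, y = rng.normal(size=64), rng.normal(size=64)
whole = fixed_order_dot(x, y)
# Two 'processes' evaluating the top-level subtrees and combining:
split = fixed_order_dot(x, y, 0, 32) + fixed_order_dot(x, y, 32, 64)
```

Because the tree is defined on global indices rather than on process boundaries, the two evaluations above perform identical floating-point operations and agree to the last bit, whereas naive per-process partial sums generally round differently for different process counts.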
A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan
2011-01-01
Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storages to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify their block-device level characteristics and macrobenchmarks to reveal their filesystem level characteristics.
FaCSI: A block parallel preconditioner for fluid-structure interaction in hemodynamics
NASA Astrophysics Data System (ADS)
Deparis, Simone; Forti, Davide; Grandperrin, Gwenol; Quarteroni, Alfio
2016-12-01
Modeling Fluid-Structure Interaction (FSI) in the vascular system is mandatory to reliably compute mechanical indicators in vessels undergoing large deformations. In order to cope with the computational complexity of the coupled 3D FSI problem after discretizations in space and time, a parallel solution is often mandatory. In this paper we propose a new block parallel preconditioner for the coupled linearized FSI system obtained after space and time discretization. We name it FaCSI to indicate that it exploits the Factorized form of the linearized FSI matrix, the use of static Condensation to formally eliminate the interface degrees of freedom of the fluid equations, and the use of a SIMPLE preconditioner for saddle-point problems. FaCSI is built upon a block Gauss-Seidel factorization of the FSI Jacobian matrix and it uses ad-hoc preconditioners for each physical component of the coupled problem, namely the fluid, the structure and the geometry. In the fluid subproblem, after operating static condensation of the interface fluid variables, we use a SIMPLE preconditioner on the reduced fluid matrix. Moreover, to efficiently deal with a large number of processes, FaCSI exploits efficient single field preconditioners, e.g., based on domain decomposition or the multigrid method. We measure the parallel performances of FaCSI on a benchmark cylindrical geometry and on a problem of physiological interest, namely the blood flow through a patient-specific femoropopliteal bypass. We analyze the dependence of the number of linear solver iterations on the cores count (scalability of the preconditioner) and on the mesh size (optimality).
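The block Gauss-Seidel factorization at the core of FaCSI can be illustrated on a generic 2x2 block system (exact inner solves stand in for the per-field preconditioners; this is a sketch of the algebraic structure, not the FaCSI implementation):

```python
import numpy as np

def block_gs_apply(A11, A21, A22, r1, r2):
    """Apply the lower block Gauss-Seidel preconditioner
    P = [[A11, 0], [A21, A22]] by forward substitution: solve P z = r.
    In FaCSI-like solvers the two solves below are replaced by inner
    preconditioners for each physical field (fluid, structure, ...).
    """
    z1 = np.linalg.solve(A11, r1)
    z2 = np.linalg.solve(A22, r2 - A21 @ z1)
    return z1, z2

rng = np.random.default_rng(3)
A11 = 2.0 * np.eye(3)                 # hypothetical field-1 block
A22 = 2.0 * np.eye(3)                 # hypothetical field-2 block
A21 = 0.1 * rng.normal(size=(3, 3))   # weak coupling block
r1, r2 = rng.normal(size=3), rng.normal(size=3)
z1, z2 = block_gs_apply(A11, A21, A22, r1, r2)
```

Dropping the upper coupling block from the preconditioner is what makes the application cheap; the quality of the resulting Krylov preconditioner then hinges on how well each diagonal block solve is approximated, which is where the SIMPLE and multigrid components of FaCSI enter.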
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Liwen, E-mail: lwcheng@yzu.edu.cn; Chen, Haitao; Wu, Shudong
2015-08-28
The effects of removing the AlGaN electron blocking layer (EBL), and using a last quantum barrier (LQB) with a unique design in conventional blue InGaN light-emitting diodes (LEDs), were investigated through simulations. Compared with the conventional LED design that contained a GaN LQB and an AlGaN EBL, the LED that contained an AlGaN LQB with a graded composition and no EBL exhibited enhanced optical performance and less efficiency droop. This effect was caused by enhanced electron confinement and hole injection efficiency. Furthermore, when the AlGaN LQB was replaced with a triangular graded-composition one, the performance improved further and the efficiency droop was lowered. The simulation results indicated that the enhanced hole injection efficiency and uniform distribution of carriers observed in the quantum wells were caused by the smoothing and thinning of the potential barrier for the holes. This allowed a greater number of holes to tunnel into the quantum wells from the p-type regions in the proposed LED structure.
Efficient block processing of long duration biotelemetric brain data for health care monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soumya, I.; Zia Ur Rahman, M., E-mail: mdzr-5@ieee.org; Rama Koti Reddy, D. V.
In a real-time clinical environment, the brain signals which doctors need to analyze are usually very long. Such a scenario can be made simpler by partitioning the input signal into several blocks and applying signal conditioning. This paper presents various block-based adaptive filter structures for obtaining high-resolution electroencephalogram (EEG) signals, which estimate the deterministic components of the EEG signal by removing noise. To process these long duration signals, we propose the Time domain Block Least Mean Square (TDBLMS) algorithm for brain signal enhancement. In order to improve the filtering capability, we introduce normalization in the weight update recursion of TDBLMS, which results in the TD-B-Normalized-LMS algorithm. To increase accuracy and resolution in the proposed noise cancelers, we implement the time domain cancelers in the frequency domain, which results in the frequency domain TDBLMS and FD-B-Normalized-LMS algorithms. Finally, we applied these algorithms to real EEG signals obtained from humans using an Emotiv EPOC EEG recorder and compared their performance with the conventional LMS algorithm. The results show that the performance of the block-based algorithms is superior to their LMS counterparts in terms of signal-to-noise ratio, convergence rate, excess mean square error, misadjustment, and coherence.
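A time-domain block LMS filter of the kind described can be sketched as follows: within each block the weights are frozen, the gradient contributions are accumulated, and a single update is applied at the block boundary. The tap count, block length, and step size below are illustrative choices, not the paper's settings.

```python
# Minimal time-domain block LMS (TDBLMS-style) adaptive filter sketch.

def block_lms(x, d, num_taps=4, block_len=8, mu=0.01):
    """Return (output, error) sequences of a block LMS noise canceler.
    x: reference input, d: desired (noisy) signal, same length."""
    w = [0.0] * num_taps
    y_out, e_out = [], []
    buf = [0.0] * num_taps                 # delay line of past inputs
    grad = [0.0] * num_taps                # gradient accumulated over the block
    for n, xn in enumerate(x):
        buf = [xn] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = d[n] - y
        y_out.append(y)
        e_out.append(e)
        grad = [g + e * xi for g, xi in zip(grad, buf)]
        if (n + 1) % block_len == 0:       # weight update only at block boundary
            w = [wi + mu * g for wi, g in zip(w, grad)]
            grad = [0.0] * num_taps
    return y_out, e_out
```

The normalized variant mentioned in the abstract would additionally divide the update by the block's input power; the frequency-domain variants replace the convolution and gradient accumulation with FFT-based block operations.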
Anesthesiology training using 3D imaging and virtual reality
NASA Astrophysics Data System (ADS)
Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.
1996-04-01
Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers, both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.
NASA Technical Reports Server (NTRS)
Lutzky, D.; Bjorkman, W. S.
1973-01-01
The Mission Analysis Evaluation and Space Trajectory Operations program known as MAESTRO is described. MAESTRO is an all-FORTRAN, block style, computer program designed to perform various mission control tasks. This manual is a guide to MAESTRO, providing individuals the capability of modifying the program to suit their needs. Descriptions are presented of each of the subroutines; each description consists of an input/output description, theory, a subroutine description, and a flow chart where applicable. The programmer's manual also contains a detailed description of the common blocks, a subroutine cross reference map, and a general description of the program structure.
GPU-accelerated FDTD modeling of radio-frequency field-tissue interactions in high-field MRI.
Chi, Jieru; Liu, Feng; Weber, Ewald; Li, Yu; Crozier, Stuart
2011-06-01
The analysis of high-field RF field-tissue interactions requires high-performance finite-difference time-domain (FDTD) computing. Conventional CPU-based FDTD calculations offer limited computing performance in a PC environment. This study presents a graphics processing unit (GPU)-based parallel-computing framework, producing substantially boosted computing efficiency (with a two-order speedup factor) at a PC-level cost. Specific details of implementing the FDTD method on a GPU architecture have been presented and the new computational strategy has been successfully applied to the design of a novel 8-element transceive RF coil system at 9.4 T. Facilitated by the powerful GPU-FDTD computing, the new RF coil array offers optimized fields (averaging 25% improvement in sensitivity, and 20% reduction in loop coupling compared with conventional array structures of the same size) for small animal imaging with a robust RF configuration. The GPU-enabled acceleration paves the way for FDTD to be applied for both detailed forward modeling and inverse design of MRI coils, which were previously impractical.
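The leapfrog E/H updates that a GPU implementation parallelizes cell-by-cell can be illustrated in one dimension. This is a generic normalized sketch (free space, Courant number 0.5, soft Gaussian source), not the paper's 3D bio-electromagnetic solver; grid size, step count, and source position are arbitrary assumptions.

```python
import math

# Minimal 1D FDTD leapfrog update loop in normalized units.
# Each inner loop iteration is independent of the others, which is exactly
# the per-cell parallelism a GPU kernel exploits.

def fdtd_1d(num_cells=200, num_steps=300, src_pos=100):
    ez = [0.0] * num_cells        # electric field
    hy = [0.0] * num_cells        # magnetic field
    for t in range(num_steps):
        # H update (one thread per cell on a GPU)
        for i in range(num_cells - 1):
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        # E update
        for i in range(1, num_cells):
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        # soft Gaussian source injected at one cell
        ez[src_pos] += math.exp(-((t - 30.0) / 10.0) ** 2)
    return ez
```

A 3D tissue model adds material coefficients per cell and two more field components per update, but the data-parallel structure is the same.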
NASA Astrophysics Data System (ADS)
Burke, Christopher; Reddy, Abhiram; Prasad, Ishan; Grason, Gregory
Block copolymer (BCP) melts form a number of symmetric microphases, e.g. columnar or double gyroid phases. BCPs with a block composed of chiral monomers are observed to form bulk phases with broken chiral symmetry, e.g. a phase of hexagonally ordered helical mesodomains. Other new structures may be possible, e.g. a double gyroid with preferred chirality, which has potential photonic applications. One approach to understanding chirality transfer from monomer to the bulk is to use self-consistent field theory (SCFT) and incorporate an orientational order parameter with a preference for handed twist in chiral block segments, much like the texture of a cholesteric liquid crystal. Polymer chains in achiral BCPs exhibit orientational ordering which couples to the microphase geometry; a spontaneous preference for ordering may have an effect on the geometry. The influence of a preference for chiral polar (vectorial) segment order has been studied to some extent, though the influence of coupling to chiral tensorial (nematic) order has not yet been developed. We present a computational approach using SCFT with vector and tensor order which employs well-developed pseudo-spectral methods. Using this we explore how tensor order influences which structures form, and whether it can promote chiral phases.
The geometry of folds in granitoid rocks of northeastern Alberta
NASA Astrophysics Data System (ADS)
Willem Langenberg, C.; Ramsden, John
1980-06-01
Granitoid rocks which predominate in the Precambrian shield of northeastern Alberta show large-scale fold structures. A numerical procedure has been used to obtain modal foliation orientations. This procedure results in the smoothing of folded surfaces that show roughness on a detailed scale. Statistical tests are used to divide the study areas into cylindrical domains. Structural sections can be obtained for each domain, and horizontal and vertical sections are used to construct block diagrams. The projections are performed numerically and plotted by computer. This method permits blocks to be viewed from every possible angle. Both perspective and orthographic projections can be produced. The geometries of a dome in the Tulip Lake area and a synform in the Hooker Lake area have been obtained. The domal structure is compared with polyphase deformational interference patterns and with experimental diapiric structures obtained in a centrifuge system. The synform in the Hooker Lake area may be genetically related to the doming in the Tulip Lake area.
An electrostatic mechanism for Ca2+-mediated regulation of gap junction channels
Bennett, Brad C.; Purdy, Michael D.; Baker, Kent A.; Acharya, Chayan; McIntire, William E.; Stevens, Raymond C.; Zhang, Qinghai; Harris, Andrew L.; Abagyan, Ruben; Yeager, Mark
2016-01-01
Gap junction channels mediate intercellular signalling that is crucial in tissue development, homeostasis and pathologic states such as cardiac arrhythmias, cancer and trauma. To explore the mechanism by which Ca2+ blocks intercellular communication during tissue injury, we determined the X-ray crystal structures of the human Cx26 gap junction channel with and without bound Ca2+. The two structures were nearly identical, ruling out both a large-scale structural change and a local steric constriction of the pore. Ca2+ coordination sites reside at the interfaces between adjacent subunits, near the entrance to the extracellular gap, where local, side chain conformational rearrangements enable Ca2+ chelation. Computational analysis revealed that Ca2+-binding generates a positive electrostatic barrier that substantially inhibits permeation of cations such as K+ into the pore. Our results provide structural evidence for a unique mechanism of channel regulation: ionic conduction block via an electrostatic barrier rather than steric occlusion of the channel pore. PMID:26753910
Decomposition Algorithm for Global Reachability on a Time-Varying Graph
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2010-01-01
A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
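The layer-by-layer decomposition enabled by the upper block-triangular adjacency structure can be sketched as a search over a time-expanded graph: because edges only go from time t to t+1, reachability is computed one time block at a time instead of over one monolithic graph. The dictionary edge representation below is a simplifying assumption for illustration.

```python
# Reachability on a time-expanded graph with forward-in-time edges only.
# edges_per_step[t] maps a location to the locations reachable at time t+1;
# each dictionary corresponds to one off-diagonal block of the upper
# block-triangular adjacency matrix.

def layered_reachability(edges_per_step, start_locs):
    """Return the set of reachable locations at each time layer."""
    reachable = [set(start_locs)]
    for step_edges in edges_per_step:       # process time blocks in order
        nxt = set()
        for loc in reachable[-1]:
            nxt.update(step_edges.get(loc, ()))
        reachable.append(nxt)
    return reachable
```

Only one layer's frontier is held at a time (plus the history if it is wanted), which is why the decomposed approach needs far less memory than a search over the full space-time graph.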
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and superconducting hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Gyroid Nickel Nanostructures from Diblock Copolymer Supramolecules
Vukovic, Ivana; Punzhin, Sergey; Voet, Vincent S. D.; Vukovic, Zorica; de Hosson, Jeff Th. M.; ten Brinke, Gerrit; Loos, Katja
2014-01-01
Nanoporous metal foams possess a unique combination of properties - they are catalytically active, thermally and electrically conductive, and furthermore, have high porosity, high surface-to-volume and strength-to-weight ratio. Unfortunately, common approaches for preparation of metallic nanostructures render materials with highly disordered architecture, which might have an adverse effect on their mechanical properties. Block copolymers have the ability to self-assemble into ordered nanostructures and can be applied as templates for the preparation of well-ordered metal nanofoams. Here we describe the application of a block copolymer-based supramolecular complex - polystyrene-block-poly(4-vinylpyridine)(pentadecylphenol) PS-b-P4VP(PDP) - as a precursor for well-ordered nickel nanofoam. The supramolecular complexes exhibit a phase behavior similar to conventional block copolymers and can self-assemble into the bicontinuous gyroid morphology with two PS networks placed in a P4VP(PDP) matrix. PDP can be dissolved in ethanol leading to the formation of a porous structure that can be backfilled with metal. Using electroless plating technique, nickel can be inserted into the template's channels. Finally, the remaining polymer can be removed via pyrolysis from the polymer/inorganic nanohybrid resulting in nanoporous nickel foam with inverse gyroid morphology. PMID:24797367
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics on grid generation strategies to model control surface deflections and material mapping are also addressed.
Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1992-01-01
One of the major achievements in engineering science has been the development of computer algorithms for solving nonlinear differential equations such as the Navier-Stokes equations. In the past, limited computer resources have motivated the development of efficient numerical schemes in computational fluid dynamics (CFD) utilizing structured meshes. The use of structured meshes greatly simplifies the implementation of CFD algorithms on conventional computers. Unstructured grids, on the other hand, offer an alternative for modeling complex geometries. Unstructured meshes have irregular connectivity and usually contain combinations of triangles, quadrilaterals, tetrahedra, and hexahedra. The generation and use of unstructured grids poses new challenges in CFD. The purpose of this note is to present recent developments in unstructured grid generation and flow solution technology.
Djumas, Lee; Molotnikov, Andrey; Simon, George P.; Estrin, Yuri
2016-01-01
Structural composites inspired by nacre have emerged as prime exemplars for guiding materials design of fracture-resistant, rigid hybrid materials. The intricate microstructure of nacre, which combines a hard majority phase with a small fraction of a soft phase, achieves superior mechanical properties compared to its constituents and has generated much interest. However, replicating the hierarchical microstructure of nacre is very challenging, not to mention improving it. In this article, we propose to alter the geometry of the hard building blocks by introducing the concept of topological interlocking. This design principle has previously been shown to provide an inherently brittle material with a remarkable flexural compliance. We now demonstrate that by combining the basic architecture of nacre with topological interlocking of discrete hard building blocks, hybrid materials of a new type can be produced. By adding a soft phase at the interfaces between topologically interlocked blocks in a single-build additive manufacturing process, further improvement of mechanical properties is achieved. The design of these fabricated hybrid structures has been guided by computational work elucidating the effect of various geometries. To our knowledge, this is the first reported study that combines the advantages of nacre-inspired structures with the benefits of topological interlocking. PMID:27216277
Inverse-designed stretchable metalens with tunable focal distance
NASA Astrophysics Data System (ADS)
Callewaert, Francois; Velev, Vesselin; Jiang, Shizhou; Sahakian, Alan Varteres; Kumar, Prem; Aydin, Koray
2018-02-01
In this paper, we present an inverse-designed 3D-printed all-dielectric stretchable millimeter wave metalens with a tunable focal distance. A computational inverse-design method is used to design a flat metalens made of disconnected polymer building blocks with complex shapes, as opposed to conventional monolithic lenses. The proposed metalens provides better performance than a conventional Fresnel lens, using a smaller amount of material and enabling larger focal distance tunability. The metalens is fabricated using a commercial 3D-printer and attached to a stretchable platform. Measurements and simulations show that the focal distance can be tuned by a factor of 4 with a stretching factor of only 75%, a nearly diffraction-limited focal spot, and a 70% relative focusing efficiency, defined as the ratio between power focused in the focal spot and power going through the focal plane. The proposed platform can be extended for design and fabrication of multiple electromagnetic devices working from visible to microwave radiation depending on scaling of the devices.
2012-09-01
allowing it to dry or baking it in a kiln. A modern factory would take a block of raw material and then use machinery to pare away unnecessary... conventional "subtractive manufacturing"—taking a block of raw material and removing excess until the finished product remains—the process as a whole...is more efficient and less wasteful. Another major benefit of AM is the fact that complexity is "free." In conventional manufacturing, increasing
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and the parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used as a criterion of mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested a cube-wise algorithm switching where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
Performance review using sequential sampling and a practice computer.
Difford, F
1988-06-01
The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.
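Sequential sample analysis of this kind is classically based on Wald's sequential probability ratio test (SPRT), which examines cases one at a time and stops as soon as the evidence crosses an acceptance or rejection boundary. The sketch below illustrates the idea for a binomial error rate; the rates and error probabilities are illustrative assumptions, not the paper's parameters.

```python
import math

# Wald's SPRT for a binomial proportion. p0 is the acceptable event rate,
# p1 the unacceptable one; alpha/beta are the desired type I/II error rates.

def sprt(observations, p0=0.1, p1=0.3, alpha=0.05, beta=0.05):
    """Return ('accept', n), ('reject', n) or ('continue', n).
    observations: iterable of 0/1 outcomes (1 = adverse event)."""
    lower = math.log(beta / (1 - alpha))        # acceptance boundary
    upper = math.log((1 - beta) / alpha)        # rejection boundary
    llr = 0.0                                   # cumulative log-likelihood ratio
    for n, x in enumerate(observations, start=1):
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return ('accept', n)    # performance acceptable; stop sampling
        if llr >= upper:
            return ('reject', n)    # performance unacceptable; stop sampling
    return ('continue', len(observations))
```

The appeal for repeated performance review is that a clear-cut result terminates early, so far fewer records need to be pulled than with a fixed-size sample.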
Consolidation of archaeological gypsum plaster by bacterial biomineralization of calcium carbonate.
Jroundi, Fadwa; Gonzalez-Muñoz, Maria Teresa; Garcia-Bueno, Ana; Rodriguez-Navarro, Carlos
2014-09-01
Gypsum plasterworks and decorative surfaces are easily degraded, especially when exposed to humidity, and thus they require protection and/or consolidation. However, the conservation of historical gypsum-based structural and decorative materials by conventional organic and inorganic consolidants shows limited efficacy. Here, a new method based on the bioconsolidation capacity of carbonatogenic bacteria inhabiting the material was assayed on historical gypsum plasters and compared with conventional consolidation treatments (ethyl silicate; methylacrylate-ethylmethacrylate copolymer and polyvinyl butyral). Conventional products do not reach in-depth consolidation, typically forming a thin impervious surface layer which blocks pores. In contrast, the bacterial treatment produces vaterite (CaCO3) biocement, which does not block pores and produces a good level of consolidation, both at the surface and in-depth, as shown by drilling resistance measurement system analyses. Transmission electron microscopy analyses show that bacterial vaterite cement formed via oriented aggregation of CaCO3 nanoparticles (∼20nm in size), resulting in mesocrystals which incorporate bacterial biopolymers. Such a biocomposite has superior mechanical properties, thus explaining the fact that drilling resistance of bioconsolidated gypsum plasters is within the range of inorganic calcite materials of equivalent porosity, despite the fact that the bacterial vaterite cement accounts for only a 0.02 solid volume fraction. Bacterial bioconsolidation is proposed for the effective consolidation of this type of material. The potential applications of bacterial calcium carbonate consolidation of gypsum biomaterials used as bone graft substitutes are discussed.
Composite theory applied to elastomers
NASA Technical Reports Server (NTRS)
Clark, S. K.
1986-01-01
Reinforced elastomers form the basis for most of the structural or load-carrying applications of rubber products. Computer-based structural analysis in the form of finite element codes has been highly successful in refining structural design in both isotropic materials and rigid composites. This has led the rubber industry to attempt to make use of such techniques in the design of structural cord-rubber composites. While such efforts appear promising, they have not been easy to achieve for several reasons. Among these is a distinct lack of a clearly defined set of material property descriptors suitable for computer analysis. There are substantial differences between conventional steel, aluminum, or even rigid composites such as graphite-epoxy, and textile-cord reinforced rubber. These differences, which are both conceptual and practical, are discussed.
26 CFR 1.1248-3 - Earnings and profits attributable to stock in complex cases.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 2. (5) Share or block. In general, the computation under this paragraph shall be made separately for each share of stock sold or exchanged, except that if a group of shares constitute a block of stock the computation may be made in respect of the block. For purposes of this section, the term block of stock means a...
26 CFR 1.1248-3 - Earnings and profits attributable to stock in complex cases.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 2. (5) Share or block. In general, the computation under this paragraph shall be made separately for each share of stock sold or exchanged, except that if a group of shares constitute a block of stock the computation may be made in respect of the block. For purposes of this section, the term block of stock means a...
26 CFR 1.1248-3 - Earnings and profits attributable to stock in complex cases.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 2. (5) Share or block. In general, the computation under this paragraph shall be made separately for each share of stock sold or exchanged, except that if a group of shares constitute a block of stock the computation may be made in respect of the block. For purposes of this section, the term block of stock means a...
26 CFR 1.1248-3 - Earnings and profits attributable to stock in complex cases.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 2. (5) Share or block. In general, the computation under this paragraph shall be made separately for each share of stock sold or exchanged, except that if a group of shares constitute a block of stock the computation may be made in respect of the block. For purposes of this section, the term block of stock means a...
26 CFR 1.1248-3 - Earnings and profits attributable to stock in complex cases.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 2. (5) Share or block. In general, the computation under this paragraph shall be made separately for each share of stock sold or exchanged, except that if a group of shares constitute a block of stock the computation may be made in respect of the block. For purposes of this section, the term block of stock means a...
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
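The first stage of such a filter, partitioning points into 2-D xy blocks and then binning each block's points into 3-D voxels, can be sketched as follows. Dictionaries stand in for the paper's octree structure, and the block/voxel sizes and helper names are illustrative assumptions.

```python
from collections import defaultdict

# Stage 1 of a voxel-based terrain filter: 2-D block partition + 3-D voxel bins.

def voxelize(points, block_size=10.0, voxel_size=0.5):
    """points: iterable of (x, y, z). Returns {block_key: {voxel_key: [pts]}}."""
    blocks = defaultdict(lambda: defaultdict(list))
    for x, y, z in points:
        bkey = (int(x // block_size), int(y // block_size))          # 2-D block
        vkey = (int(x // voxel_size), int(y // voxel_size),
                int(z // voxel_size))                                # 3-D voxel
        blocks[bkey][vkey].append((x, y, z))
    return blocks

def lowest_voxels(block):
    """Seed voxels for the upward-growing step: lowest voxel per xy column."""
    lowest = {}
    for (vx, vy, vz) in block:
        key = (vx, vy)
        if key not in lowest or vz < lowest[key]:
            lowest[key] = vz
    return lowest
```

The upward-growing step would then climb from each seed voxel, labelling voxels as terrain while the vertical jump stays below the global and local thresholds.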
Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures
Fung, J.; Aulwes, R. T.; Bement, M. T.; ...
2015-07-14
This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphics processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. Out of a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
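Cache blocking (loop tiling) of the sort surveyed can be illustrated on a matrix transpose: the i/j loops are split into tiles so that each tile's working set stays resident in cache. Python only shows the loop structure; the performance benefit materializes in compiled code on real hardware, and the tile size here is an arbitrary assumption.

```python
# Cache-blocked (tiled) matrix transpose. Without tiling, either the reads
# or the writes stride through memory and thrash the cache; with tiling,
# each tile of the input and output fits in cache while it is processed.

def transpose_blocked(a, tile=32):
    n, m = len(a), len(a[0])
    out = [[0.0] * n for _ in range(m)]
    for ii in range(0, n, tile):                     # loop over row tiles
        for jj in range(0, m, tile):                 # loop over column tiles
            for i in range(ii, min(ii + tile, n)):   # work within one tile
                for j in range(jj, min(jj + tile, m)):
                    out[j][i] = a[i][j]
    return out
```

The same tiling pattern applies to matrix assembly and stencil sweeps; the tile size is typically tuned so that a few tiles fit in the last-level cache.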
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, adopting the following approaches: a lookup table for the Gaussian probability matrix is established to avoid repetitive probability calculations on all pixels; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the structure of the lookup table is changed from 3-D to 1-D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the OTSU method to decide the threshold value automatically. Our algorithm has been tested on segmentation of flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
Highly Conductive Anion Exchange Block Copolymers
We are developing a comprehensive fundamental understanding of the interplay between transport and morphology in newly synthesized hydroxide...conducting block copolymers. We are synthesizing hydroxide conducting block copolymers of various (1) morphology types, (2) ionic concentrations, and (3...ionic domain sizes. We are carefully characterizing the morphology and transport properties using both conventional and new advanced in situ techniques
Computational neuropharmacology: dynamical approaches in drug discovery.
Aradi, Ildiko; Erdi, Péter
2006-05-01
Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.
Bifilar analysis users manual, volume 2
NASA Technical Reports Server (NTRS)
Cassarino, S. J.
1980-01-01
The digital computer program developed to study the vibration response of a coupled rotor/bifilar/airframe coupled system is described. The theoretical development of the rotor/airframe system equations of motion is provided. The fuselage and bifilar absorber equations of motion are discussed. The modular block approach used in the make-up of this computer program is described. The input data needed to run the rotor and bifilar absorber analyses is described. Sample output formats are presented and discussed. The results for four test cases, which use the major logic paths of the computer program, are presented. The overall program structure is discussed in detail. The FORTRAN subroutines are described in detail.
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging applications, forming narrower Doppler filter groups than the FFT with lower side lobe levels. Unfortunately, the IAA is iterative, and requires matrix multiplication and inversion when forming the covariance matrix and its inverse and when traversing the WLS estimate for each sampling point, resulting in notably high (cubic-time) computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication operation, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve its fast solution and the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys a lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and the operational efficiency.
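The Toeplitz-exploiting step at the heart of this speed-up can be illustrated with SciPy's Levinson-recursion solver, which solves a Hermitian Toeplitz system in O(n²) instead of the O(n³) of a generic dense solve. This is only a sketch of that one step, with made-up covariance values; it is not the FIAA pipeline itself.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Hermitian Toeplitz "covariance" defined by its first column (hypothetical values;
# diagonally dominant, so the system is well posed).
c = np.array([2.0 + 0.0j, 0.5 - 0.2j, 0.1 + 0.1j, 0.05j])
R = toeplitz(c)              # full matrix, built only for the O(n^3) reference solve
b = np.array([1.0, 0.0, 0.5, 0.25], dtype=complex)

x_fast = solve_toeplitz((c, c.conj()), b)   # Levinson recursion: O(n^2), no full R needed
x_ref = np.linalg.solve(R, b)               # generic dense solve: O(n^3)
assert np.allclose(x_fast, x_ref)
```

In an IAA-style loop this solve is repeated for every sampling point, which is why replacing the dense inversion with a structure-exploiting one changes the overall complexity class.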
Gu, Ming-liang; Chu, Jia-you
2007-12-01
The human genome has haplotype and haplotype-block structures that provide valuable information on human evolutionary history and may lead to the development of more efficient strategies to identify genetic variants that increase susceptibility to complex diseases. The genome can be divided into discrete blocks of limited haplotype diversity. In each block, a small fraction of "tag SNPs" can be used to distinguish a large fraction of the haplotypes. These tag SNPs can be potentially useful for construction of haplotypes and haplotype blocks, and for association studies in complex diseases. There are two general classes of methods to construct haplotypes and haplotype blocks, based on genotypes in large pedigrees and on statistical algorithms, respectively. The authors evaluate several construction methods to assess the power of different association tests under a variety of disease models and block-partitioning criteria. The advantages, limitations, and applications of each method in association studies are discussed. With the completion of the HapMap and the development of statistical algorithms for haplotype reconstruction, approaches to haplotype construction that combine mathematics, physics, and computer science will have profound impacts on population genetics, on the localization and cloning of susceptibility genes in complex diseases, and on related domains of life science.
On modelling three-dimensional piezoelectric smart structures with boundary spectral element method
NASA Astrophysics Data System (ADS)
Zou, Fangxin; Aliabadi, M. H.
2017-05-01
The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated against the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.
Programmable DNA-Mediated Multitasking Processor.
Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin
2015-04-30
Because of DNA's appealing features as a material, including its minuscule size, defined structural repeat and rigidity, programmable DNA-mediated processing is a promising computing paradigm, which employs DNAs as information storing and processing substrates to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with stepwise signal cascades. As an example of multitasking capability, we present an in vitro programmable DNA-mediated optimal-route-planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as conducting massive data storage and simultaneous processing via much fewer materials than conventional silicon devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearlberg, J.L.; Sandler, M.A.; Kvale, P.
1985-03-01
Laser therapy is a new modality for treatment of airway lesions. The authors examined 18 patients prior to laser photoresection of tracheobronchial lesions. Thirteen had cancers involving the distal trachea, carina, and/or proximal bronchi; five had benign lesions of the middle or proximal trachea. Each patient was examined by conventional linear tomography (CLT) and computed tomography (CT). CT was valuable in patients who had lesions of the distal trachea, carina, and/or proximal bronchi. Its particular usefulness, and its advantage relative to CLT, consisted in its ability to delineate vascular structures adjacent to the planned area of photoresection. Neither CLT nor CT was helpful in evaluation of benign lesions of the proximal trachea.
Calculating a checksum with inactive networking components in a computing system
Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T
2014-12-16
Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
Calculating a checksum with inactive networking components in a computing system
Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T
2015-01-27
Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
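The offloading flow claimed in these patent abstracts can be paraphrased in a few lines. This is a toy sketch with hypothetical class names, using CRC-32 as a stand-in for whatever checksum engine the hardware provides:

```python
import zlib

class InactiveComponent:
    """Idle networking component whose checksum engine can be borrowed."""
    def compute_checksum(self, block: bytes) -> int:
        # CRC-32 stands in for the component's hardware checksum engine.
        return zlib.crc32(block) & 0xFFFFFFFF

class ActiveComponent:
    """Busy component that transmits data but delegates checksumming."""
    def send(self, block: bytes, checksum: int) -> dict:
        return {"data": block, "checksum": checksum}

def transmit(block: bytes) -> dict:
    # The distribution manager identifies an inactive component, hands it the
    # block metadata, collects the checksum, and lets the active component send.
    idle = InactiveComponent()
    checksum = idle.compute_checksum(block)
    return ActiveComponent().send(block, checksum)

msg = transmit(b"payload")
```

The point of the scheme is load distribution: the checksum arithmetic runs on hardware that would otherwise sit idle, while the active component only assembles and transmits the message.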
Studies of Learning and Self-Contained Educational Systems, 1973-1976
1976-03-01
KEY WORDS (Continue on reverse side if necessary and identify by block number): Learning, teaching, memory, tutorial instruction. 20. ABSTRACT ...poorly acquired or because the learner might have missed exposure to that part of the material, then the rest of the structure is weakened and may...War in the computer data bank, including the causal structure of the actions during the war. We are able to use the data base to interact with a
NASA Technical Reports Server (NTRS)
Hecht-Nielsen, Robert
1990-01-01
The present work is intended to give technologists, research scientists, and mathematicians a graduate-level overview of the field of neurocomputing. After exploring the relationship of this field to general neuroscience, attention is given to neural network building blocks, the self-adaptation equations of learning laws, the data-transformation structures of associative networks, and the multilayer data-transformation structures of mapping networks. Also treated are the neurocomputing frontiers of spatiotemporal, stochastic, and hierarchical networks, 'neurosoftware', the creation of neural network-based computers, and neurocomputing applications in sensor processing, control, and data analysis.
Reinforcement learning for resource allocation in LEO satellite networks.
Usaha, Wipawee; Barria, Javier A
2007-06-01
In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
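The TD-learning building block shared by both proposed methods is the one-step value backup. Below is a generic tabular TD(0) sketch on a two-state toy chain; the states, rewards, and step sizes are invented for illustration and do not model the paper's LEO network or its actor-critic scheme:

```python
import random

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) backup: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Toy two-state chain: admitting calls under low load earns revenue,
# admitting under high load incurs a blocking penalty (hypothetical values).
random.seed(0)
V = {"low_load": 0.0, "high_load": 0.0}
for _ in range(1000):
    s = random.choice(["low_load", "high_load"])
    r = 1.0 if s == "low_load" else -1.0
    td0_update(V, s, r, "low_load")
```

The learned values then rank states by long-run revenue, which is the quantity an admission policy would threshold on; no DP sweep over the full state space is ever needed.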
An efficient solver for large structured eigenvalue problems in relativistic quantum chemistry
NASA Astrophysics Data System (ADS)
Shiozaki, Toru
2017-01-01
We report an efficient program for computing the eigenvalues and symmetry-adapted eigenvectors of very large quaternionic (or Hermitian skew-Hamiltonian) matrices, using which structure-preserving diagonalisation of matrices of dimension N > 10,000 is now routine on a single computer node. Such matrices appear frequently in relativistic quantum chemistry owing to time-reversal symmetry. The implementation is based on a blocked version of the Paige-Van Loan algorithm, which allows us to use the Level 3 BLAS subroutines for most of the computations. Taking advantage of the symmetry, the program is faster by up to a factor of 2 than state-of-the-art implementations of complex Hermitian diagonalisation; diagonalising a 12,800 × 12,800 matrix took 42.8 (9.5) and 85.6 (12.6) minutes with 1 CPU core (16 CPU cores) using our symmetry-adapted solver and Intel Math Kernel Library's ZHEEV (which is not structure-preserving), respectively. The source code is publicly available under the FreeBSD licence.
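The structure being exploited here can be checked numerically in a few lines: a time-reversal-symmetric Hermitian matrix has the quaternionic block form H = [[A, B], [-B*, A*]] with A Hermitian and B antisymmetric, and its spectrum is doubly degenerate (Kramers pairs). The sketch below verifies that pairing with a generic dense solver; it is not the blocked Paige-Van Loan algorithm the paper implements.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2           # Hermitian diagonal block
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B - B.T) / 2                  # complex antisymmetric off-diagonal block

# Quaternionic (time-reversal-symmetric) Hermitian matrix in 2x2 block form.
H = np.block([[A, B], [-B.conj(), A.conj()]])
assert np.allclose(H, H.conj().T)  # Hermitian by construction

evals = np.linalg.eigvalsh(H)      # ascending order, so Kramers pairs are adjacent
assert np.allclose(evals[0::2], evals[1::2])
```

A structure-preserving solver diagonalises only one matrix of half the dimension and reconstructs each Kramers pair, which is the source of the factor-of-two advantage over ZHEEV reported above.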
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
A performance analysis of advanced I/O architectures for PC-based network file servers
NASA Astrophysics Data System (ADS)
Huynh, K. D.; Khoshgoftaar, T. M.
1994-12-01
In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of the same type, data capacity, and cost) operating independently rather than in parallel as in a disk array.
Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K
2017-09-15
Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion, while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and concentration within a block is not resolved, and the combined spreading effect is approximated using resolved quantities and macroscopic parameters. This applies whether the formation is modeled as homogeneous or discretized into homogeneous blocks, the emphasis here being on the latter. The process of dispersion is typically described through the Fickian model, i.e., the dispersive flux is proportional to the gradient of the resolved concentration, commonly with the Scheidegger parameterization, which is a particular way to compute the dispersion coefficients utilizing dispersivity coefficients. Although this parameterization is by far the most commonly used in solute transport applications, its validity has been questioned. Here, our goal is to investigate the effects of heterogeneity and mass transfer limitations on block-scale longitudinal dispersion and to evaluate under which conditions the Scheidegger parameterization is valid. We compute the relaxation time, or memory, of the system; changes in time with periods longer than the relaxation time gradually lead to a condition of local equilibrium under which dispersion is Fickian. The method we use requires the solution of a steady-state advection-dispersion equation, and thus is computationally efficient and applicable to any heterogeneous hydraulic conductivity K field without requiring statistical or structural assumptions. The method was validated by comparison with other approaches such as moment analysis and the first-order perturbation method.
We investigate the impact of heterogeneity, both in degree and structure, on the longitudinal dispersion coefficient and then discuss the role of local dispersion and mass transfer limitations, i.e., the exchange of mass between the permeable matrix and the low permeability inclusions. We illustrate the physical meaning of the method and we show how the block longitudinal dispersivity approaches, under certain conditions, the Scheidegger limit at large Péclet numbers. Lastly, we discuss the potential and limitations of the method to accurately describe dispersion in solute transport applications in heterogeneous aquifers. Copyright © 2017. Published by Elsevier B.V.
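For reference, the Scheidegger parameterization questioned in this work gives the longitudinal dispersion coefficient as a linear function of the resolved velocity magnitude. A minimal numeric sketch (the parameter values are arbitrary illustrations, not from the paper):

```python
def scheidegger_longitudinal(v, alpha_L, D_m=1e-9):
    """Scheidegger-type longitudinal dispersion: D_L = D_m + alpha_L * |v|.

    v       -- magnitude of the resolved (block-scale) velocity [m/s]
    alpha_L -- longitudinal dispersivity [m]
    D_m     -- molecular diffusion coefficient [m^2/s]
    """
    return D_m + alpha_L * abs(v)

# Example: 0.5 m dispersivity at a Darcy-scale velocity of 1e-5 m/s.
D_L = scheidegger_longitudinal(v=1e-5, alpha_L=0.5)
```

The paper's question is precisely when the block-scale dispersivity alpha_L in this linear law is well defined, i.e., when the heterogeneous block behaves Fickian at large Péclet numbers.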
Vibrational Spectroscopy and Astrobiology
NASA Technical Reports Server (NTRS)
Chaban, Galina M.; Kwak, D. (Technical Monitor)
2001-01-01
The role of vibrational spectroscopy in solving problems related to astrobiology will be discussed. Vibrational (infrared) spectroscopy is a very sensitive tool for identifying molecules. The theoretical approach used in this work is based on direct computation of anharmonic vibrational frequencies and intensities from electronic structure codes. One application of this computational technique is the possible identification of biological building blocks (amino acids, small peptides, DNA bases) in the interstellar medium (ISM). Identifying small biological molecules in the ISM is very important from the point of view of the origin of life. Hybrid (quantum mechanics/molecular mechanics) theoretical techniques will be discussed that may make it possible to obtain accurate vibrational spectra of biomolecular building blocks and to create a database of spectroscopic signatures that can assist observations of these molecules in space. Another application of the direct computational spectroscopy technique is to help design and analyze experimental observations of the ice surface of one of Jupiter's moons, Europa, which possibly contains hydrated salts. The presence of hydrated salts on the surface can be an indication of a subsurface ocean and the possible existence of life forms inhabiting such an ocean.
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware-implementable parallel algorithms, low-power and high-speed VLSI designs and building-block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high-throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network.
Hardware realization leads to a two-to-four-order-of-magnitude speed-up over conventional techniques and enables multiple (many-to-many) assignments not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four-order-of-magnitude speed-up over optimized software approaches has been demonstrated.
NASA Astrophysics Data System (ADS)
Huang, Hung-Wen; Huang, Jhi-Kai; Kuo, Shou-Yi; Lee, Kang-Yuan; Kuo, Hao-Chung
2010-06-01
In this paper, GaN-based LEDs with a nanoscale patterned sapphire substrate (NPSS) and a SiO2 photonic quasicrystal (PQC) structure on an n-GaN layer are fabricated using nanoimprint lithography and investigated. The light output power of the LED with a NPSS and a SiO2 PQC structure on an n-GaN layer was 48% greater than that of a conventional LED. The strong enhancement in output power is attributed to the better epitaxial quality and higher reflectance resulting from the NPSS and PQC structures. Transmission electron microscopy images reveal that threading dislocations are blocked or bent in the vicinity of the NPSS layer. These results show promising potential for increasing the output power of commercial light-emitting devices.
Structured Metal Film as Perfect Absorber
NASA Astrophysics Data System (ADS)
Xiong, Xiang; Jiang, Shang-Chi; Peng, Ru-Wen; Wang, Mu
2014-03-01
Using standing U-shaped resonators, a fish-spear-like resonator has been designed for the first time as the building block of perfect absorbers. The samples have been fabricated with a two-photon polymerization process, and FTIR measurement results support the effectiveness of the perfect-absorber design. In such a structure the polarization-dependent resonance occurs between the tines of the spears instead of, as in the conventional design, between metallic layers separated by a dielectric interlayer. The incident light neither transmits nor reflects back, which results in unit absorbance. The power of the light is trapped between the tines of the spears and finally absorbed. The whole structure is covered with a continuous metallic layer with good thermo-conductance, which provides an excellent approach to heat dissipation and is instructive for exploring metamaterial absorbers.
Rodrigo Pereira Jr.; Johan Zweede; Gregory P. Asner; Michael Keller
2002-01-01
We investigated ground and canopy damage and recovery following conventional logging and reduced-impact logging (RIL) of moist tropical forest in the eastern Amazon of Brazil. Paired conventional and RIL blocks were selectively logged with a harvest intensity of approximately 23 m³ ha⁻¹
1997-07-26
The first of two Pressurized Mating Adapters, or PMAs, for the International Space Station arrive in KSC’s Space Station Processing Facility in July. A PMA is a cone-shaped connector that will be attached to Node 1, the space station’s structural building block, during ground processing. The adapter will house space station computers and various electrical support equipment and eventually will serve as the passageway for astronauts between the node and the U.S-financed, Russian-built Functional Cargo Block. Node 1 with two adapters attached will be the first element of the station to be launched aboard the Space Shuttle Endeavour on STS-88 in July 1998
Nonlinear constitutive theory for turbine engine structural analysis
NASA Technical Reports Server (NTRS)
Thompson, R. L.
1982-01-01
A number of viscoplastic constitutive theories and a conventional constitutive theory are evaluated and compared in their ability to predict nonlinear stress-strain behavior in gas turbine engine components at elevated temperatures. Specific application of these theories is directed towards the structural analysis of combustor liners undergoing transient, cyclic, thermomechanical load histories. The combustor liner material considered in this study is Hastelloy X. The material constants for each of the theories (as a function of temperature) are obtained from existing, published experimental data. The viscoplastic theories and a conventional theory are incorporated into a general purpose, nonlinear, finite element computer program. Several numerical examples of combustor liner structural analysis using these theories are given to demonstrate their capabilities. Based on the numerical stress-strain results, the theories are evaluated and compared.
NASA Astrophysics Data System (ADS)
Montes, Carlos; Broussard, Kaylin; Gongre, Matthew; Simicevic, Neven; Mejia, Johanna; Tham, Jessica; Allouche, Erez; Davis, Gabrielle
2015-09-01
Future manned missions to the moon will require the ability to build structures using the moon's natural resources. The geopolymer binder described in this paper (Lunamer) is a construction material that consists of up to 98% lunar regolith, drastically reducing the amount of material that must be carried from Earth in the event of lunar construction. This material could be used to fabricate structural panels and interlocking blocks that have radiation shielding and thermal insulation characteristics. These panels and blocks could be used to construct living quarters and storage facilities on the lunar surface, or as shielding panels to be installed on crafts launched from the moon surface to deep-space destinations. Lunamer specimens were manufactured in the laboratory and compressive strength results of up to 16 MPa when cast with conventional methods and 37 MPa when cast using uniaxial pressing were obtained. Simulation results have shown that the mechanical and chemical properties of Lunamer allow for adequate radiation shielding for a crew inside the lunar living quarters without additional requirements.
Wilke, Scott A.; Antonios, Joseph K.; Bushong, Eric A.; Badkoobehi, Ali; Malek, Elmar; Hwang, Minju; Terada, Masako; Ellisman, Mark H.
2013-01-01
The hippocampal mossy fiber (MF) terminal is among the largest and most complex synaptic structures in the brain. Our understanding of the development of this morphologically elaborate structure has been limited because of the inability of standard electron microscopy techniques to quickly and accurately reconstruct large volumes of neuropil. Here we use serial block-face electron microscopy (SBEM) to surmount these limitations and investigate the establishment of MF connectivity during mouse postnatal development. Based on volume reconstructions, we find that MF axons initially form bouton-like specializations directly onto dendritic shafts, that dendritic protrusions primarily arise independently of bouton contact sites, and that a dramatic increase in presynaptic and postsynaptic complexity follows the association of MF boutons with CA3 dendritic protrusions. We also identify a transient period of MF bouton filopodial exploration, followed by refinement of sites of synaptic connectivity. These observations enhance our understanding of the development of this highly specialized synapse and illustrate the power of SBEM to resolve details of developing microcircuits at a level not easily attainable with conventional approaches. PMID:23303931
Toward a Global Bundle Adjustment of SPOT 5 - HRS Images
NASA Astrophysics Data System (ADS)
Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.
2012-07-01
The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with forward- and backward-looking telescopes observing the Earth at an angle of 20° ahead of and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images, and during this time the capacity for bundle adjustment of SPOT 5 - HRS spatial images has improved greatly. Today a single global block composed of about 20,000 images can be computed in reasonable calculation time. The progress came step by step: the first blocks computed comprised only 40 images, then bigger blocks were computed, and finally a single global block is now computed. At the same time the calculation tools have improved: for example, the adjustment of 2,000 images of North Africa now takes about 2 minutes, whereas 8 hours were needed two years ago. To reach such a result, new stand-alone software was developed to compute fast and efficient bundle adjustments. The equipment - GCPs (Ground Control Points) and tie points - and the associated techniques have also evolved over the last 10 years. Studies were made to derive recommendations about the equipment needed to make an accurate single block. Tie points can now be computed quickly and automatically with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block is accurate to within a few meters, whereas non-adjusted images are only accurate to about 15 m. This paper describes the methods used in IGN Espace to compute a single global block composed of almost 20,000 HRS images, 500 GCPs and several million tie points in reasonable calculation time. Such a block has many advantages.
Because the global block is unique, it becomes easier to manage the history and successive evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages historically calculated from different blocks. Extrapolation far from GCPs at the limits of images is no longer performed. Using the global block as a reference will allow new images from other sources to be easily located against this reference.
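The block-adjustment idea above can be illustrated with a toy 1-D problem (all numbers invented, and the real adjustment estimates full sensor models, not scalar biases): each image has an unknown location bias, tie points constrain pairs of images to agree, and a GCP anchors the block absolutely. This reduces to a sparse linear least-squares system.

```python
import numpy as np

n_imgs = 5
true_bias = np.array([12.0, -7.0, 4.0, 9.0, -3.0])   # metres, invented

# tie points: images i and j observe the same ground point, so the
# measured discrepancy equals b_i - b_j (noiseless here for clarity)
ties = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
A = np.zeros((len(ties) + 1, n_imgs))
y = np.zeros(len(ties) + 1)
for k, (i, j) in enumerate(ties):
    A[k, i], A[k, j] = 1.0, -1.0
    y[k] = true_bias[i] - true_bias[j]
# one GCP observed in image 0 fixes the absolute datum of the block
A[-1, 0] = 1.0
y[-1] = true_bias[0]

est = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(est, true_bias))   # → True
```

Without the GCP row the system is rank-deficient (any constant shift of all biases fits the tie points), which is why a block needs at least some absolute control.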
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
NASA Astrophysics Data System (ADS)
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
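The core CS step — l1-regularized least squares with sparsity in a DCT basis — can be sketched on a small synthetic problem. This uses ISTA (iterative soft thresholding) rather than the paper's primal-dual interior-point solver, and the sensitivity matrix, sizes and regularization weight are invented for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, k = 64, 24                          # model size, number of measurements
G = rng.standard_normal((k, n))        # stand-in sensitivity (Jacobian) matrix

c_true = np.zeros(n)                   # model is sparse in the DCT basis
c_true[[1, 5, 11]] = [3.0, -2.0, 1.5]
m_true = idct(c_true, norm='ortho')
d = G @ m_true                         # noiseless synthetic data

lam = 0.05                             # l1 weight, illustrative only
L = np.linalg.norm(G, 2) ** 2          # step-size bound (DCT is orthonormal)
c = np.zeros(n)
for _ in range(3000):                  # ISTA: gradient step, then soft threshold
    r = G @ idct(c, norm='ortho') - d
    c = c - dct(G.T @ r, norm='ortho') / L
    c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)

m_rec = idct(c, norm='ortho')
rel_err = np.linalg.norm(m_rec - m_true) / np.linalg.norm(m_true)
print(rel_err)
```

With 24 measurements of a 64-cell model the system is underdetermined, yet the 3-sparse DCT representation is recovered to small relative error — the mechanism the abstract credits for preserving sharp fronts with fewer data.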
Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2016-01-01
This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta scheme and a spatially fifth-order accurate WENO-5Z scheme were used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement (AMR) framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses the irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.
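The cost-based dynamic load balancing described above can be sketched with a toy greedy scheduler (this is not the SOFIA implementation; the costs, block counts and longest-processing-time heuristic are illustrative assumptions): blocks containing boundaries are measured to be more expensive, and heavier blocks are placed first on the least-loaded core.

```python
import heapq

def balance(block_costs, n_cores):
    """Greedy LPT assignment: heaviest blocks first, onto the least-loaded core."""
    heap = [(0.0, core, []) for core in range(n_cores)]   # (load, core id, blocks)
    heapq.heapify(heap)
    for idx in sorted(range(len(block_costs)), key=lambda i: -block_costs[i]):
        load, core, blocks = heapq.heappop(heap)
        blocks.append(idx)
        heapq.heappush(heap, (load + block_costs[idx], core, blocks))
    return heap

# 20 interior blocks of unit cost, 4 boundary blocks measured 3x slower
costs = [1.0] * 20 + [3.0] * 4
cores = balance(costs, 4)
loads = sorted(load for load, _, _ in cores)
print(loads)   # → [8.0, 8.0, 8.0, 8.0]
```

Balancing on measured per-block time rather than block count is what absorbs the irregular cost of boundary-containing blocks.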
Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak
2015-01-01
This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta scheme and a spatially fifth-order accurate WENO-5Z scheme were used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement (AMR) framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses the irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem-cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high-Reynolds-number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamics (CFD) codes through the use of solution-adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive-grid feature-detection algorithm be developed to recognize flow structures of different types as well as differing intensity, and to adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh-redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach 2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
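The idea of a weight function driving mesh redistribution can be shown in a minimal 1-D sketch (not the paper's scheme; the arc-length-type weight and the sharp-layer test function are illustrative choices): nodes are moved so that every cell carries an equal share of the integrated weight.

```python
import numpy as np

def redistribute(x, u, alpha=1.0):
    """Move nodes of grid x so cells equidistribute w = sqrt(1 + alpha*u_x^2)."""
    ux = np.gradient(u, x)
    w = np.sqrt(1.0 + alpha * ux**2)             # arc-length-type weight
    # cumulative weight by the trapezoidal rule, then invert it
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], len(x))    # equal weight per cell
    return np.interp(targets, W, x)

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(20.0 * (x - 0.5))                    # sharp layer at x = 0.5
x_new = redistribute(x, u, alpha=25.0)
# cells shrink near the layer and grow in the smooth regions
print(np.diff(x_new).min() < np.diff(x).min())
```

The same equidistribution principle generalizes to multi-block grids, where the paper's contribution is normalizing the weight consistently across blocks.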
A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint
NASA Astrophysics Data System (ADS)
Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru
Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach outperforms many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
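The block-DCT feature extraction described above can be sketched as follows: each 8×8 block is transformed with the type-II DCT and the leading coefficients in JPEG zigzag order are kept. The block size and the number of retained coefficients are parameters; 15 coefficients per block is an assumption here, not a value from the paper.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG zigzag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_dct_features(img, block=8, n_coeffs=15):
    """Concatenate the first n_coeffs zigzag DCT coefficients of every block."""
    h, w = img.shape
    idx = zigzag_indices(block)[:n_coeffs]
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            B = dctn(img[r:r+block, c:c+block], norm='ortho')  # type-II DCT
            feats.extend(B[i, j] for i, j in idx)
    return np.array(feats)

img = np.random.default_rng(1).random((32, 32))   # stand-in for a palm region
v = block_dct_features(img)
print(v.shape)   # 16 blocks, 15 low/mid-frequency coefficients each
```

The zigzag order visits coefficients by increasing spatial frequency, which is why truncating it keeps the low-to-medium frequency content the abstract describes.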
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
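The distinctive refinement rule above — spawning one to four child blocks independently rather than all four at once — can be sketched with a toy quadtree (the data structures and the circular-interface indicator are invented for illustration, not taken from the paper):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Block:
    x0: float
    y0: float
    size: float
    level: int
    children: list = field(default_factory=list)

def near_interface(x0, y0, size):
    """Indicator: is this block near a circular phase interface of radius 0.3?"""
    cx, cy = x0 + size / 2.0, y0 + size / 2.0
    return abs(math.hypot(cx, cy) - 0.3) < size

def refine(block, indicator, max_level):
    """Spawn only those child quadrants whose own indicator fires."""
    if block.level >= max_level:
        return
    half = block.size / 2.0
    for dx in (0.0, half):
        for dy in (0.0, half):
            if indicator(block.x0 + dx, block.y0 + dy, half):
                child = Block(block.x0 + dx, block.y0 + dy, half, block.level + 1)
                block.children.append(child)
                refine(child, indicator, max_level)

root = Block(0.0, 0.0, 1.0, 0)
refine(root, near_interface, max_level=3)
print(len(root.children))   # → 3: the quadrant far from the interface is not spawned
```

Because quadrants far from the interface are simply never created, the refined mesh concentrates on the interface region, which is the memory saving the abstract reports.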
2012-09-01
allowing it to dry or baking it in a kiln. A modern factory would take a block of raw material and then use machinery to pare away unnecessary... conventional “subtractive manufacturing”—taking a block of raw material and removing excess until the finished product remains—the process as a whole...is more efficient and less wasteful. Another major benefit of AM is the fact that complexity is “free.” In conventional manufacturing, increasing
Matsumoto, Shinnosuke; Koba, Yusuke; Kohno, Ryosuke; Lee, Choonsik; Bolch, Wesley E; Kai, Michiaki
2016-04-01
Proton therapy has the physical advantage of a Bragg peak that can provide a better dose distribution than conventional x-ray therapy. However, radiation exposure of normal tissues cannot be ignored because it is likely to increase the risk of secondary cancer. Evaluating the secondary neutrons generated by the interaction of the proton beam with the treatment beam-line structure is therefore necessary for optimizing radiation protection in proton therapy. In this research, organ doses and energy spectra from secondary neutrons were calculated using Monte Carlo simulations. The Monte Carlo code known as the Particle and Heavy Ion Transport code System (PHITS) was used to simulate proton transport and its interaction with the treatment beam-line structure, modeled on the double-scattering body of the treatment nozzle at the National Cancer Center Hospital East. The organ doses in a hybrid computational phantom simulating a 5-y-old boy were calculated. In general, secondary neutron doses were found to decrease with increasing distance from the treatment field. Secondary neutron energy spectra were characterized by incident neutrons with three energy peaks: 1×10, 1, and 100 MeV. A block collimator and a patient collimator contributed significantly to organ doses. In particular, the secondary neutrons from the patient collimator were 30 times higher than those from the first scatterer. These results suggest that proactive protection will be required in the design of treatment beam-line structures and that organ doses from secondary neutrons may thereby be reduced.
NASA Astrophysics Data System (ADS)
Xia, Qiang-sheng; Ding, Hong-ming; Ma, Yu-qiang
2018-03-01
Efficient delivery of nanoparticles into specific cell interiors is of great importance in biomedicine. Recently, the pH-responsive micelle has emerged as one potential nanocarrier for this purpose, since there are obvious pH differences between normal tissues and tumors. Herein, by using dissipative particle dynamics simulation, we investigate the interaction with the cell membrane of pH-sensitive triblock copolymer micelles composed of a ligand (L), a hydrophobic block (C) and a polyelectrolyte block (P). It is found that structural rearrangement of the micelle can facilitate its penetration into the lower leaflet of the bilayer. However, when the ligand-receptor specific interaction is weak, the micelles may just fuse with the upper leaflet of the bilayer. Moreover, the ionization degree of the polyelectrolyte block and the length of the hydrophobic block also play a vital role in the penetration efficiency. Further, when the sequence of the L, P and C beads in the copolymers is changed, the translocation pathway of the micelles may change from direct penetration to Janus engulfment. The present study reveals the relationship between the molecular structure of the copolymer and the uptake of the pH-sensitive micelles, which may give significant insights into the experimental design of responsive micellar nanocarriers for highly efficient cellular delivery.
NASA Astrophysics Data System (ADS)
Selwyn, Ebenezer Juliet; Florinabel, D. Jemi
2018-04-01
Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, using metrics such as classification speed, precision and recall rate. Block-based classification approaches normally divide compound images into fixed-size, non-overlapping blocks. Frequency transforms such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are then applied to each block. Mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation provides an improvement in recall rate and precision of approximately 2.3% over DCT-based segmentation, at the cost of increased block-classification time, for both smooth and complex background images.
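The block-statistics step described above can be sketched as follows. For brevity this computes the mean and standard deviation directly on raw 8×8 pixel blocks rather than on DCT/DWT coefficients, and the threshold rule and synthetic image are invented for illustration, not the paper's classifier.

```python
import numpy as np

def block_stats(img, block=8):
    """Mean and standard deviation of each non-overlapping block."""
    h, w = img.shape
    stats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            B = img[r:r+block, c:c+block].astype(float)
            stats.append((B.mean(), B.std()))
    return np.array(stats)

# synthetic compound image: flat "background" half, binary "text-like" half
rng = np.random.default_rng(2)
img = np.full((16, 32), 200.0)
img[:, 16:] = rng.choice([0.0, 255.0], size=(16, 16))

stats = block_stats(img)
# text blocks have high contrast, hence large standard deviation
labels = ['text' if s > 50 else 'background' for _, s in stats]
print(labels)
```

High within-block variance is the cue that separates sharp-edged text/graphics blocks from smooth picture/background blocks; the transform-domain statistics in the paper refine the same idea.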
Comparative study between manual injection intraosseous anesthesia and conventional oral anesthesia.
Peñarrocha-Oltra, D; Ata-Ali, J; Oltra-Moscardó, M-J; Peñarrocha-Diago, M-A; Peñarrocha, M
2012-03-01
To compare intraosseous anesthesia (IA) with conventional oral anesthesia techniques, a single-blind, prospective clinical study was carried out. Each patient underwent two anesthetic techniques: conventional (local infiltration and locoregional anesthetic block) and intraosseous, for respective dental operations. In order to allow comparison of IA versus conventional anesthesia, the two operations were similar and affected the same two teeth in opposite quadrants. A total of 200 oral anesthetic procedures were carried out in 100 patients. The mean patient age was 28.6±9.92 years. Fifty-five vestibular infiltrations and 45 mandibular blocks were performed. All patients were also subjected to IA. The type of intervention (conservative or endodontic) exerted no significant influence (p=0.58 and p=0.62, respectively). The latency period was 8.52±2.44 minutes for the conventional techniques and 0.89±0.73 minutes for IA, the difference being statistically significant (p<0.05). Regarding the duration of anesthetic sensation, the infiltrative techniques lasted a maximum of one hour, the inferior alveolar nerve blocks lasted between 1-3 hours, and IA lasted only 2.5 minutes, the differences being statistically significant (p≤0.0000, Φ=0.29). Anesthetic success was recorded in 89% of the conventional procedures and in 78% of the IA procedures. Most patients preferred IA (61%) (p=0.0032). The two anesthetic procedures were compared for latency, duration of anesthetic effect, anesthetic success rate and patient preference. Intraosseous anesthesia has been shown to be a technique worth taking into account when planning conservative and endodontic treatments.
Multi-purpose greenhouse of changeable geometry (MGCG)
NASA Astrophysics Data System (ADS)
Kordium, V.; Kornejchuk, A.
Within the framework of the scientific program of the National Space Agency of Ukraine, a multi-purpose greenhouse is being developed. It is intended for biological and biotechnological experiments, as well as for growing fast-growing vegetable cultures to enrich the crew's diet and provide positive psychological benefit during long-term flights on the International Space Station or other spacecraft. The main principle of the greenhouse design is the use of unified modules, whose sets and combinations allow different greenhouse configurations to be assembled. The minimal structural unit, suitable either for building the full configuration or for autonomous operation, is a carrying composite platform (CCP). The experimental vegetative module (EVM) and a module maintaining the required microclimate inside the EVM are mounted on the CCP. The number of these modules and their configuration depend on the number and complexity of the tasks to be solved, as well as on their duration. These modules form the experimental block. Four much larger modules, which form the experimental-technological block, are used for further study of objectives validated in the experimental block. The technologies developed for growing plants are used in the third, technological block, which is one large vegetative module. All three greenhouse blocks can be resized in three dimensions and can function either within the complete greenhouse structure or autonomously. Control is performed from an on-board computer or, if necessary, from automation devices placed on the blocks' front panels. All three blocks can be pulled out along the guide base into the station passageway, giving free access to the base modules, convenient handling, and good visibility.
Arnuntasupakul, Vanlapa; Van Zundert, Tom C R V; Vijitpavan, Amorn; Aliste, Julian; Engsusophon, Phatthanaphol; Leurcharusmee, Prangmalee; Ah-Kye, Sonia; Finlayson, Roderick J; Tran, De Q H
2016-01-01
Epidural waveform analysis (EWA) provides a simple confirmatory adjunct for loss of resistance (LOR): when the needle tip is correctly positioned inside the epidural space, pressure measurement results in a pulsatile waveform. In this randomized trial, we compared conventional and EWA-confirmed LOR in 2 teaching centers. Our research hypothesis was that EWA-confirmed LOR would decrease the failure rate of thoracic epidural blocks. One hundred patients undergoing thoracic epidural blocks for thoracic surgery, abdominal surgery, or rib fractures were randomized to conventional LOR or EWA-LOR. The operator was allowed as many attempts as necessary to achieve a satisfactory LOR (by feel) in the conventional group. In the EWA-LOR group, LOR was confirmed by connecting the epidural needle to a pressure transducer using a rigid extension tubing. Positive waveforms indicated that the needle tip was positioned inside the epidural space. The operator was allowed a maximum of 3 different intervertebral levels to obtain a positive waveform. If waveforms were still absent at the third level, the operator simply accepted LOR as the technical end point. However, the patient was retained in the EWA-LOR group (intent-to-treat analysis). After achieving a satisfactory tactile LOR (conventional group), positive waveforms (EWA-LOR group), or a third intervertebral level with LOR but no waveform (EWA-LOR group), the operator administered a 4-mL test dose of lidocaine 2% with epinephrine 5 μg/mL. Fifteen minutes after the test dose, a blinded investigator assessed the patient for sensory block to ice. Compared with conventional LOR, EWA-LOR resulted in a lower rate of primary failure (2% vs 24%; P = 0.002). Subgroup analysis based on experience level reveals that EWA-LOR outperformed conventional LOR for novice (P = 0.001) but not expert operators. The performance time was longer in the EWA-LOR group (11.2 ± 6.2 vs 8.0 ± 4.6 minutes; P = 0.006). 
Both groups were comparable in terms of operator's level of expertise, depth of the epidural space, approach, and LOR medium. In the EWA-LOR group, operators obtained a pulsatile waveform with the first level attempted in 60% of patients. However, 40% of subjects required performance at a second or third level. Compared with its conventional counterpart, EWA-confirmed LOR results in a lower failure rate for thoracic epidural blocks (2% vs 24%) in our teaching centers. Confirmatory EWA provides significant benefits for inexperienced operators.
ATLAS, an integrated structural analysis and design system. Volume 2: System design document
NASA Technical Reports Server (NTRS)
Erickson, W. J. (Editor)
1979-01-01
ATLAS is a structural analysis and design system, operational on the Control Data Corporation 6600/CYBER computers. The overall system design, the design of the individual program modules, and the routines in the ATLAS system library are described. The overall design is discussed in terms of system architecture, executive function, data base structure, user program interfaces and operational procedures. The program module sections include detailed code description, common block usage and random access file usage. The description of the ATLAS program library includes all information needed to use these general purpose routines.
Cascaded spintronic logic with low-dimensional carbon
NASA Astrophysics Data System (ADS)
Friedman, Joseph S.; Girdhar, Anuj; Gelfand, Ryan M.; Memik, Gokhan; Mohseni, Hooman; Taflove, Allen; Wessels, Bruce W.; Leburton, Jean-Pierre; Sahakian, Alan V.
2017-06-01
Remarkable breakthroughs have established the functionality of graphene and carbon nanotube transistors as replacements for silicon in conventional computing structures, and numerous spintronic logic gates have been presented. However, an efficient cascaded logic structure that exploits electron spin has not yet been demonstrated. In this work, we introduce and analyse a cascaded spintronic computing system composed solely of low-dimensional carbon materials. We propose a spintronic switch based on the recent discovery of negative magnetoresistance in graphene nanoribbons, and demonstrate its feasibility through tight-binding calculations of the band structure. Covalently connected carbon nanotubes create magnetic fields through graphene nanoribbons, cascading logic gates through incoherent spintronic switching. The exceptional material properties of carbon permit terahertz operation and a two-orders-of-magnitude decrease in power-delay product compared to cutting-edge microprocessors. We hope to inspire the fabrication of these cascaded logic circuits to stimulate a transformative generation of energy-efficient computing.
15 CFR Supplement No. 2 to Part 752 - Instructions for Completing Form BIS-748P-A, “Item Annex”
Code of Federal Regulations, 2014 CFR
2014-01-01
... within the lines for each block or box. Block 1: Application Control No. Enter the application control... or reexport a computer or equipment that contains a computer. Instructions on calculating the APP are... processing of your application. Block 24: Continuation of Additional Information. Enter any identifying...
15 CFR Supplement No. 2 to Part 752 - Instructions for Completing Form BIS-748P-B, “Item Annex”
Code of Federal Regulations, 2012 CFR
2012-01-01
... within the lines for each block or box. Block 1: Application Control No. Enter the application control... or reexport a computer or equipment that contains a computer. Instructions on calculating the APP are... processing of your application. Block 24: Continuation of Additional Information. Enter any identifying...
15 CFR Supplement No. 2 to Part 752 - Instructions for Completing Form BIS-748P-B, “Item Annex”
Code of Federal Regulations, 2013 CFR
2013-01-01
... within the lines for each block or box. Block 1: Application Control No. Enter the application control... or reexport a computer or equipment that contains a computer. Instructions on calculating the APP are... processing of your application. Block 24: Continuation of Additional Information. Enter any identifying...
15 CFR Supplement No. 2 to Part 752 - Instructions for Completing Form BIS-748P-B, “Item Annex”
Code of Federal Regulations, 2011 CFR
2011-01-01
... within the lines for each block or box. Block 1: Application Control No. Enter the application control... or reexport a computer or equipment that contains a computer. Instructions on calculating the APP are... processing of your application. Block 24: Continuation of Additional Information. Enter any identifying...
Holographic entanglement and Poincaré blocks in three-dimensional flat space
NASA Astrophysics Data System (ADS)
Hijano, Eliot; Rabideau, Charles
2018-05-01
We propose a covariant prescription to compute holographic entanglement entropy and Poincaré blocks (Global BMS blocks) in the context of three-dimensional Einstein gravity in flat space. We first present a prescription based on worldline methods in the probe limit, inspired by recent analog calculations in AdS/CFT. Building on this construction, we propose a full extrapolate dictionary and use it to compute holographic correlators and blocks away from the probe limit.
Aquaporin-Based Biomimetic Polymeric Membranes: Approaches and Challenges
Habel, Joachim; Hansen, Michael; Kynde, Søren; Larsen, Nanna; Midtgaard, Søren Roi; Jensen, Grethe Vestergaard; Bomholt, Julie; Ogbonna, Anayo; Almdal, Kristoffer; Schulz, Alexander; Hélix-Nielsen, Claus
2015-01-01
In recent years, aquaporin biomimetic membranes (ABMs) for water separation have gained considerable interest. Although the first ABMs are commercially available, there are still many challenges associated with further ABM development. Here, we discuss the interplay of the main components of ABMs: aquaporin proteins (AQPs), block copolymers for AQP reconstitution, and polymer-based supporting structures. First, we briefly cover challenges and review recent developments in understanding the interplay between AQP and block copolymers. Second, we review some experimental characterization methods for investigating AQP incorporation including freeze-fracture transmission electron microscopy, fluorescence correlation spectroscopy, stopped-flow light scattering, and small-angle X-ray scattering. Third, we focus on recent efforts in embedding reconstituted AQPs in membrane designs that are based on conventional thin film interfacial polymerization techniques. Finally, we describe some new developments in interfacial polymerization using polyhedral oligomeric silsesquioxane cages for increasing the physical and chemical durability of thin film composite membranes. PMID:26264033
Ambient Cured Alkali Activated Flyash Masonry Units
NASA Astrophysics Data System (ADS)
Venugopal, K.; Radhakrishna; Sasalatti, Vinod M.
2016-09-01
Geopolymers belong to a category of non-conventional, non-Portland-cement cementitious binders which are produced using industrial by-products such as fly ash and ground granulated blast furnace slag (GGBFS). This paper reports on the development of geopolymer mortars for the production of masonry units. The geopolymer mortars were prepared by mixing various by-products with manufactured sand and a liquid mixture of sodium silicate and sodium hydroxide solutions. After curing at ambient conditions, the masonry units were tested for properties such as water absorption, initial rate of absorption, compressive strength, shear bond strength, and stress-strain behaviour. It was observed that the flexural strength of the blocks is more than 2 MPa and the shear bond strength is more than 0.4 MPa. The properties of the geopolymer blocks were found to be superior to those of traditional masonry units; hence they can be recommended for structural masonry.
Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN
NASA Astrophysics Data System (ADS)
Pradhan, Nandita; Sinha, A. K.
2008-03-01
This paper presents an efficient region-based segmentation technique for detecting pathological tissues (tumor and edema) of the brain using fluid attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work segments FLAIR brain images into normal and pathological tissues based on statistical features and wavelet transform coefficients using the k-means algorithm. The image is divided into small blocks of 4×4 pixels. The k-means algorithm is used to cluster the image based on the feature vectors of the blocks, forming different classes that represent different regions in the whole image. With the knowledge of the feature vectors of the different segmented regions, a supervised technique is used to train an artificial neural network using the fuzzy back propagation algorithm (FBPA). Segmentation for detecting healthy tissues and tumors has been reported by several researchers using conventional MRI sequences such as T1-, T2- and PD-weighted sequences. This work successfully presents segmentation of healthy and pathological tissues (both tumors and edema) using FLAIR images. Finally, pseudo-coloring of the segmented and classified regions is done for better human visualization.
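The block-wise clustering step described above can be sketched as follows. This is an illustrative toy: the 4×4 block size matches the paper, but the feature choice (block statistics plus a single-level Haar approximation), k, and the deterministic k-means initialization are assumptions, not the authors' exact settings.

```python
import numpy as np

def block_features(img, bs=4):
    """Split a grayscale image into bs x bs blocks; one feature vector per block."""
    h, w = img.shape
    feats = []
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            b = img[i:i+bs, j:j+bs].astype(float)
            # statistical features plus a crude single-level Haar approximation band
            approx = 0.25 * (b[0::2, 0::2] + b[1::2, 0::2] + b[0::2, 1::2] + b[1::2, 1::2])
            feats.append([b.mean(), b.std(), approx.mean(), approx.std()])
    return np.array(feats)

def kmeans(X, k=2, iters=20):
    """Plain Lloyd iteration with a deterministic spread of initial centers."""
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

img = np.zeros((32, 32))
img[8:24, 8:24] = 200.0          # toy bright "lesion" on a dark background
X = block_features(img)           # 64 blocks, 4 features each
labels = kmeans(X, k=2)           # cluster labels per block, one region per class
```

In the paper the per-region feature vectors would then train the ANN classifier; here the labels alone already separate lesion blocks from background blocks.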
NASA Technical Reports Server (NTRS)
Hajela, P.; Chen, J. L.
1986-01-01
The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
Bae, Dae Kyung; Song, Sang Jun; Kim, Kang Il; Hur, Dong; Jeong, Ho Yeon
2016-03-01
The purpose of the present study was to compare the clinical and radiographic results and survival rates between computer-assisted and conventional closing wedge high tibial osteotomies (HTOs). Data from a consecutive cohort comprised of 75 computer-assisted HTOs and 75 conventional HTOs were retrospectively reviewed. The Knee Society knee and function scores, Hospital for Special Surgery (HSS) score and femorotibial angle (FTA) were compared between the two groups. Survival rates were also compared with respect to procedure failure. The knee and function scores at one year postoperatively were slightly better in the computer-assisted group than in the conventional group (90.1 vs. 86.1; 82.0 vs. 76.0). The HSS scores at one year postoperatively were slightly better for the computer-assisted HTOs than for the conventional HTOs (89.5 vs. 81.8). The inlier rate of the postoperative FTA was higher in the computer-assisted group than in the conventional HTO group (88.0% vs. 58.7%), and the mean postoperative FTA was greater in the computer-assisted group than in the conventional HTO group (valgus 9.0° vs. valgus 7.6°, p<0.001). The five- and 10-year survival rates were 97.1% and 89.6%, respectively. No difference was detected in nine-year survival rates (p=0.369) between the two groups, although the clinical and radiographic results were better in the computer-assisted group than in the conventional HTO group. Mid-term survival rates did not differ between computer-assisted and conventional HTOs. A comparative analysis of longer-term survival rates is required to demonstrate the long-term benefit of computer-assisted HTO. III. Copyright © 2015 Elsevier B.V. All rights reserved.
Politis, Argyris; Schmidt, Carla
2018-03-20
Structural mass spectrometry with its various techniques is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly it can deal with heterogeneous mixtures and assemblies which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enable structural models at molecular levels of resolution. These models hold significant potential for helping us in characterizing the function of protein assemblies related to human health and disease. In this review we summarise the techniques of structural mass spectrometry often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from literature that helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction into various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.
2017-04-01
In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(N_b^2) in the size of the atomic orbitals basis set, N_b, instead of the practically intractable O(N_b^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities is of almost the same magnitude as the number of occupied orbitals in the molecular systems, N_o.
Bakshi, Mandeep Singh
2014-11-01
Targeted drug delivery methodology is becoming increasingly important to overcome the shortcomings of the conventional drug delivery absorption method. It improves the action time with uniform distribution and poses minimal side effects, but is usually difficult to design to achieve the desired results. Economically favorable, environmentally friendly, multifunctional, and easy to design, hybrid nanomaterials have demonstrated their enormous potential as target drug delivery vehicles. A combination of both micelles and nanoparticles makes them fine target delivery vehicles in a variety of biological applications where precision is primarily required to achieve the desired results, as in the case of cytotoxicity of cancer cells, chemotherapy, and computed tomography guided radiation therapy. Copyright © 2014 Elsevier B.V. All rights reserved.
Marolf, Angela; Blaik, Margaret; Ackerman, Norman; Watson, Elizabeth; Gibson, Nicole; Thompson, Margret
2008-01-01
The role of digital imaging is increasing as these systems are becoming more affordable and accessible. Advantages of computed radiography compared with conventional film/screen combinations include improved contrast resolution and postprocessing capabilities. Computed radiography's spatial resolution is inferior to conventional radiography; however, this limitation is considered clinically insignificant. This study prospectively compared digital imaging and conventional radiography in detecting small volume pneumoperitoneum. Twenty cadaver dogs (15-30 kg) were injected intra-abdominally with sequential volumes of 0.25, 0.25, and 0.5 ml of air (1 ml total) and radiographed after each injection using computed and conventional radiographic technologies. Three radiologists independently evaluated the images, and receiver operating characteristic (ROC) analysis was used to compare the two imaging modalities. There was no statistical difference between computed and conventional radiography in detecting free abdominal air, but overall computed radiography was relatively more sensitive based on ROC analysis. Computed radiographic images consistently and significantly demonstrated a minimal air volume of 0.5 ml based on ROC analysis. However, no minimal air amount was consistently or significantly detected with conventional film. Readers were more likely to detect free air on lateral computed images than on the other projections, with no significant increase in sensitivity between film/screen projections. Further studies are indicated to determine the differences, or lack thereof, between various digital imaging systems and conventional film/screen systems.
Mixture design and treatment methods for recycling contaminated sediment.
Wang, Lei; Kwok, June S H; Tsang, Daniel C W; Poon, Chi-Sun
2015-01-01
Conventional marine disposal of contaminated sediment presents a significant financial and environmental burden. This study aimed to recycle contaminated sediment by assessing the roles and integration of binder formulation, sediment pretreatment, curing method, and waste inclusion in stabilization/solidification. The results demonstrated that sediment blocks produced with coal fly ash and lime partially replacing cement at a binder-to-sediment ratio of 3:7 attained sufficient 28-d compressive strength for use as fill materials in construction. The X-ray diffraction analysis revealed that hydration products (calcium hydroxide) were difficult to form at high sediment content. Thermal pretreatment of sediment removed 90% of indigenous organic matter, significantly increased the compressive strength, and enabled reuse as non-load-bearing masonry units. Besides, 2-h CO2 curing accelerated early-stage carbonation inside the porous structure, sequestered 5.6% of CO2 (by weight) in the sediment blocks, and achieved strength comparable to 7-d curing. Thermogravimetric analysis indicated substantial weight loss corresponding to decomposition of poorly and well crystalline calcium carbonate. Moreover, partial replacement of contaminated sediment by various granular waste materials notably augmented the strength of the sediment blocks. The metal leachability of the sediment blocks was minimal and acceptable for reuse. These results suggest that contaminated sediment should be viewed as a useful resource. Copyright © 2014 Elsevier B.V. All rights reserved.
Thermal convection of liquid metal in the titanium reduction reactor
NASA Astrophysics Data System (ADS)
Teimurazov, A.; Frick, P.; Stefani, F.
2017-06-01
The structure of the convective flow of molten magnesium in a metallothermic titanium reduction reactor has been studied numerically in a three-dimensional non-stationary formulation with conjugated heat transfer between liquid magnesium and solids (steel walls of the cavity and titanium block). A nonuniform computational mesh with a total of 3.7 million grid points was used. The Large Eddy Simulation technique was applied to take into account the turbulence in the liquid phase. The instantaneous and average characteristics of the process and the velocity and temperature pulsation fields are analyzed. The simulations have been performed for three specific heating regimes: with furnace heaters operating at full power, with furnace heaters switched on at the bottom of the vessel only, and with switched-off furnace heaters. It is shown that the localization of the cooling zone can completely reorganize the structure of the large-scale flow. Therefore, by changing heating regimes, it is possible to influence the flow structure for the purpose of creating the most favorable conditions for the reaction. It is also shown that the presence of the titanium block strongly affects the flow structure.
Reynolds-Averaged Navier-Stokes Simulations of Two Partial-Span Flap Wing Experiments
NASA Technical Reports Server (NTRS)
Takalluk, M. A.; Laflin, Kelly R.
1998-01-01
Structured Reynolds-Averaged Navier-Stokes simulations of two partial-span flap wing experiments were performed. The high-lift aerodynamic and aeroacoustic wind-tunnel experiments were conducted at both the NASA Ames 7- by 10-Foot Wind Tunnel and the NASA Langley Quiet Flow Facility. The purpose of these tests was to accurately document the acoustic and aerodynamic characteristics associated with the principal airframe noise sources, including flap side-edge noise. Specific measurements were taken that can be used to validate analytic and computational models of the noise sources and the associated aerodynamics for configurations and conditions approximating flight for transport aircraft. The numerical results are used both to calibrate a widely used CFD code, CFL3D, and to obtain details of flap side-edge flow features not discernible from experimental observations. Both experimental set-ups were numerically modeled by using multiple-block structured grids. Various turbulence models, grid block-interface interaction methods and grid topologies were implemented. Numerical results of both simulations are in excellent agreement with experimental measurements and flow visualization observations. The flow field in the flap-edge region was adequately resolved to discern some crucial information about the flow physics and to substantiate the merger of the two vortical structures. As a result of these investigations, airframe noise modelers have proposed various simplified models which use the results obtained from the steady-state computations as input.
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlap must be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are directly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By avoiding the image domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
Design and synthesis of emodin derivatives as novel inhibitors of ATP-citrate lyase.
Koerner, Steffi K; Hanai, Jun-Ichi; Bai, Sha; Jernigan, Finith E; Oki, Miwa; Komaba, Chieko; Shuto, Emi; Sukhatme, Vikas P; Sun, Lijun
2017-01-27
Aberrant cellular metabolism drives cancer proliferation and metastasis. ATP citrate lyase (ACL) plays a critical role in generating cytosolic acetyl CoA, a key building block for de novo fatty acid and cholesterol biosynthesis. ACL is overexpressed in cancer cells, and siRNA knockdown of ACL limits cancer cell proliferation and reduces cancer stemness. We characterized a new class of ACL inhibitors bearing the key structural feature of the natural product emodin. A structure-activity relationship (SAR) study led to the identification of 1d as a potent lead that demonstrated dose-dependent inhibition of proliferation and cancer stemness of the A549 lung cancer cell line. Computational modeling indicates this class of inhibitors occupies an allosteric binding site and blocks the entrance of the substrate citrate to its binding site. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Guiding principles for peptide nanotechnology through directed discovery.
Lampel, A; Ulijn, R V; Tuttle, T
2018-05-21
Life's diverse molecular functions are largely based on only a small number of highly conserved building blocks - the twenty canonical amino acids. These building blocks are chemically simple, but when they are organized in three-dimensional structures of tremendous complexity, new properties emerge. This review explores recent efforts in the directed discovery of functional nanoscale systems and materials based on these same amino acids, but that are not guided by copying or editing biological systems. The review summarises insights obtained using three complementary approaches of searching the sequence space to explore sequence-structure relationships for assembly, reactivity and complexation, namely: (i) strategic editing of short peptide sequences; (ii) computational approaches to predicting and comparing assembly behaviours; (iii) dynamic peptide libraries that explore the free energy landscape. These approaches give rise to guiding principles on controlling order/disorder, complexation and reactivity by peptide sequence design.
Coronary angiogram video compression for remote browsing and archiving applications.
Ouled Zaid, Azza; Fradj, Bilel Ben
2010-12-01
In this paper, we propose an H.264/AVC based compression technique adapted to coronary angiograms. The H.264/AVC coder has proven to use the most advanced and accurate motion compensation process, but at the cost of high computational complexity. On the other hand, analysis of coronary X-ray images reveals large areas containing no diagnostically important information. Our contribution is to exploit the energy characteristics of equal-size slice regions to determine the regions with relevant information content, to be encoded using the H.264 coding paradigm. The other regions are compressed using fixed block motion compensation and conventional hard-decision quantization. Experiments have shown that at the same bitrate, this procedure reduces the H.264 coder's computing time by about 25% while attaining the same visual quality. A subjective assessment based on the consensus approach leads to a compression ratio of 30:1, which ensures both diagnostic adequacy and sufficient compression with regard to storage and transmission requirements. Copyright © 2010 Elsevier Ltd. All rights reserved.
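The energy-driven region selection can be illustrated with a small sketch. The slice count, the variance-based energy measure, and the threshold below are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

def select_relevant_slices(frame, n_slices=8, thresh_ratio=0.5):
    """Split a frame into equal-size horizontal slices and flag the slices
    whose energy suggests diagnostically relevant content (to be encoded
    with the full H.264 tools); the rest fall back to the cheap path."""
    h = frame.shape[0] - frame.shape[0] % n_slices
    slices = frame[:h].reshape(n_slices, -1)
    energy = slices.var(axis=1)          # low variance ~ uniform background
    return energy > thresh_ratio * energy.max()

# toy frame: flat background with one textured "vessel" region in the middle
frame = np.zeros((64, 64))
frame[24:40, 24:40] = np.random.default_rng(1).normal(128, 20, (16, 16))
mask = select_relevant_slices(frame)     # True only for the two active slices
```

Only the slices flagged in `mask` would go through the expensive motion-compensated coding path.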
Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D
2013-11-01
There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. 
Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.
Investigation of Kevlar fabric based materials for use with inflatable structures
NASA Technical Reports Server (NTRS)
Niccum, R. J.; Munson, J. B.
1974-01-01
Design, manufacture and testing of laminated and coated composite materials incorporating a structural matrix of Kevlar are reported in detail. The practicality of using Kevlar in aerostat materials is demonstrated and data are provided on practical weaves, lamination and coating particulars, rigidity, strength, weight, elastic coefficients, abrasion resistance, crease effects, peel strength, blocking tendencies, helium permeability, and fabrication techniques. Properties of the Kevlar based materials are compared with conventional, Dacron reinforced counterparts. A comprehensive test and qualification program is discussed and quantitative biaxial tensile and shear test data are provided. The investigation shows that single ply laminates of Kevlar and plastic films offer significant strength to weight improvements, are less permeable than two ply coated materials, but have a lower flex life.
Recognition of Computer Viruses by Detecting Their Gene of Self Replication
2006-03-01
...detection approach ... syntactic analysis ... Therefore a group of instructions acting together in the right order has to be identified for the gene of self-replication to be obvious in a ... its first system call NtCreateFile, while the outputs of NtWriteFile become its output arguments. These four blocks form the final structure - the Gene.
Computational Methods for Control and Estimation of Distributed System
1988-08-01
...prey example. [1987, August] Estimation of Nonlinearities in Parabolic Models for Growth, Predation and Dispersal of Populations. ... techniques for infinite dimensional systems. (v) Control and stabilization of visco-elastic structures. (vi) Approximation in delay and Volterra type systems.
Parallel structures in human and computer memory
NASA Astrophysics Data System (ADS)
Kanerva, Pentti
1986-08-01
If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.
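The kind of content-indexed, noise-tolerant memory described above can be sketched as a toy Kanerva-style sparse distributed memory. The word length, number of hard locations, and activation radius below are toy-scale assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 2000, 112            # word length, hard locations, Hamming radius

hard = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N))         # per-location bit counters

def activated(addr):
    """Hard locations within Hamming distance R of the address."""
    return (hard != addr).sum(axis=1) <= R

def write(addr, data):
    counters[activated(addr)] += 2 * data - 1   # store as +1/-1 increments

def read(addr):
    s = counters[activated(addr)].sum(axis=0)
    return (s > 0).astype(int)                  # majority vote per bit

pattern = rng.integers(0, 2, N)
write(pattern, pattern)             # autoassociative store

noisy = pattern.copy()
noisy[rng.choice(N, 20, replace=False)] ^= 1    # corrupt ~8% of the cue bits
recalled = read(noisy)              # approximate cue retrieves the stored word
```

Because the activated sets of the clean and noisy addresses overlap heavily, the majority vote reconstructs the stored pattern, which is the "fuzzy photograph" behavior described above.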
The hierarchical expert tuning of PID controllers using tools of soft computing.
Karray, F; Gueaieb, W; Al-Sharhan, S
2002-01-01
We present soft computing-based results pertaining to the hierarchical tuning process of PID controllers located within the control loop of a class of nonlinear systems. The results are compared with PID controllers implemented either in a stand alone scheme or as a part of conventional gain scheduling structure. This work is motivated by the increasing need in the industry to design highly reliable and efficient controllers for dealing with regulation and tracking capabilities of complex processes characterized by nonlinearities and possibly time varying parameters. The soft computing-based controllers proposed are hybrid in nature in that they integrate within a well-defined hierarchical structure the benefits of hard algorithmic controllers with those having supervisory capabilities. The controllers proposed also have the distinct features of learning and auto-tuning without the need for tedious and computationally extensive online systems identification schemes.
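The hierarchical idea of a supervisory layer re-tuning a hard PID loop can be sketched as follows. The plant, the crude rule-based supervisor, and all gains are illustrative stand-ins for the paper's fuzzy/learning hierarchy, not its actual controllers.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        """One discrete PID update."""
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def supervisor(pid, err):
    """Crude rule-based tuning layer: aggressive far from the setpoint,
    gentle near it (a stand-in for the supervisory soft-computing layer)."""
    pid.kp = 4.0 if abs(err) > 0.5 else 1.5

# first-order plant x' = -x + u, driven to setpoint 1.0 by the tuned loop
x, setpoint, dt = 0.0, 1.0, 0.01
pid = PID(kp=1.5, ki=0.8, kd=0.05, dt=dt)
for _ in range(3000):                 # simulate 30 s
    err = setpoint - x
    supervisor(pid, err)              # hierarchy: supervisor adjusts gains online
    u = pid.step(err)                 # hard algorithmic inner loop
    x += (-x + u) * dt                # explicit Euler plant update
```

The point of the structure is the separation of concerns: the inner loop stays a plain PID, while the outer layer handles tuning without online system identification.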
Parallel structures in human and computer memory
NASA Technical Reports Server (NTRS)
Kanerva, P.
1986-01-01
If one thinks of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library. One recognizes a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. A computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do is constructed. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. It is concluded that the frame problem of artificial intelligence could be solved by the use of such a memory if one were able to encode information about the world properly.
NASA Astrophysics Data System (ADS)
Gokhale, Shreyas; Hima Nagamanasa, K.; Sood, A. K.; Ganapathy, Rajesh
2016-07-01
Elucidating the nature of the glass transition has been the holy grail of condensed matter physics and statistical mechanics for several decades. A phenomenological aspect that makes glass formation a conceptually formidable problem is that structural and dynamic correlations in glass-forming liquids are too subtle to be captured at the level of conventional two-point functions. As a consequence, a host of theoretical techniques, such as quenched amorphous configurations of particles, have been devised and employed in simulations and colloid experiments to gain insights into the mechanisms responsible for these elusive correlations. Very often, though, the analysis of spatio-temporal correlations is performed in the context of a single theoretical framework, and critical comparisons of microscopic predictions of competing theories are thereby lacking. Here, we address this issue by analysing the distribution of localized excitations, which are building blocks of relaxation as per the dynamical facilitation (DF) theory, in the presence of an amorphous wall, a construct motivated by the random first-order transition theory (RFOT). We observe that spatial profiles of the concentration of excitations exhibit complex features such as non-monotonicity and oscillations. Moreover, the smoothly varying part of the concentration profile yields a length scale ξ_c, which we compare with a previously computed length scale ξ_dyn. Our results suggest a method to assess the role of dynamical facilitation in governing structural relaxation in glass-forming liquids.
A data-management system for detailed areal interpretive data
Ferrigno, C.F.
1986-01-01
A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
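The hierarchical latitude-longitude blocking can be illustrated with a short sketch. The key layout (a 1-degree parent block containing 6-arcsecond fine cells) is an assumption for illustration; the original system's exact key scheme is not documented here.

```python
def block_key(lat_deg, lon_deg, cell_sec=6):
    """Map a coordinate to a hierarchical key: a coarse 1-degree block and
    the 6-arcsecond fine cell (~0.01 sq mi) within that block."""
    lat_sec = round(lat_deg * 3600)             # degrees -> arcseconds
    lon_sec = round(lon_deg * 3600)
    block = (lat_sec // 3600, lon_sec // 3600)  # 1-degree parent block
    cell = ((lat_sec % 3600) // cell_sec,       # fine cell indices, 0..599
            (lon_sec % 3600) // cell_sec)
    return block, cell

# example: a point in Washington, D.C. (floor division keeps west longitudes
# consistent, so the parent block for -77.04 is -78)
block, cell = block_key(38.8977, -77.0365)
```

Gridding for a groundwater model then reduces to iterating over the cells that fall inside the model's rows and columns and reading the areal value stored under each key.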
An Accelerated Recursive Doubling Algorithm for Block Tridiagonal Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K
2014-01-01
Block tridiagonal systems of linear equations arise in a wide variety of scientific and engineering applications. The recursive doubling algorithm is a well-known prefix computation-based numerical algorithm that requires O(M^3(N/P + log P)) work to compute the solution of a block tridiagonal system with N block rows and block size M on P processors. In real-world applications, solutions of tridiagonal systems are most often sought with multiple, often hundreds and thousands, of different right hand sides but with the same tridiagonal matrix. Here, we show that a recursive doubling algorithm is sub-optimal when computing solutions of block tridiagonal systems with multiple right hand sides and present a novel algorithm, called the accelerated recursive doubling algorithm, that delivers O(R) improvement when solving block tridiagonal systems with R distinct right hand sides. Since R is typically about 100-1000, this improvement translates to very significant speedups in practice. Detailed complexity analyses of the new algorithm with empirical confirmation of runtime improvements are presented. To the best of our knowledge, this algorithm has not been reported before in the literature.
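The benefit of reusing matrix-dependent work across right-hand sides can be illustrated with a serial block Thomas solve (not the parallel recursive-doubling algorithm itself): the block factorization is computed once, and each of the R right-hand sides reuses it. Sizes here are toy values.

```python
import numpy as np

def block_thomas_factor(lower, diag, upper):
    """One-time forward elimination, O(n M^3): returns the inverted pivot
    blocks and the elimination multipliers, reusable for any RHS."""
    n = len(diag)
    dinv = [np.linalg.inv(diag[0])]
    mult = [None]
    for i in range(1, n):
        m = lower[i - 1] @ dinv[i - 1]
        mult.append(m)
        dinv.append(np.linalg.inv(diag[i] - m @ upper[i - 1]))
    return dinv, mult

def block_thomas_solve(dinv, mult, upper, b):
    """Per-RHS substitution: only matrix-vector products, O(n M^2)."""
    n = len(dinv)
    y = [b[0]]
    for i in range(1, n):
        y.append(b[i] - mult[i] @ y[i - 1])          # forward sweep
    x = [None] * n
    x[-1] = dinv[-1] @ y[-1]
    for i in range(n - 2, -1, -1):                   # backward sweep
        x[i] = dinv[i] @ (y[i] - upper[i] @ x[i + 1])
    return np.concatenate(x)

rng = np.random.default_rng(0)
nblk, M = 5, 3                                       # block rows, block size
lower = [rng.standard_normal((M, M)) for _ in range(nblk - 1)]
upper = [rng.standard_normal((M, M)) for _ in range(nblk - 1)]
diag = [rng.standard_normal((M, M)) + 4 * np.eye(M) for _ in range(nblk)]
dinv, mult = block_thomas_factor(lower, diag, upper)  # pay O(n M^3) once
rhs_list = [[rng.standard_normal(M) for _ in range(nblk)] for _ in range(4)]
sols = [block_thomas_solve(dinv, mult, upper, b) for b in rhs_list]
```

Amortizing the cubic-in-M factorization over many right-hand sides is the serial analogue of the O(R) saving the accelerated algorithm achieves in the parallel prefix setting.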
Vascular tissue engineering by computer-aided laser micromachining.
Doraiswamy, Anand; Narayan, Roger J
2010-04-28
Many conventional technologies for fabricating tissue engineering scaffolds are not suitable for fabricating scaffolds with patient-specific attributes. For example, many conventional technologies do not provide control over overall scaffold geometry or over cell position within the scaffold. In this study, the use of computer-aided laser micromachining to create scaffolds for vascular tissue networks was investigated. Computer-aided laser micromachining was used to construct patterned surfaces in agarose or in silicon, which were used for differential adherence and growth of cells into vascular tissue networks. Concentric three-ring structures were fabricated on agarose hydrogel substrates, in which the inner ring contained human aortic endothelial cells, the middle ring contained HA587 human elastin and the outer ring contained human aortic vascular smooth muscle cells. Basement membrane matrix containing vascular endothelial growth factor and heparin was used to promote proliferation of human aortic endothelial cells within the vascular tissue networks. Computer-aided laser micromachining provides a unique approach to fabricate small-diameter blood vessels for bypass surgery as well as other artificial tissues with complex geometries.
Sustainable management and utilisation of concrete slurry waste: A case study in Hong Kong.
Hossain, Md Uzzal; Xuan, Dongxing; Poon, Chi Sun
2017-03-01
With the promotion of environmental protection in the construction industry, achieving more sustainable use of resources during concrete production is becoming increasingly important. This study was conducted to assess the environmental sustainability of concrete slurry waste (CSW) management by life cycle assessment (LCA) techniques, with the aim of identifying a resource-efficient solution for utilisation of CSW in the production of partition wall blocks. CSW is the dewatered solid residue deposited in the sedimentation tank after washing out over-ordered/rejected fresh concrete and concrete trucks in concrete batching plants. The reuse of CSW as recycled aggregates or a cementitious binder for producing partition wall blocks, and the life cycle environmental impact of the blocks, were assessed and compared with a conventional block designed with natural materials. The LCA results showed that the partition wall blocks prepared with fresh CSW and recycled concrete aggregates achieved higher sustainability, as they consumed 59% less energy, emitted 66% less greenhouse gases, and produced a smaller amount of other environmental impacts than the conventional blocks. When mineral carbonation technology was further adopted for block curing using CO2, the global warming potential of the corresponding block production process was negligible, and hence the carbonated blocks may be considered a carbon-neutral eco-product. Copyright © 2017 Elsevier Ltd. All rights reserved.
A novel approach to multiple sequence alignment using hadoop data grids.
Sudha Sadasivam, G; Baktavatchalam, G
2010-01-01
Multiple alignment of protein sequences helps to determine evolutionary linkage and to predict molecular structures. The factors to be considered while aligning multiple sequences are speed and accuracy of alignment. Although dynamic programming algorithms produce accurate alignments, they are computation-intensive. In this paper we propose a time-efficient approach to sequence alignment that also produces quality alignments. The dynamic nature of the algorithm coupled with the data and computational parallelism of Hadoop data grids improves the accuracy and speed of sequence alignment. The principle of block splitting in Hadoop coupled with its scalability facilitates alignment of very large sequences.
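The map/reduce pattern implied above can be sketched in miniature. In this hedged illustration (plain Python, not Hadoop APIs; function names are made up), each "map" task independently scores one sequence pair with a simple Needleman-Wunsch dynamic program, and the "reduce" step aggregates the pairwise scores, as a progressive multiple-alignment pipeline would before building a guide tree.

```python
# Toy map/reduce decomposition of the pairwise stage of an MSA pipeline.
# Each map record is independent, which is what a Hadoop-style data grid
# parallelizes; names here are illustrative, not framework APIs.
from itertools import combinations

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score (row-by-row DP)."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]

def map_phase(seqs):
    # one record per sequence pair, each processable in parallel
    return [((i, j), nw_score(seqs[i], seqs[j]))
            for i, j in combinations(range(len(seqs)), 2)]

def reduce_phase(records):
    # aggregate: here, pick the most similar pair to merge first
    return max(records, key=lambda kv: kv[1])
```

For `["GATTACA", "GATACA", "ACGT"]` the reducer selects the pair (0, 1), whose optimal global alignment scores 5 (six matches, one gap) under these parameters.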
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault- and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.
NASA Astrophysics Data System (ADS)
Vargantwar, Pruthesh Hariharrao
Block copolymers (BCs) have remained at the forefront of materials research due to their versatility in applications ranging from hot-melt/pressure-sensitive adhesives and impact modifiers to compatibilizing agents and vibration-dampening/nanotemplating media. Of particular interest are macromolecules composed of two or more chemically dissimilar blocks covalently linked together to form triblock or pentablock copolymers. If the blocks are sufficiently incompatible and the copolymer behaves as a thermoplastic elastomer, the molecules can spontaneously self-assemble to form nanostructured materials that exhibit shape memory due to the formation of a supramolecular network. BCs of these types are termed conventional. When BCs contain blocks bearing ionic moieties such as sulfonic acid groups, they are termed block ionomers. Designing new systems based on either conventional or ionic BCs, characterizing their structure-property relationships, and using them as electroactive polymers form the essential objectives of this work. Electroactive polymers (EAPs) exhibit electromechanical actuation when stimulated by an external electric field. In the first part of this work, it is shown that BCs resolve some of the outstanding problems presently encountered in the design of two different classes of EAP actuators: dielectric elastomers (DEs) and ionic polymer metal composites (IPMCs). All-acrylic triblock copolymer gels used as DEs actuate with high efficacy without any requirement of mechanical prestrain and thus eliminate the need for the bulky and heavy hardware essential with prestrained dielectric actuators, as well as the material problems associated with stress relaxation. The dependence of actuation behavior on gel morphology, as evaluated from mechanical and microstructural studies, is observed.
In the case of IPMCs, the ionic BCs employed in this study greatly facilitate processing compared to other contenders such as Nafion®, which is commonly used in this class of EAPs. The unique copolymer investigated here (i) retains its mechanical integrity when highly solvated by polar solvents, (ii) demonstrates a high degree of actuation when tested in a cantilever configuration, and (iii) avoids the shortcomings of back-relaxation/overshoot within the testing conditions when used in combination with an appropriate solvent. In the second part of this work, two chemical strategies to design midblock-sulfonated block ionomers are explored. In one case, selective sulfonation of the midblocks in triblock copolymers is achieved via a dioxane:sulfur trioxide chemistry, while in the other acetyl sulfate is used for the same purpose. Excellent control over the degree of sulfonation (DOS) is achieved. The block ionomers swell in different solvents while retaining their mechanical integrity. They show disorder-order, order-order, and order-reduced-order morphological transitions as DOS varies. These transitions in morphology are reflected in their thermal behavior as well. The microstructures show periodicity, which is, again, a function of DOS. The transitions are explained in terms of the molar volume expansion and volume densification of the blocks on sulfonation. The ionic levels, morphology, and periodicity in microstructure are important for applications such as actuators, sensors, and fuel cell membranes. The ability to tune these aspects in the ionomers designed in this work makes them potential candidates for these applications.
NASA Astrophysics Data System (ADS)
Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu
2018-06-01
Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy of lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’) combined with motion-compensated reconstruction to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing-data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction at 1/3 imaging dose could produce satisfactory images and achieve a 37% improvement in the structural similarity (SSIM) index and a 55% improvement in root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinically relevant metrics.
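The blocker-based scatter correction can be illustrated in one dimension. This is a hedged toy model (layout and values are made up, and real CBCT scatter estimation is more involved): pixels behind the blocker strips see, ideally, only scatter, which varies smoothly across the detector, so the full scatter field can be interpolated from the blocked samples and subtracted from the open regions.

```python
# 1-D sketch of estimating scatter from blocked detector strips and
# subtracting it; geometry and signal values are purely illustrative.
import numpy as np

def correct_scatter(measured, blocked_mask):
    """Interpolate scatter from blocked pixels, then subtract it everywhere."""
    idx = np.arange(measured.size)
    scatter_est = np.interp(idx, idx[blocked_mask], measured[blocked_mask])
    return measured - scatter_est

pixels = np.arange(256)
primary = np.exp(-((pixels - 128) / 60.0) ** 2)          # attenuated beam
scatter = 0.3 + 0.1 * np.sin(pixels / 40.0)              # smooth scatter field
blocked = (pixels // 16) % 2 == 0                        # blocker strips
measured = np.where(blocked, scatter, primary + scatter) # blocked: scatter only
corrected = correct_scatter(measured, blocked)
```

Because the scatter field is smooth relative to the strip spacing, linear interpolation across the 16-pixel open gaps recovers the primary signal to within a fraction of a percent in this toy setup.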
Rational design of alpha-helical tandem repeat proteins with closed architectures
Doyle, Lindsey; Hallinan, Jazmine; Bolduc, Jill; Parmeggiani, Fabio; Baker, David; Stoddard, Barry L.; Bradley, Philip
2015-01-01
Tandem repeat proteins, which are formed by repetition of modular units of protein sequence and structure, play important biological roles as macromolecular binding and scaffolding domains, enzymes, and building blocks for the assembly of fibrous materials [1,2]. The modular nature of repeat proteins enables the rapid construction and diversification of extended binding surfaces by duplication and recombination of simple building blocks [3,4]. The overall architecture of tandem repeat protein structures – which is dictated by the internal geometry and local packing of the repeat building blocks – is highly diverse, ranging from extended, super-helical folds that bind peptide, DNA, and RNA partners [5-9], to closed and compact conformations with internal cavities suitable for small molecule binding and catalysis [10]. Here we report the development and validation of computational methods for de novo design of tandem repeat protein architectures driven purely by geometric criteria defining the inter-repeat geometry, without reference to the sequences and structures of existing repeat protein families. We have applied these methods to design a series of closed alpha-solenoid [11] repeat structures (alpha-toroids) in which the inter-repeat packing geometry is constrained so as to juxtapose the N- and C-termini; several of these designed structures have been validated by X-ray crystallography. Unlike previous approaches to tandem repeat protein engineering [12-20], our design procedure does not rely on template sequence or structural information taken from natural repeat proteins and hence can produce structures unlike those seen in nature. As an example, we have successfully designed and validated closed alpha-solenoid repeats with a left-handed helical architecture that – to our knowledge – is not yet present in the protein structure database [21]. PMID:26675735
The influence of the in situ camera calibration for direct georeferencing of aerial imagery
NASA Astrophysics Data System (ADS)
Mitishita, E.; Barrios, R.; Centeno, J.
2014-11-01
The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process, the accuracies of the obtained results depend on the quality of a group of parameters that accurately model the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, since it is impossible to have all sensors at the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary in different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which are not available in the conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs are performed, and their results are analyzed and used in photogrammetric experiments.
The IOPs from the in situ camera calibration significantly improve the accuracy of the direct georeferencing. The results obtained from the experiments are shown and discussed.
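The mounting-parameter geometry discussed above can be sketched with a small numerical example. This is a hedged illustration with made-up angles and offsets: the camera projection centre is the GNSS/INS position plus the lever-arm offset rotated into the mapping frame, and the camera attitude is the IMU attitude composed with the boresight misalignment.

```python
# Toy lever-arm and boresight corrections for direct georeferencing.
# A single yaw rotation stands in for a full attitude matrix; all
# numeric values are illustrative, not calibration results.
import numpy as np

def rot_z(a):
    """Yaw rotation matrix (radians), used here for both attitudes."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_pose(gnss_pos, R_imu, lever_arm, R_boresight):
    centre = gnss_pos + R_imu @ lever_arm   # lever-arm correction
    R_cam = R_imu @ R_boresight             # boresight correction
    return centre, R_cam

gnss = np.array([500000.0, 4500000.0, 1500.0])  # mapping-frame position (m)
R_imu = rot_z(np.deg2rad(90.0))                 # platform yaw of 90 degrees
lever = np.array([0.5, 0.0, -0.2])              # body-frame offsets (m)
centre, R_cam = camera_pose(gnss, R_imu, lever, rot_z(np.deg2rad(0.1)))
```

With a 90-degree yaw, the 0.5 m forward lever arm shifts the projection centre sideways in the mapping frame, which is exactly the kind of systematic offset the system calibration has to capture.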
CFD Analysis and Design Optimization Using Parallel Computers
NASA Technical Reports Server (NTRS)
Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James
1997-01-01
A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
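Why the adjoint formulation makes design optimization cheap can be shown with a toy linear-algebra analogue (this is a stand-in, not the RANS implementation): for a cost J(α) = cᵀu with state equation A(α)u = b, one extra solve of Aᵀλ = c yields the gradient with respect to all design variables at once, instead of one "flow solve" per variable.

```python
# Toy adjoint-gradient demonstration. A0 and the E matrices are arbitrary
# stand-ins for a discretized flow operator and its design sensitivities.
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 3
A0 = rng.standard_normal((n, n)) + 5 * np.eye(n)      # well-conditioned base
E = [rng.standard_normal((n, n)) for _ in range(p)]   # dA/d_alpha_k
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def J(alpha):
    """Cost functional c^T u with A(alpha) u = b."""
    A = A0 + sum(a * Ek for a, Ek in zip(alpha, E))
    return c @ np.linalg.solve(A, b)

def grad_adjoint(alpha):
    """Full gradient from one primal solve plus one adjoint solve."""
    A = A0 + sum(a * Ek for a, Ek in zip(alpha, E))
    u = np.linalg.solve(A, b)        # primal solve
    lam = np.linalg.solve(A.T, c)    # adjoint solve
    return np.array([-lam @ (Ek @ u) for Ek in E])
```

The derivation is two lines: differentiating A u = b gives du = -A⁻¹ dA u, so dJ = -cᵀA⁻¹ dA u = -λᵀ dA u with Aᵀλ = c, and the cost is independent of the number of design variables p.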
Pourgiezis, N; Reddy, S P; Nankivell, M; Morrison, G; VanEssen, J
2016-08-01
To compare patient-matched instrumentation (PMI) with conventional total knee arthroplasty (TKA) in terms of limb alignment and component position. Nine men and 36 women (mean age, 69.5 years) who underwent PMI TKA were compared with 20 men and 25 women (mean age, 69.3 years) who underwent conventional TKA by the same team of surgeons with the same prosthesis and protocols in terms of limb alignment and component position using the Perth protocol computed tomography, as well as bone resection measurements, operating time, and the number of trays used. The PMI and conventional TKA groups were comparable in terms of age, body mass index, tourniquet time, operating time, and the number of trays used. For limb alignment and component position, the 2 groups differed significantly in sagittal femoral component position (2.4° vs. 0.9°, p=0.0008) and the percentage of knees with femoral component internally rotated ≥1° with respect to the transepicondylar axis (20% vs. 55%, p=0.001). The difference was not significant in terms of limb alignment, coronal and rotational femoral component position, or coronal and sagittal tibial component position. Intra-operatively, all patient-matched cutting blocks demonstrated acceptable fit and stability. No instrument-related adverse events or complications were encountered. One (2.2%) femur and 6 (13.3%) tibiae were recut 2 mm for optimal ligament balancing. Two femoral components were upsized to the next size, and 2 tibial components were upsized and 2 downsized to the next size. PMI was as accurate as conventional instrumentation in TKA. There was no significant difference in limb alignment or femoral and tibial component position in the coronal and sagittal planes between PMI and conventional TKA. PMI had a higher tendency to achieve correct femoral component rotation.
A Low Cost Structurally Optimized Design for Diverse Filter Types
Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar
2016-01-01
A wide range of image processing applications deploy two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition, and compression. All of these tasks require multiple types of 2D-filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D-filters, especially in a resource-constrained environment; this calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrow-scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D-filters to effectually reduce their computational cost, along with the added advantage of versatility for supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry-based subtypes and also its special asymmetric filter case.
The two-fold optimized framework thus reduces filter computational cost by up to 75% compared to the conventional approach, while its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for the sequential implementation of diversified image processing applications, especially in a constrained environment. PMID:27832133
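The coefficient-count argument behind the "up to 75%" figure can be sketched directly (a hedged illustration of quadrant symmetry only; the paper's composite structure also exploits circular T-symmetry): a quadrant-symmetric kernel h(i, j) = h(|i|, |j|) is fully defined by its non-negative quadrant, so the number of distinct multipliers drops roughly fourfold.

```python
# Expand a quadrant of coefficients into a full quadrant-symmetric kernel
# and count taps vs. distinct multipliers; sizes are illustrative.
import numpy as np

def quadrant_symmetric(core):
    """Expand an (M+1)x(M+1) quadrant into the full (2M+1)x(2M+1) kernel."""
    M = core.shape[0] - 1
    h = np.zeros((2 * M + 1, 2 * M + 1))
    for i in range(-M, M + 1):
        for j in range(-M, M + 1):
            h[i + M, j + M] = core[abs(i), abs(j)]
    return h

M = 3
core = np.arange((M + 1) ** 2, dtype=float).reshape(M + 1, M + 1)
h = quadrant_symmetric(core)
total = (2 * M + 1) ** 2    # 49 taps in the full kernel
unique = (M + 1) ** 2       # only 16 distinct multipliers
```

As M grows, unique/total tends to 1/4, which is where a hardware implementation can share multipliers and approach the 75% reduction.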
Eigenvalue routines in NASTRAN: A comparison with the Block Lanczos method
NASA Technical Reports Server (NTRS)
Tischler, V. A.; Venkayya, Vipperla B.
1993-01-01
The NASA STRuctural ANalysis (NASTRAN) program is one of the most extensively used engineering analysis software packages in the world. It contains a wealth of matrix operations and numerical solution techniques, which were used to construct efficient eigenvalue routines. The purpose of this paper is to examine the current eigenvalue routines in NASTRAN and to make efficiency comparisons with a more recent implementation of the Block Lanczos algorithm by Boeing Computer Services (BCS). This eigenvalue routine is now available in the BCS mathematics library as well as in several commercial versions of NASTRAN. In addition, CRAY maintains a modified version of this routine on their network. Several example problems, with a varying number of degrees of freedom, were selected primarily for efficiency benchmarking. Accuracy is not an issue, because they all gave comparable results. The Block Lanczos algorithm was found to be extremely efficient, in particular for very large problems.
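The core idea behind the Lanczos family can be shown with a plain single-vector iteration (a hedged sketch; the Block Lanczos method benchmarked above carries several vectors per step and is considerably more elaborate): the method builds a small tridiagonal matrix whose extreme eigenvalues converge rapidly to those of the large symmetric matrix.

```python
# Minimal symmetric Lanczos iteration with full reorthogonalization,
# estimating the largest eigenvalue from the small tridiagonal matrix T.
import numpy as np

def lanczos_extreme(A, m=100, seed=0):
    """Largest-eigenvalue estimate of symmetric A after m Lanczos steps."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, m))
    alphas, betas = [], []
    for k in range(m):
        Q[:, k] = q
        w = A @ q
        alphas.append(q @ w)
        # full reorthogonalization keeps the Krylov basis orthogonal
        w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)
        beta = np.linalg.norm(w)
        if beta < 1e-12:
            break                        # invariant subspace found
        betas.append(beta)
        q = w / beta
    off = betas[:len(alphas) - 1]
    T = np.diag(alphas) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(T)[-1]

K = np.diag(np.arange(1.0, 201.0))       # stand-in matrix with known spectrum
est = lanczos_extreme(K, m=100)
```

Only matrix-vector products with A are needed, which is why Lanczos-type routines excel on the large sparse stiffness and mass matrices typical of NASTRAN models.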
Nanopowder synthesis based on electric explosion technology
NASA Astrophysics Data System (ADS)
Kryzhevich, D. S.; Zolnikov, K. P.; Korchuganov, A. V.; Psakhie, S. G.
2017-10-01
A computer simulation of the bicomponent nanoparticle formation during the electric explosion of copper and nickel wires was carried out. The calculations were performed in the framework of the molecular dynamics method using many-body potentials of interatomic interaction. As a result of an electric explosion of dissimilar metal wires, bicomponent nanoparticles having different stoichiometry and a block structure can be formed. It is possible to control the process of destruction and the structure of the formed bicomponent nanoparticles by varying the distance between the wires and the loading parameters.
Performance of Ultra Wideband On-Body Communication Based on Statistical Channel Model
NASA Astrophysics Data System (ADS)
Wang, Qiong; Wang, Jianqing
Ultra wideband (UWB) on-body communication is attracting much attention in biomedical applications. In this paper, the performance of UWB on-body communication is investigated based on a statistically extracted on-body channel model, which provides detailed characteristics of the multipath-affected channel with an emphasis on various body postures and body movements. The achievable data rate, the achievable communication distance, and the bit error rate (BER) performance are clarified via computer simulation. It is found that the conventional correlation receiver performs poorly in the multipath-affected on-body channel, while the RAKE receiver outperforms it at the cost of structural complexity. Different RAKE receiver structures are compared to show the improvement in BER performance.
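The receiver comparison can be illustrated with a hedged toy model (the spreading code, delays, and gains are made up, and there is no noise or pulse shaping here): a BPSK symbol spread by a known code passes through a three-tap multipath channel; the single-finger correlator despreads only at the first path delay, while the RAKE receiver despreads at each path delay and combines the fingers weighted by the path gains (maximal-ratio combining), collecting energy from all paths.

```python
# Toy single-finger correlator vs. RAKE receiver over a 3-tap channel.
import numpy as np

code = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)  # spreading code
delays = [0, 2, 5]        # multipath delays in chips (illustrative)
gains = [1.0, 0.6, 0.4]   # known path gains (illustrative)

def transmit(symbol):
    tx = symbol * code
    rx = np.zeros(len(code) + max(delays))
    for d, g in zip(delays, gains):
        rx[d:d + len(code)] += g * tx   # multipath superposition
    return rx

def single_finger(rx):
    # correlate at the first path only
    return rx[:len(code)] @ code

def rake_output(rx):
    # despread each finger, weight by its gain, and sum (MRC)
    return sum(g * (rx[d:d + len(code)] @ code) for d, g in zip(delays, gains))
```

For a transmitted +1 the single finger yields 8.4 while the RAKE yields 14.4 in this toy setup: both decide correctly without noise, but the RAKE's larger decision statistic is what translates into the better BER once noise is present.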
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for the development of the system-level design model structure because it provided the capability for process-block material and energy balance and high-level system sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While the ASPEN physical property calculation routines are capable of generating the physical properties required for process simulation, not all required physical property data are available; missing data must be user-entered.
Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin
2017-01-01
The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
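The adaptive-ROI idea can be sketched with a NumPy-only toy tracker (a hedged stand-in, not the paper's algorithm; the frame, marker, and window size are synthetic): the marker centroid is found by thresholding only within a small window centred on the previous detection, and the threshold adapts to that window's own intensity statistics, so a global illumination change such as a burst of sunlight shifts the threshold along with the background.

```python
# Toy adaptive-ROI bright-marker tracker on a synthetic frame.
import numpy as np

def track_marker(frame, prev_xy, half=8):
    """Locate a bright marker near its previous (row, col) position."""
    r, c = prev_xy
    roi = frame[r - half:r + half, c - half:c + half]
    thresh = roi.mean() + 2 * roi.std()      # adaptive, local threshold
    ys, xs = np.nonzero(roi > thresh)
    if ys.size == 0:
        return prev_xy                       # keep last position if lost
    return (r - half + int(round(ys.mean())),
            c - half + int(round(xs.mean())))

# synthetic frame: uniform background plus a bright 3x3 marker
frame = np.full((80, 120), 30.0)
frame += 20.0                                # sudden global brightening
frame[39:42, 60:63] = 200.0                  # marker centred at (40, 61)
pos = track_marker(frame, (38, 58))
```

Because the threshold is computed from the ROI itself, adding a constant brightness offset to the whole frame leaves the detection unchanged, which is the robustness property the adaptive ROI is after.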
NASA Astrophysics Data System (ADS)
Nyoung Jang, Jin; Jong Lee, You; Jang, YunSung; Yun, JangWon; Yi, Seungjun; Hong, MunPyo
2016-06-01
In this study, we confirm that bombardment by high energy negative oxygen ions (NOIs) is the key origin of electro-optical property degradations in indium tin oxide (ITO) thin films formed by conventional plasma sputtering processes. To minimize the bombardment effect of NOIs, which are generated on the surface of the ITO targets and accelerated by the cathode sheath potential on the magnetron sputter gun (MSG), we introduce a magnetic field shielded sputtering (MFSS) system composed of a permanent magnetic array between the MSG and the substrate holder to block the arrival of energetic NOIs. The MFSS processed ITO thin films reveal a novel nanocrystal imbedded polymorphous structure, and present not only superior electro-optical characteristics but also higher gas diffusion barrier properties. To the best of our knowledge, no gas diffusion barrier composed of a single inorganic thin film formed by conventional plasma sputtering processes achieves such a low moisture permeability.
Development of ocular viscosity characterization method.
Shu-Hao Lu; Guo-Zhen Chen; Leung, Stanley Y Y; Lam, David C C
2016-08-01
Glaucoma is the second leading cause of blindness. Irreversible and progressive optic nerve damage results when the intraocular pressure (IOP) exceeds 21 mmHg. The elevated IOP is attributed to blocked fluid drainage from the eye. Methods to measure the IOP are widely available, but a method to measure the viscous response to blocked drainage has yet to be developed. An indentation method to characterize ocular flow is developed in this study. Analysis of the load-relaxation data from indentation tests on drainage-controlled porcine eyes showed that blocked drainage is correlated with increases in ocular viscosity. Successful correlation of ocular viscosity with drainage suggests that ocular viscosity may be further developed as a new diagnostic parameter for the assessment of normal tension glaucoma, where nerve damage occurs without noticeable IOP elevation, and as a diagnostic parameter complementary to conventional IOP in conventional diagnosis.
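Extracting a viscosity parameter from indentation load-relaxation data can be sketched as a curve fit. In this hedged illustration (synthetic values, not porcine-eye data, and a simple standard-linear-solid form rather than the study's analysis), the relaxation curve F(t) = F_inf + (F0 - F_inf) exp(-t/tau) is linearized by a log transform and the time constant tau, which would rise with blocked drainage, is recovered by a straight-line fit.

```python
# Fit an exponential load-relaxation curve to recover the time constant.
import numpy as np

def fit_relaxation(t, F):
    """Recover (F_inf, F0, tau) from a monotone relaxation curve."""
    F_inf = F[-1]                  # long-time plateau estimate
    d = F - F_inf
    mask = d > 0.01 * d[0]         # drop the fully relaxed tail
    slope, intercept = np.polyfit(t[mask], np.log(d[mask]), 1)
    return F_inf, F_inf + np.exp(intercept), -1.0 / slope

t = np.linspace(0.0, 20.0, 200)           # seconds
tau_true = 2.5                            # larger tau ~ higher viscosity
F = 5.0 + 3.0 * np.exp(-t / tau_true)     # synthetic load relaxation (mN)
F_inf, F0, tau = fit_relaxation(t, F)
```

The plateau estimate assumes the recording is long enough for near-complete relaxation; with noisy data one would instead fit all three parameters with a nonlinear least-squares routine.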
Shinzaki, Hazuki; Sunada, Katsuhisa
2015-01-01
Background: Conventional anesthetic nerve block injections into the mandibular foramen risk causing nerve damage. This study aimed to compare the efficacy and safety of the anterior technique (AT) of inferior alveolar nerve block using felypressin-propitocaine with a conventional nerve block technique (CT) using epinephrine and lidocaine for anesthesia via the mandibular foramen. Methods: Forty healthy university students with no recent dental work were recruited as subjects and assigned to two groups: right side CT or right side AT. Anesthesia was evaluated in terms of success rate, duration of action, and injection pain. These parameters were assessed at the first incisor, premolar, and molar, 60 min after injection. Chi-square and unpaired t-tests were used for statistical comparisons, with a P value of < 0.05 designating significance. Results: The two nerve block techniques generated comparable success rates for the right mandible, with rates of 65% (CT) and 60% (AT) at both the first molar and premolar, and rates of 60% (CT) and 50% (AT) at the lateral incisor. The duration of anesthesia using the CT was 233 ± 37 min, which was approximately 40 min shorter than using the AT. This difference was statistically significant (P < 0.05). Injection pain using the AT was rated as milder compared with the CT. This difference was also statistically significant (P < 0.05). Conclusions: The AT is no less successful than the CT for inducing anesthesia, and has the added benefits of a significantly longer duration of action and significantly less pain. PMID:28879260
Diagnostic ability of computed tomography using DentaScan software in endodontics: case reports.
Siotia, Jaya; Gupta, Sunil K; Acharya, Shashi R; Saraswathi, Vidya
2011-01-01
Radiographic examination is essential in diagnosis and treatment planning in endodontics. Conventional radiographs depict structures in two dimensions only; the ability to assess the area of interest in three dimensions is advantageous. Computed tomography is an imaging technique which produces three-dimensional images of an object by taking a series of two-dimensional sectional X-ray images. DentaScan is a computed tomography software program that allows the mandible and maxilla to be imaged in three planes: axial, panoramic, and cross-sectional. As computed tomography comes into use in endodontics, DentaScan can play a wider role in endodontic diagnosis. It provides valuable information in the assessment of root canal morphology, diagnosis of root fractures, internal and external resorption, pre-operative assessment of anatomic structures, etc. The aim of this article is to explore the clinical usefulness of computed tomography and DentaScan in endodontic diagnosis, through a series of four cases of different endodontic problems.
Usmani, Hammad; Dureja, G P; Andleeb, Roshan; Tauheed, Nazia; Asif, Naiyer
2018-01-10
Chronic nononcological perineal pain has been effectively managed by ganglion Impar block. Chemical neurolysis, cryoablation, and radiofrequency ablation have been the accepted methods of blockade. Recently, pulsed radiofrequency, a novel variant of conventional radiofrequency, has been used for this purpose. This was a prospective, randomized, double-blind study conducted at two interventional pain management centers in India, with the objective of comparing the efficacy of conventional radiofrequency and pulsed radiofrequency for ganglion Impar block. The patients were randomly allocated to one of two groups. In the conventional radiofrequency (CRF) group (N = 34), conventional radiofrequency ablation was performed, and in the pulsed radiofrequency (PRF) group (N = 31), pulsed radiofrequency ablation was performed. After informed and written consent, fluoroscopy-guided ganglion Impar block was performed through the first intracoccygeal approach. The extent of pain relief was assessed by visual analog scale (VAS) at 24 hours, and at the first, third, and sixth weeks following the intervention. A questionnaire to evaluate subjective patient satisfaction was also used at each follow-up visit. In the CRF group, the mean VAS score decreased significantly from the baseline value at each follow-up visit, whereas in the PRF group the decrease was insignificant except at the 24-hour follow-up. Intergroup comparison also showed significantly better pain relief in the CRF group as compared with the PRF group. At the end of follow-up, 28 patients (82%) in the CRF group and four patients (13%) in the PRF group had excellent results, as assessed by the subjective patient satisfaction questionnaire. There was no complication in any patient of either study group, except for short-lived infection at the site of skin puncture in a few.
Ganglion Impar block by conventional radiofrequency provided a significantly better quality of pain relief with no major side effects in patients with chronic nononcological perineal pain as compared with pulsed radiofrequency. The short-term follow-up period of only six weeks was a major drawback associated with this study. © 2018 American Academy of Pain Medicine.
NASA Astrophysics Data System (ADS)
Park, Sang-Gon; Jeong, Dong-Seok
2000-12-01
In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity through the UESA (Unimodal Error Surface Assumption), under which the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) have exploited the fact that global minimum points in real-world video sequences are centered at the position of zero motion. In the presence of large motion, however, these BMAs are easily trapped in local minima and yield poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks: the search origin is moved according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but a higher PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, with half the computational load.
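The MAE matching criterion and the search these algorithms accelerate can be illustrated with a minimal full-search baseline; a FADS-style method would instead evaluate far fewer candidates, seeding the search origin from neighboring blocks' motion vectors. The sketch below assumes grayscale NumPy frames, and the names (`mae`, `full_search`) are illustrative, not from the paper.

```python
import numpy as np

def mae(block_a, block_b):
    """Mean absolute error (MAE) between two equally sized blocks."""
    return np.mean(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)))

def full_search(ref, cur, top, left, bsize=8, srange=7):
    """Exhaustive block matching: return the motion vector (dy, dx) that
    minimizes the MAE between the current-frame block at (top, left) and
    candidate reference-frame blocks within +/- srange, plus its error."""
    h, w = ref.shape
    block = cur[top:top + bsize, left:left + bsize]
    best, best_err = (0, 0), float("inf")
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + bsize <= h and 0 <= x and x + bsize <= w:
                err = mae(block, ref[y:y + bsize, x:x + bsize])
                if err < best_err:
                    best, best_err = (dy, dx), err
    return best, best_err
```

Replacing the exhaustive double loop with a diamond-shaped pattern around a predicted origin is what trades this O(srange²) cost for near-DS complexity.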
A conceptual framework to support exposure science research ...
While knowledge of exposure is fundamental to assessing and mitigating risks, exposure information has been costly and difficult to generate. Driven by major scientific advances in analytical methods, biomonitoring, computational tools, and a newly articulated vision for a greater impact in public health, the field of exposure science is undergoing a rapid transition that allows it to be more agile, predictive, and data- and knowledge-driven. A necessary element of this evolved paradigm is an organizational and predictive framework for exposure science that furthers the application of systems-based approaches. To enable such systems-based approaches, we proposed the Aggregate Exposure Pathway (AEP) concept to organize data and information emerging from an invigorated and expanding field of exposure science. The AEP framework is a layered structure that describes the elements of an exposure pathway, as well as the relationship between those elements. The basic building blocks of an AEP adopt the naming conventions used for Adverse Outcome Pathways (AOPs): Key Events (KEs) to describe the measurable, obligate steps through the AEP; and Key Event Relationships (KERs) describe the linkages between KEs. Importantly, the AEP offers an intuitive approach to organize exposure information from sources to internal site of action, setting the stage for predicting stressor concentrations at an internal target site. These predicted concentrations can help inform the r
Al-Dwairi, Ziad N; Tahboub, Kawkab Y; Baba, Nadim Z; Goodacre, Charles J
2018-06-13
The introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) technology to the field of removable prosthodontics has recently made it possible to fabricate complete dentures from prepolymerized polymethyl methacrylate (PMMA) blocks, which are claimed to have better mechanical properties; however, no published reports have evaluated the mechanical properties of CAD/CAM PMMA. The purpose of this study was to compare the flexural strength, impact strength, and flexural modulus of two brands of CAD/CAM PMMA and a conventional heat-cured PMMA. Forty-five rectangular specimens (65 mm × 10 mm × 3 mm) were fabricated (15 CAD/CAM AvaDent PMMA specimens from AvaDent, 15 CAD/CAM Tizian PMMA specimens from Schütz Dental, 15 conventional Meliodent PMMA specimens from Heraeus Kulzer) and stored in distilled water at 37 ± 1°C for 7 days. Specimens (N = 15) in each group were subjected to the three-point bending test and impact strength test, employing the Charpy configuration on unnotched specimens. The morphology of the fractured specimens was studied under a scanning electron microscope (SEM). Statistical analysis was performed using one-way ANOVA and Tukey pairwise multiple comparisons with a 95% confidence interval. The Schütz Dental specimens showed the highest mean flexural strength (130.67 MPa) and impact strength (29.56 kg/m²). The highest mean flexural modulus was recorded in the AvaDent group (2519.6 MPa). The conventional heat-cured group showed the lowest mean flexural strength (93.33 MPa), impact strength (14.756 kg/m²), and flexural modulus (2117.2 MPa). Differences in means of flexural properties between AvaDent and Schütz Dental specimens were not statistically significant (p > 0.05). As the CAD/CAM PMMA specimens exhibited improved flexural strength, flexural modulus, and impact strength in comparison with the conventional heat-cured group, CAD/CAM dentures are expected to be more durable.
Different brands of CAD/CAM PMMA may have inherent variations in mechanical properties. © 2018 by the American College of Prosthodontists.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
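The Golden Block Search mentioned above builds on one-dimensional golden-section line search. As a hedged illustration of the underlying serial technique only (assuming a unimodal objective; this is not the paper's parallel-vector implementation):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search:
    each iteration shrinks the bracket by the inverse golden ratio."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):      # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

A block (parallel) variant evaluates several interior points of the bracket per iteration instead of one, which shrinks the interval faster per step on multiple processors.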
Accessing microfluidics through feature-based design software for 3D printing.
Shankles, Peter G; Millet, Larry J; Aufrecht, Jayde A; Retterer, Scott T
2018-01-01
Additive manufacturing has been a cornerstone of the product development pipeline for decades, playing an essential role in the creation of both functional and cosmetic prototypes. In recent years, the prospects for distributed and open source manufacturing have grown tremendously. This growth has been enabled by an expanding library of printable materials, low-cost printers, and communities dedicated to platform development. The microfluidics community has embraced this opportunity to integrate 3D printing into the suite of manufacturing strategies used to create novel fluidic architectures. The rapid turnaround time and low cost to implement these strategies in the lab makes 3D printing an attractive alternative to conventional micro- and nanofabrication techniques. In this work, the production of multiple microfluidic architectures using a hybrid 3D printing-soft lithography approach is demonstrated and shown to enable rapid device fabrication with channel dimensions that take advantage of laminar flow characteristics. The fabrication process outlined here is underpinned by the implementation of custom design software with an integrated slicer program that replaces less intuitive computer aided design and slicer software tools. Devices are designed in the program by assembling parameterized microfluidic building blocks. The fabrication process and flow control within 3D printed devices were demonstrated with a gradient generator and two droplet generator designs. Precise control over the printing process allowed 3D microfluidics to be printed in a single step by extruding bridge structures to 'jump-over' channels in the same plane. This strategy was shown to integrate with conventional nanofabrication strategies to simplify the operation of a platform that incorporates both nanoscale features and 3D printed microfluidics.
Robustness of Ability Estimation to Multidimensionality in CAST with Implications to Test Assembly
ERIC Educational Resources Information Center
Zhang, Yanwei; Nandakumar, Ratna
2006-01-01
Computer Adaptive Sequential Testing (CAST) is a test delivery model that combines features of traditional paper-and-pencil testing and item-based computerized adaptive testing (CAT). The basic structure of CAST is a panel composed of multiple testlets adaptively administered to examinees at different stages. Current applications…
ERIC Educational Resources Information Center
Bruno, Sam J., Ed.; Pettit, John D., Jr., Ed.
These conference proceedings contain the following 23 presentations: "Development of a Communication Skill Model Using Interpretive Structural Modeling" (Karen S. Nantz and Linda Gammill); "The Coincidence of Needs: An Inventional Model for Audience Analysis" (Gina Burchard); "A Computer Algorithm for Measuring Readability" (Terry D. Lundgren);…
ERIC Educational Resources Information Center
Pulz, Michael; Lusti, Markus
PROJECTTUTOR is an intelligent tutoring system that enhances conventional classroom instruction by teaching problem solving in project planning. The domain knowledge covered by the expert module is divided into three functions: structural analysis identifies the activities that make up the project; time analysis computes the earliest and latest…
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which combines conventional chroma subsampling with distortion-minimization-based luma modification, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, Us and Vs. Based on Us, Vs, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to handle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure is widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen-content test image sets, the experimental results demonstrate that, in high efficiency video coding, the proposed hybrid method substantially improves the quality of the reconstructed RGB images, in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR, when compared with existing chroma subsampling schemes.
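For context, conventional 4:2:0 subsampling reduces each 2×2 chroma block to a single value; a common choice, assumed here, is the block mean (the paper instead selects the subsampled values jointly with a luma modification that minimizes CPSNR). A minimal NumPy sketch:

```python
import numpy as np

def subsample_420(chroma):
    """Conventional 4:2:0-style chroma subsampling: replace every 2x2
    chroma block with a single value (here, the block mean), halving
    the chroma resolution in both dimensions."""
    h, w = chroma.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    blocks = chroma.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```

Applying this to the U and V planes while leaving Y at full resolution reproduces the 4:2:0 layout the paper's luma-modification theorem then compensates for.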
Computationally-Guided Synthetic Control over Pore Size in Isostructural Porous Organic Cages
Slater, Anna G.; Reiss, Paul S.; Pulido, Angeles; ...
2017-06-20
The physical properties of 3-D porous solids are defined by their molecular geometry. Hence, precise control of pore size, pore shape, and pore connectivity are needed to tailor them for specific applications. However, for porous molecular crystals, the modification of pore size by adding pore-blocking groups can also affect crystal packing in an unpredictable way. This precludes strategies adopted for isoreticular metal-organic frameworks, where addition of a small group, such as a methyl group, does not affect the basic framework topology. Here, we narrow the pore size of a cage molecule, CC3, in a systematic way by introducing methyl groups into the cage windows. Computational crystal structure prediction was used to anticipate the packing preferences of two homochiral methylated cages, CC14-R and CC15-R, and to assess the structure-energy landscape of a CC15-R/CC3-S cocrystal, designed such that both component cages could be directed to pack with a 3-D, interconnected pore structure. The experimental gas sorption properties of these three cage systems agree well with physical properties predicted by computational energy-structure-function maps.
Structural implications of hERG K+ channel block by a high-affinity minimally structured blocker
Helliwell, Matthew V.; Zhang, Yihong; El Harchi, Aziza; Du, Chunyun; Hancox, Jules C.; Dempsey, Christopher E.
2018-01-01
Cardiac potassium channels encoded by human ether-à-go-go–related gene (hERG) are major targets for structurally diverse drugs associated with acquired long QT syndrome. This study characterized hERG channel inhibition by a minimally structured high-affinity hERG inhibitor, Cavalli-2, composed of three phenyl groups linked by polymethylene spacers around a central amino group, chosen to probe the spatial arrangement of side chain groups in the high-affinity drug-binding site of the hERG pore. hERG current (IhERG) recorded at physiological temperature from HEK293 cells was inhibited with an IC50 of 35.6 nM with time and voltage dependence characteristic of blockade contingent upon channel gating. Potency of Cavalli-2 action was markedly reduced for attenuated inactivation mutants located near (S620T; 54-fold) and remote from (N588K; 15-fold) the channel pore. The S6 Y652A and F656A mutations decreased inhibitory potency 17- and 75-fold, respectively, whereas T623A and S624A at the base of the selectivity filter also decreased potency (16- and 7-fold, respectively). The S5 helix F557L mutation decreased potency 10-fold, and both F557L and Y652A mutations eliminated voltage dependence of inhibition. Computational docking using the recent cryo-EM structure of an open channel hERG construct could only partially recapitulate experimental data, and the high dependence of Cavalli-2 block on Phe-656 is not readily explainable in that structure. A small clockwise rotation of the inner (S6) helix of the hERG pore from its configuration in the cryo-EM structure may be required to optimize Phe-656 side chain orientations compatible with high-affinity block. PMID:29545312
NASA Technical Reports Server (NTRS)
Buchanan, H. J.
1983-01-01
Work performed in the Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach which supports a modular or building block approach to the buildup of space platforms are discussed. A concept has been developed and tested in a three-axis computer simulation using a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications for evaluating several algorithms that control the shape of Large Space Structures.
Simulating chemistry using quantum computers.
Kassal, Ivan; Whitfield, James D; Perdomo-Ortiz, Alejandro; Yung, Man-Hong; Aspuru-Guzik, Alán
2011-01-01
The difficulty of simulating quantum systems, well known to quantum chemists, prompted the idea of quantum computation. One can avoid the steep scaling associated with the exact simulation of increasingly large quantum systems on conventional computers by mapping the quantum system to another, more controllable one. In this review, we discuss to what extent the ideas in quantum computation, now a well-established field, have been applied to chemical problems. We describe algorithms that achieve significant advantages for the electronic-structure problem, the simulation of chemical dynamics, protein folding, and other tasks. Although theory is still ahead of experiment, we outline recent advances that have led to the first chemical calculations on small quantum information processors.
Small-Molecule “BRCA1-Mimetics” Are Antagonists of Estrogen Receptor-α
Ma, Yongxian; Tomita, York; Preet, Anju; Clarke, Robert; Englund, Erikah; Grindrod, Scott; Nathan, Shyam; De Oliveira, Eliseu; Brown, Milton L.
2014-01-01
Context: Resistance to conventional antiestrogens is a major cause of treatment failure and, ultimately, death in breast cancer. Objective: The objective of the study was to identify small-molecule estrogen receptor (ER)-α antagonists that work differently from tamoxifen and other selective estrogen receptor modulators. Design: Based on in silico screening of a pharmacophore database using a computed model of the BRCA1-ER-α complex (with ER-α liganded to 17β-estradiol), we identified a candidate group of small-molecule compounds predicted to bind to a BRCA1-binding interface separate from the ligand-binding pocket and the coactivator binding site of ER-α. Among 40 candidate compounds, six inhibited estradiol-stimulated ER-α activity by at least 50% in breast carcinoma cells, with IC50 values ranging between 3 and 50 μM. These ER-α inhibitory compounds were further studied by molecular and cell biological techniques. Results: The compounds strongly inhibited ER-α activity at concentrations that yielded little or no nonspecific toxicity, but they produced only a modest inhibition of progesterone receptor activity. Importantly, the compounds blocked proliferation and inhibited ER-α activity about equally well in antiestrogen-sensitive and antiestrogen-resistant breast cancer cells. Representative compounds disrupted the interaction of BRCA1 and ER-α in the cultured cells and blocked the interaction of ER-α with the estrogen response element. However, the compounds had no effect on the total cellular ER-α levels. Conclusions: These findings suggest that we have identified a new class of ER-α antagonists that work differently from conventional antiestrogens (eg, tamoxifen and fulvestrant). PMID:25264941
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. Symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitate such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function and subroutine calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both the geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
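As a toy illustration of the constraint-propagation idea described above (a declarative relation that propagates values in any direction, unlike a one-way assignment), the sketch below encodes a single product constraint such as V = I·R; it is an assumption-laden simplification, not Rubber Airplane's actual machinery.

```python
class ProductConstraint:
    """A multi-directional constraint v = i * r: assigning any two of
    the three variables propagates a value to the third, in whichever
    direction the data allow, unlike a one-way assignment statement."""
    def __init__(self):
        self.values = {"v": None, "i": None, "r": None}

    def set(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        v, i, r = (self.values[k] for k in ("v", "i", "r"))
        if v is None and None not in (i, r):
            self.values["v"] = i * r          # forward: product
        elif i is None and None not in (v, r) and r != 0:
            self.values["i"] = v / r          # backward: divide out r
        elif r is None and None not in (v, i) and i != 0:
            self.values["r"] = v / i          # backward: divide out i
```

A real constraint network would chain many such relations, re-propagating whenever any shared variable changes.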
Modelling of thick composites using a layerwise laminate theory
NASA Technical Reports Server (NTRS)
Robbins, D. H., Jr.; Reddy, J. N.
1993-01-01
The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total number of degrees of freedom is comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g., simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.
Novel Designs of Quantum Reversible Counters
NASA Astrophysics Data System (ADS)
Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang
2016-11-01
Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of the sequential circuit but also an essential part of the quantum circuit system. In this paper, we design two kinds of novel reversible counters. To construct the counters, the innovative reversible T Flip-flop Gate (TFG), T flip-flop block (T_FF) and JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), the counters have been modeled and verified, and the simulation results validate their logic structures. Compared to existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), our designs perform better than the others, and they can serve as important storage components in future low-power computing systems.
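The defining property of the reversible gates used in such counters is that their truth tables are bijections, so inputs can always be recovered from outputs. A small sketch (using the standard Toffoli gate, not the paper's TFG/T_FF/JK_FF blocks) checks this property by enumeration:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target bit c when both controls are 1."""
    return a, b, c ^ (a & b)

def is_reversible(gate, width):
    """A gate with equal input/output width is reversible iff its truth
    table is a bijection, i.e. the outputs over all inputs are distinct."""
    outputs = {gate(*bits) for bits in product((0, 1), repeat=width)}
    return len(outputs) == 2 ** width
```

Outputs a design does not use downstream are the "garbage outputs" counted by the GBO metric; an irreversible gate (one that erases information) fails this bijectivity check.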
Queueing analysis of a canonical model of real-time multiprocessors
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.
1983-01-01
A logical classification of multiprocessor structures from the point of view of control applications is presented. The response time distribution is then computed for a canonical model of a real-time multiprocessor, which is approximated by a blocking model. Two separate models are derived: one from the system's point of view, and the other from the point of view of an incoming task.
A VLSI-Based High-Performance Raster Image System.
1986-05-08
and data in broadcast form to the array of memory chips in the frame buffer, shown in the bottom block. This is simply a physical structure to hold up...Principal Investigator: John Poulton Collaboration on algorithm development: Prof. Jack Goldfeather (Dept. of Mathematics, Carleton College ...1983) Cheng-Hong Hsieh (MS, Computer Science, May, 1985) Jeff P. Hultquist Susan Spach Undergraduate Research Assistant: Sonya Holder (BS, Physics, May
NASA Technical Reports Server (NTRS)
Evans, A. B.; Lee, L. L.
1985-01-01
This User Guide provides a general introduction to the structure, use, and handling of magnetic tapes at Langley Research Center (LaRC). The topics covered are tape terminology, physical characteristics, error prevention and detection, and creating, using, and maintaining tapes. Supplementary documentation is referenced where it might be helpful. The documentation is included for the tape utility programs, BLOCK, UNBLOCK, and TAPEDMP, which are available at the Central Scientific Computing Complex at LaRC.
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement, rather than the speed of floating-point operations per processor, is the overwhelming bottleneck to scalable performance. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, and nuclear data lookups), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort, referred to as Multi-Physics on Multi-Core, to explore ideas for code design pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion, material heat conduction, and block-structured AMR. We will report on our progress to date.
The Numerical Simulation of Time Dependent Flow Structures Over a Natural Gravel Surface.
NASA Astrophysics Data System (ADS)
Hardy, R. J.; Lane, S. N.; Ferguson, R. I.; Parsons, D. R.
2004-05-01
Research undertaken over the last few years has demonstrated the importance of the structure of gravel river beds for understanding the interaction between fluid flow and sediment transport processes. This includes the observation of periodic high-speed fluid wedges interconnected by low-speed flow regions. Our understanding of these flows has been enhanced significantly through a series of laboratory experiments and supported by field observations. However, the potential of high resolution three dimensional Computational Fluid Dynamics (CFD) modeling has yet to be fully developed. This is largely the result of the difficulty of designing numerically stable meshes for complex bed topographies and of the fact that Reynolds-averaged turbulence schemes are typically applied. This paper develops two novel techniques for dealing with these issues. The first is the development and validation of a method for representing the complex surface topography of gravel-bed rivers in high resolution three-dimensional computational fluid dynamic models. This is based upon a porosity treatment with a regular structured grid and the application of a porosity modification to the mass conservation equation in which: fully blocked cells are assigned a porosity of zero; fully unblocked cells are assigned a porosity of one; and partly blocked cells are assigned a porosity of between 0 and 1, according to the percentage of the cell volume that is blocked. The second is the application of Large Eddy Simulation (LES), which enables time dependent flow structures to be numerically predicted over the complex bed topographies. The regular structured grid with the embedded porosity algorithm maintains a constant grid cell size throughout the domain, implying a constant filter scale for the LES simulation.
This enables the prediction of coherent structures (repetitive, quasi-cyclic, large-scale turbulent motions) over the gravel surface that are of a similar magnitude and frequency to those previously observed in both flume and field studies. These structures are formed by topographic forcing within the domain and scale with the flow depth. Finally, this provides the numerical framework for the prediction of sediment transport within a time dependent framework. The turbulent motions make a significant contribution to the turbulent shear stress and to the pressure fluctuations, which significantly affect the forces acting on the bed and potentially control sediment motion.
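The porosity treatment described above can be sketched for a single vertical column of a regular grid; the function name and the column-wise formulation are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def column_porosity(bed_z, z_bottom, dz, n_cells):
    """Porosity of each cell in one vertical column of a regular grid:
    0 for cells fully below the bed surface (blocked), 1 for cells fully
    above it (unblocked), and the unblocked volume fraction for the cell
    that the bed surface cuts through."""
    poros = np.empty(n_cells)
    for k in range(n_cells):
        lo = z_bottom + k * dz       # elevation of the cell bottom
        hi = lo + dz                 # elevation of the cell top
        if bed_z >= hi:              # fully blocked cell
            poros[k] = 0.0
        elif bed_z <= lo:            # fully unblocked cell
            poros[k] = 1.0
        else:                        # partly blocked cell
            poros[k] = (hi - bed_z) / dz
    return poros
```

In the mass conservation equation these porosities would scale the effective cell volumes, leaving the regular grid (and hence the LES filter scale) unchanged.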
Application of a simple cerebellar model to geologic surface mapping
Hagens, A.; Doveton, J.H.
1991-01-01
Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
Aerodynamic and structural studies of joined-wing aircraft
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Smith, Stephen; Gallman, John
1991-01-01
A method for rapidly evaluating the structural and aerodynamic characteristics of joined-wing aircraft was developed and used to study the fundamental advantages attributed to this concept. The technique involves a rapid turnaround aerodynamic analysis method for computing minimum trimmed drag combined with a simple structural optimization. A variety of joined-wing designs are compared on the basis of trimmed drag, structural weight, and, finally, trimmed drag with fixed structural weight. The range of joined-wing design parameters resulting in best cruise performance is identified. Structural weight savings and net drag reductions are predicted for certain joined-wing configurations compared with conventional cantilever-wing configurations.
The application of artificial intelligence in the optimal design of mechanical systems
NASA Astrophysics Data System (ADS)
Poteralski, A.; Szczepanik, M.
2016-11-01
The paper is devoted to new computational techniques in mechanical optimization, where one tries to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past were incapable of giving a low-cost and complete solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with an application of bio-inspired methods, such as evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in an acoustics problem modeled by the MFS.
A machine learning approach for classification of anatomical coverage in CT
NASA Astrophysics Data System (ADS)
Wang, Xiaoyong; Lo, Pechin; Ramakrishna, Bharath; Goldin, Johnathan; Brown, Matthew
2016-03-01
Automatic classification of anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer-aided diagnosis systems. The traditional way to identify scans through DICOM headers has various limitations due to manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach where multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks and for each block the mean intensity was computed. This resulted in a 1 × k³ feature vector for each scan. The feature vectors were then used to train an SVM-based classifier. In this feasibility study, four classifiers were built to identify anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. Area under the ROC curve (AUC) of the testing set was measured to evaluate the performance in a two-fold cross validation setting. Our results showed good classification performance with an average AUC of 0.96.
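The k × k × k block-mean feature extraction described above lends itself to a short sketch; the function name and the trimming of dimensions not divisible by k are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def block_mean_features(scan, k):
    """Split a registered 3-D scan into k x k x k non-overlapping blocks
    and return the per-block mean intensities as a 1 x k**3 feature
    vector; each dimension is first trimmed to a multiple of k."""
    d0, d1, d2 = ((s // k) * k for s in scan.shape)
    v = scan[:d0, :d1, :d2].astype(float)
    # reshape so axes 1, 3, 5 index the voxels inside each block
    blocks = v.reshape(k, d0 // k, k, d1 // k, k, d2 // k)
    return blocks.mean(axis=(1, 3, 5)).reshape(1, k ** 3)
```

The resulting row vectors would then be stacked and fed to an SVM (e.g. scikit-learn's `SVC`) in a one-vs-rest arrangement.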
Embracing the quantum limit in silicon computing.
Morton, John J L; McCamey, Dane R; Eriksson, Mark A; Lyon, Stephen A
2011-11-16
Quantum computers hold the promise of massive performance enhancements across a range of applications, from cryptography and databases to revolutionary scientific simulation tools. Such computers would make use of the same quantum mechanical phenomena that pose limitations on the continued shrinking of conventional information processing devices. Many of the key requirements for quantum computing differ markedly from those of conventional computers. However, silicon, which plays a central part in conventional information processing, has many properties that make it a superb platform around which to build a quantum computer.
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of combinations of image-similarity block-match metrics and physical models. PMID:24694135
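The block-matching step that underlies the method (finding, for each block, the displacement that optimizes an image similarity metric) can be illustrated with an exhaustive 2-D search under a sum-of-squared-differences metric; the names and the SSD choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_match(fixed, moving, center, half, search):
    """Return the integer displacement (di, dj) of the block of
    half-width `half` around `center` in `fixed` that best matches
    `moving` under sum-of-squared-differences, searched exhaustively
    over +/- `search` pixels."""
    ci, cj = center
    blk = fixed[ci - half:ci + half + 1, cj - half:cj + half + 1]
    best, best_d = np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            cand = moving[ci + di - half:ci + di + half + 1,
                          cj + dj - half:cj + dj + half + 1]
            ssd = float(((blk - cand) ** 2).sum())
            if ssd < best:
                best, best_d = ssd, (di, dj)
    return best_d
```

In the paper's framework, match pairs produced this way would be the data to which the B-spline displacement field is fit, with the l1-minimal perturbation selecting which pairs the fit must honor.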
A 2d Block Model For Landslide Simulation: An Application To The 1963 Vajont Case
NASA Astrophysics Data System (ADS)
Tinti, S.; Zaniboni, F.; Manucci, A.; Bortolucci, E.
A 2D block model to study the motion of a sliding mass is presented. The slide is partitioned into a matrix of blocks whose bases are quadrilaterals. The blocks move on a specified sliding surface and follow a trajectory that is computed by the model. The forces acting on the blocks are gravity, basal friction, buoyancy in the case of underwater motion, and interaction with neighbouring blocks. At any time step, the position of the blocks on the sliding surface is determined in curvilinear (local) co-ordinates by computing the position of the vertices of the quadrilaterals and the position of the block centre of mass. Mathematically, the topology of the system is invariant during the motion, which means that the number of blocks is constant and that each block always has the same neighbours. Physically, this means that blocks are allowed to change form, but not to penetrate each other, coalesce, or split. The change of form is compensated by a change of height, under the computational assumption that the block volume is constant during motion: consequently, lateral expansion or contraction yields, respectively, a reduction or an increase in block height. This model is superior to the analogous 1D model, where the mass is partitioned into a chain of interacting blocks. 1D models require the a priori specification of the sliding path, that is, of the trajectory of the blocks, which the 2D block model supplies as one of its outputs. In continuation of previous studies on the catastrophic slide of Vajont that occurred in 1963 in northern Italy and caused more than 2000 victims, the 2D block model has been applied to the Vajont case. The results are compared to the outcome of the 1D model and, more importantly, to the observational data concerning the deposit position and morphology. The agreement between simulation and data is found to be quite good.
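The basic force balance driving each block (gravity along the slope resisted by Coulomb basal friction) can be illustrated for a single rigid block on an inclined plane; this is a deliberately minimal sketch, not the paper's curvilinear multi-block scheme, and the function name is invented for illustration.

```python
import math

def slide_velocity(theta_deg, mu, t, g=9.81):
    """Velocity after time t of a single rigid block released from rest
    on a plane inclined at theta_deg, with Coulomb friction coefficient
    mu: acceleration a = g (sin(theta) - mu cos(theta)), clamped to zero
    when friction is strong enough to prevent motion."""
    th = math.radians(theta_deg)
    a = g * (math.sin(th) - mu * math.cos(th))
    return max(a, 0.0) * t
```

The full model additionally resolves buoyancy for the subaqueous reach and inter-block forces that keep neighbouring blocks from interpenetrating.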
3DGRAPE - THREE DIMENSIONAL GRIDS ABOUT ANYTHING BY POISSON'S EQUATION
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1994-01-01
The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids. 3DGRAPE is designed to make computational grids in or about almost any shape. These grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. 3DGRAPE uses zones to solve the problem of warping one cube into the physical domain in real-world computational fluid dynamics problems. In a zonal approach, a physical domain is divided into regions, each of which maps into its own computational cube. It is believed that even the most complicated physical region can be divided into zones, and since it is possible to warp a cube into each zone, a grid generator which is oriented to zones and allows communication across zonal boundaries (where appropriate) solves the problem of topological complexity. 3DGRAPE expects to read in already-distributed x,y,z coordinates on the bodies of interest, coordinates which will remain fixed during the entire grid-generation process. The 3DGRAPE code makes no attempt to fit given body shapes and redistribute points thereon. Body-fitting is a formidable problem in itself. The user must either be working with some simple analytical body shape, upon which a simple analytical distribution can be easily effected, or must have available some sophisticated stand-alone body-fitting software. 3DGRAPE does not require the user to supply the block-to-block boundaries nor the shapes of the distribution of points. 3DGRAPE will typically supply those block-to-block boundaries simply as surfaces in the elliptic grid. 
Thus at block-to-block boundaries the following conditions are obtained: (1) grid lines will match up as they approach the block-to-block boundary from either side, (2) grid lines will cross the boundary with no slope discontinuity, (3) the spacing of points along the line piercing the boundary will be continuous, (4) the shape of the boundary will be consistent with the surrounding grid, and (5) the distribution of points on the boundary will be reasonable in view of the surrounding grid. 3DGRAPE offers a powerful building-block approach to complex 3-D grid generation, but is a low-level tool. Users may build each face of each block as they wish, from a wide variety of resources. 3DGRAPE uses point-successive-over-relaxation (point-SOR) to solve the Poisson equations. This method is slow, although it does vectorize nicely. Any number of sophisticated graphics programs may be used on the stored output file of 3DGRAPE, though it lacks interactive graphics. Versatility was a prominent consideration in developing the code. The block structure allows a great latitude in the problems it can treat. As the acronym implies, this program should be able to handle just about any physical region into which a computational cube or cubes can be warped. 3DGRAPE was written in FORTRAN 77 and should be machine independent. It was originally developed on a Cray under COS and tested on a MicroVAX 3200 under VMS 5.1.
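The point-SOR iteration that 3DGRAPE uses for its Poisson equations can be illustrated on the simplest related case, a 2-D Laplace problem with fixed (Dirichlet) boundary values; the function and its parameters are illustrative, and the full code solves the inhomogeneous 3-D equations with automatically chosen source terms.

```python
import numpy as np

def sor_laplace(u, omega=1.5, iters=500):
    """Point-successive-over-relaxation for the 2-D Laplace equation on
    a regular grid with fixed boundary values: each interior point is
    over-relaxed toward the average of its four neighbours, sweeping in
    place (Gauss-Seidel ordering)."""
    u = u.copy()
    for _ in range(iters):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1])
                u[i, j] += omega * (gs - u[i, j])
    return u
```

For a 5 × 5 grid with the top boundary row held at 1 and the other boundaries at 0, the centre value relaxes to 0.25 by the four-fold rotation symmetry of the problem.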
2D automatic body-fitted structured mesh generation using advancing extraction method
NASA Astrophysics Data System (ADS)
Zhang, Yaoxin; Jia, Yafei
2018-01-01
This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsula or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain in convex polygon shape in each level can be extracted in an advancing scheme. In this paper, several examples were used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.
Understanding Diffusion in Hierarchical Zeolites with House-of-Cards Nanosheets.
Bai, Peng; Haldoupis, Emmanuel; Dauenhauer, Paul J; Tsapatsis, Michael; Siepmann, J Ilja
2016-08-23
Introducing mesoporosity to conventional microporous sorbents or catalysts is often proposed as a solution to enhance their mass transport rates. Here, we show that diffusion in these hierarchical materials is more complex and exhibits non-monotonic dependence on sorbate loading. Our atomistic simulations of n-hexane in a model system containing microporous nanosheets and mesopore channels indicate that diffusivity can be smaller than in a conventional zeolite with the same micropore structure, and this observation holds true even if we confine the analysis to molecules completely inside the microporous nanosheets. Only at high sorbate loadings or elevated temperatures, when the mesopores begin to be sufficiently populated, does the overall diffusion in the hierarchical material exceed that in conventional microporous zeolites. Our model system is free of structural defects, such as pore blocking or surface disorder, that are typically invoked to explain slower-than-expected diffusion phenomena in experimental measurements. Examination of free energy profiles and visualization of molecular diffusion pathways demonstrates that the large free energy cost (mostly enthalpic in origin) for escaping from the microporous region into the mesopores leads to more tortuous diffusion paths and causes this unusual transport behavior in hierarchical nanoporous materials. This knowledge allows us to re-examine zero-length-column chromatography data and show that these experimental measurements are consistent with the simulation data when the crystallite size instead of the nanosheet thickness is used for the nominal diffusional length.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
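The centralized power iteration on which the GPM builds can be sketched as follows; in the decentralized setting the matrix-vector product and the normalization would be realized through average consensus (or CoMAC) rather than computed at a single node, and the function here is a generic illustration rather than the paper's algorithm.

```python
import numpy as np

def power_method(C, iters=200, seed=0):
    """Estimate the dominant eigenvalue and eigenvector of a symmetric
    covariance matrix C by repeated multiplication and normalization;
    the eigenvalue is recovered as the Rayleigh quotient v' C v."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(C.shape[0])
    for _ in range(iters):
        w = C @ v
        v = w / np.linalg.norm(w)
    return float(v @ C @ v), v
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is why well-separated spectra (as in occupied-vs-idle spectrum sensing) suit the method.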
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan
2017-01-01
In this paper, we consider the direction of arrival (DOA) estimation issue of noncircular (NC) source in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and the real-valued data are obtained by utilizing unitary transformation. Then a real-valued block sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve the enhanced sparsity of solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because of using the noncircular properties of signals to extend the virtual array aperture and an additional real structure to suppress the noise, the proposed method provides better performance compared with the conventional sparse recovery based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770
LINCS: Livermore's network architecture. [Octopus computing network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.
1982-01-01
Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system, such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware; the functions and structure of each of the seven layers of LINCS protocol; and the reasons why we have designed our own protocols and why we are dissatisfied with the directions that current protocol standards are taking.
Wang, Hao; Liu, Kan; Chen, Kuan-Ju; Lu, Yujie; Wang, Shutao; Lin, Wei-Yu; Guo, Feng; Kamei, Ken-ichiro; Chen, Yi-Chun; Ohashi, Minori; Wang, Mingwei; Garcia, Mitch André; Zhao, Xing-Zhong; Shen, Clifton K.-F.; Tseng, Hsian-Rong
2010-01-01
Nanoparticles are regarded as promising transfection reagents for effective and safe delivery of nucleic acids into specific types of cells or tissues, providing an alternative manipulation/therapy strategy to viral gene delivery. However, the current process of searching for novel delivery materials is limited by conventional low-throughput and time-consuming multistep synthetic approaches. Additionally, conventional approaches are frequently accompanied by unpredictability and continual optimization refinements, impeding flexible generation of material diversity and creating a major obstacle to achieving high transfection performance. Here we have demonstrated a rapid developmental pathway toward highly efficient gene delivery systems by leveraging the powers of a supramolecular synthetic approach and a custom-designed digital microreactor. Using the digital microreactor, broad structural/functional diversity can be programmed into a library of DNA-encapsulated supramolecular nanoparticles (DNA⊂SNPs) by systematically altering the mixing ratios of molecular building blocks and a DNA plasmid. In vitro transfection studies with the DNA⊂SNPs library identified the DNA⊂SNPs with the highest gene transfection efficiency, which can be attributed to cooperative effects of the structures and surface chemistry of DNA⊂SNPs. We envision that such a rapid developmental pathway can be adopted for generating nanoparticle-based vectors for delivery of a variety of loads. PMID:20925389
Composite Structure Modeling and Analysis of Advanced Aircraft Fuselage Concepts
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek; Sorokach, Michael R.
2015-01-01
NASA Environmentally Responsible Aviation (ERA) project and the Boeing Company are collaborating to advance the unitized damage arresting composite airframe technology with application to the Hybrid-Wing-Body (HWB) aircraft. The testing of a HWB fuselage section with Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) construction is presently being conducted at NASA Langley. Based on lessons learned from previous HWB structural design studies, improved finite-element models (FEM) of the HWB multi-bay and bulkhead assembly are developed to evaluate the performance of the PRSEUS construction. In order to assess the comparative weight reduction benefits of the PRSEUS technology, conventional skin-stringer-frame models of cylindrical and double-bubble section fuselage concepts are developed. Stress analysis with design cabin-pressure load and scenario based case studies are conducted for design improvement in each case. Alternate analysis with stitched composite hat-stringers and C-frames are also presented, in addition to the foam-core sandwich frame and pultruded rod-stringer construction. The FEM structural stress, strain and weights are computed and compared for relative weight/strength benefit assessment. The structural analysis and specific weight comparison of these stitched composite advanced aircraft fuselage concepts demonstrated that the pressurized HWB fuselage section assembly can be structurally as efficient as the conventional cylindrical fuselage section with composite stringer-frame and PRSEUS construction, and significantly better than the conventional aluminum construction and the double-bubble section concept.
Conformal blocks from Wilson lines with loop corrections
NASA Astrophysics Data System (ADS)
Hikida, Yasuaki; Uetoko, Takahiro
2018-04-01
We compute the conformal blocks of the Virasoro minimal model or its WN extension with large central charge from Wilson line networks in a Chern-Simons theory including loop corrections. In our previous work, we offered a prescription to regularize divergences from loops attached to Wilson lines. In this paper, we generalize our method with the prescription by dealing with more general operators for N =3 and apply it to the identity W3 block. We further compute general light-light blocks and heavy-light correlators for N =2 with the Wilson line method and compare the results with known ones obtained using a different prescription. We briefly discuss general W3 blocks.
Singh, Preet Mohinder; Borle, Anuradha; Kaur, Manpreet; Trikha, Anjan; Sinha, Ashish
2018-01-01
Thoracic interfascial plane blocks and their modifications (PECS) have recently gained popularity for their analgesic potential during breast surgery. We evaluate/consolidate the evidence on the opioid-sparing effect of PECS blocks in comparison with conventional intravenous analgesia (IVA) and paravertebral block (PVB). Prospective, randomized controlled trials comparing PECS block to conventional IVA or PVB in patients undergoing breast surgery published till June 2017 were searched in the medical database. Comparisons were made for 24-h postoperative morphine consumption and intraoperative fentanyl-equivalent consumption. The final analysis included nine trials (PECS vs. IVA: 4 trials; PECS vs. PVB: 5 trials). PECS block showed a decreased intraoperative fentanyl consumption over IVA by 49.20 mcg (95% confidence interval [CI] = 42.67-55.74) (I² = 98.47%, P < 0.001) and over PVB by 15.88 mcg (95% CI = 12.95-18.81) (I² = 95.51%, P < 0.001). Postoperative 24-h morphine consumption with PECS block was lower than with IVA by 7.66 mg (95% CI = 6.23-9.10) (I² = 63.15%, P < 0.001) but was higher than the PVB group by 1.26 mg (95% CI = 0.91-1.62) (I² = 99.53%, P < 0.001). Two cases of pneumothorax were reported with PVB, and no complication was reported in any other group. Use of PECS block and its modifications with general anesthesia for breast surgery has a significant opioid-sparing effect intraoperatively and during the first 24 h after surgery. It also has a higher intraoperative opioid-sparing effect when compared to PVB. During the first postoperative day, PVB has slightly greater morphine-sparing potential, which may, however, be associated with higher complication rates. The present PECS block techniques show marked interstudy variations and need standardization.
Diagnostic efficacy of cell block method for vitreoretinal lymphoma.
Kase, Satoru; Namba, Kenichi; Iwata, Daiju; Mizuuchi, Kazuomi; Kitaichi, Nobuyoshi; Tagawa, Yoshiaki; Okada-Kanno, Hiromi; Matsuno, Yoshihiro; Ishida, Susumu
2016-03-17
Vitreoretinal lymphoma (VRL) is a life- and sight-threatening disorder. The aim of this study was to analyze the usefulness of the cell block method for diagnosis of VRL. Sixteen eyes in 12 patients with VRL, and 4 eyes in 4 patients with idiopathic uveitis presenting with vitreous opacity were enrolled in this study. Both undiluted vitreous and diluted fluids were isolated during micro-incision vitrectomy. Cell block specimens were prepared in 19 eyes from diluted fluid containing shredding vitreous. These specimens were then submitted for HE staining as well as immunocytological analyses with antibodies against the B-cell marker CD20, the T-cell marker CD3, and cell proliferation marker Ki67. Conventional smear cytology was applied in 14 eyes with VRL using undiluted vitreous samples. The diagnosis of VRL was made based on the results of cytology, concentrations of interleukin (IL)-10 and IL-6 in undiluted vitreous, and immunoglobulin heavy chain gene rearrangement analysis. Atypical lymphoid cells were identified in 14 out of 15 cell block specimens of VRL (positive rate: 93.3 %), but in 5 out of 14 eyes in conventional smear cytology (positive rate: 35.7 %). Atypical lymphoid cells showed immunoreactivity for CD20 and Ki67. Seven cell block specimens were smear cytology-negative and cell block-positive. The cell block method showed no atypical lymphoid cells in any patient with idiopathic uveitis. Cell block specimens using diluted vitreous fluid demonstrated a high diagnostic sensitivity and a low pseudo-positive rate for the cytological diagnosis of VRL. The cell block method contributed to clear differentiation between VRL and idiopathic uveitis with vitreous opacity.
NASA Astrophysics Data System (ADS)
Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari
2017-09-01
This paper introduces a new type of piezoelectric actuator, Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here the parameters of the steel cap of Mikbal and Cymbal actuators were optimised by using genetic algorithms in combination with Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and the top diameter of the end cap profile so that their effect on displacement, blocking force and stresses could be analysed. The optimisation process was done for five Mikbal- and two Cymbal-type actuators with different diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was produced, and the measured and modelled unclamped performances were found to agree within 2.8%. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement while blocking force was decreased 57% compared with a Cymbal with the same size disc. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.
Ma, Zhenling; Wu, Xiaoliang; Yan, Li; Xu, Zhenliang
2017-01-26
With the development of space technology and improvements in remote sensor performance, high-resolution satellites are continuously launched by countries around the world. Owing to its high efficiency, large coverage, and freedom from territorial restrictions, satellite imagery has become one of the important means of acquiring geospatial information. This paper explores geometric processing of satellite imagery without ground control points (GCPs). The outcome of prior spatial triangulation is introduced into geo-positioning as repeated observations. Results from combining block adjustment with non-oriented new images indicate the feasibility of geometric positioning with repeated observations. GCPs are a must when high accuracy is demanded in conventional block adjustment; the accuracy of direct georeferencing with repeated observations but without GCPs is superior to conventional forward intersection and even close to that of conventional block adjustment with GCPs. We conclude that taking existing oriented imagery as repeated observations enhances the effective utilization of previous spatial triangulation results, allowing repeated observations to improve accuracy by increasing the base-height ratio and the number of redundant observations. Georeferencing tests using data from multiple sensors and platforms with repeated observations will be carried out in follow-up research.
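The core statistical point, that adding repeated observations (redundant rows in the adjustment) tightens the estimated parameters, can be sketched with a generic linear least-squares example. This is an assumption-level illustration, not the paper's photogrammetric adjustment model; the design matrices are random stand-ins for image observation equations.

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])   # stand-in for two geo-positioning unknowns

def observe(n):
    """Generate n noisy linear observations of x_true."""
    A = rng.standard_normal((n, 2))
    b = A @ x_true + 0.1 * rng.standard_normal(n)
    return A, b

A_new, b_new = observe(6)        # new, non-oriented observations only
A_rep, b_rep = observe(12)       # repeated observations from oriented imagery

def covariance_trace(A):
    # Parameter covariance is proportional to (A^T A)^-1; its trace
    # summarizes the estimation uncertainty.
    return np.trace(np.linalg.inv(A.T @ A))

A_all = np.vstack([A_new, A_rep])
b_all = np.concatenate([b_new, b_rep])
x_hat = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
```

Because A^T A grows by a positive-semidefinite term for every added observation block, the combined adjustment always has covariance no larger than the new-observations-only case.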
Optimized collectives using a DMA on a parallel computer
Chen, Dong [Croton On Hudson, NY; Dozsa, Gabor [Ardsley, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY
2011-02-08
Optimizing collective operations using direct memory access controller on a parallel computer, in one aspect, may comprise establishing a byte counter associated with a direct memory access controller for each submessage in a message. The byte counter includes at least a base address of memory and a byte count associated with a submessage. A byte counter associated with a submessage is monitored to determine whether at least a block of data of the submessage has been received. The block of data has a predetermined size, for example, a number of bytes. The block is processed when the block has been fully received, for example, when the byte count indicates all bytes of the block have been received. The monitoring and processing may continue for all blocks in all submessages in the message.
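The byte-counter mechanism described above can be sketched in a few lines. This is a hypothetical host-side simulation, not the patented DMA hardware: `ByteCounter`, `receive`, and `process_ready_blocks` are invented names, and the 4-byte block size is an arbitrary stand-in for the predetermined size.

```python
BLOCK_SIZE = 4  # predetermined block size in bytes (assumption)

class ByteCounter:
    """Per-submessage counter: base address plus running byte count."""
    def __init__(self, base_address, total_bytes):
        self.base_address = base_address
        self.total = total_bytes
        self.received = 0        # bytes delivered by the (simulated) DMA
        self.processed_upto = 0  # bytes already handed to the consumer

def receive(counter, nbytes):
    """Simulate the DMA engine delivering nbytes of the submessage."""
    counter.received = min(counter.received + nbytes, counter.total)

def process_ready_blocks(counter, memory, handler):
    """Process every fully received, not-yet-handled block of the submessage."""
    while counter.received - counter.processed_upto >= BLOCK_SIZE:
        start = counter.base_address + counter.processed_upto
        handler(memory[start:start + BLOCK_SIZE])
        counter.processed_upto += BLOCK_SIZE
```

A collective implementation would poll each submessage's counter in turn, processing blocks as they complete rather than waiting for the whole message.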
Novel Material Integration for Reliable and Energy-Efficient NEM Relay Technology
NASA Astrophysics Data System (ADS)
Chen, I.-Ru
Energy-efficient switching devices have become ever more important with the emergence of ubiquitous computing. NEM relays are promising candidates to complement CMOS transistors as circuit building blocks for future ultra-low-power information processing, and as such have recently attracted significant attention from the semiconductor industry and researchers. Relay technology can potentially overcome the energy-efficiency limit of conventional CMOS technology thanks to several key characteristics, including zero OFF-state leakage, abrupt switching behavior, and potentially very low active energy consumption. However, two key issues must be addressed for relay technology to reach its full potential: surface oxide formation at the contacting surfaces, which increases ON-state resistance after switching, and high switching voltages due to the strain gradient present within the relay structure. This dissertation advances NEM relay technology by investigating solutions to both of these pressing issues. Ruthenium, whose native oxide is conductive, is proposed as the contacting material to improve relay ON-state resistance stability. Ruthenium-contact relays are fabricated after overcoming several process-integration challenges, and show superior ON-state resistance stability in electrical measurements and extended device lifetime. The relay structural film is optimized via stress matching among all layers within the structure, providing a lower strain gradient (below 10^-3 μm^-1) and hence a lower switching voltage. These advancements in relay technology, along with the integration of a metallic interconnect layer, enable the demonstration of complex relay-based circuits. In addition to the experimental efforts, this dissertation theoretically analyzes the energy-efficiency limit of a NEM switch, which is generally believed to be set by the surface adhesion energy. New compact (<1 μm^2 footprint), low-voltage (<0.1 V) switch designs are proposed to overcome this limit. The results pave a pathway to scaled energy-efficient electronic device technology.
The role of ultra-fast solvent evaporation on the directed self-assembly of block polymer thin films
NASA Astrophysics Data System (ADS)
Drapes, Chloe; Nelson, G.; Grant, M.; Wong, J.; Baruth, A.
The directed self-assembly of nano-structures in block polymer thin films via solvent vapor annealing is complicated by several factors, including the evaporation rate. Solvent vapor annealing exposes a disordered film to one or more solvents in the vapor phase, increasing chain mobility and tuning surface energy, with the intention of producing an ordered structure. Recent theoretical predictions reveal that the solvent evaporation rate affects the resultant nano-structuring. In a competition between phase separation and kinetic trapping during drying, faster solvent removal can enhance the propagation of a given morphology through the bulk of the thin film down to the substrate. A recently constructed, purpose-built, computer-controlled solvent vapor annealing chamber provides control over forced solvent evaporation down to 15 ms, accomplished using pneumatically actuated nitrogen flow into and out of the chamber. Furthermore, in situ spectral reflectance, with 10 ms temporal resolution, monitors the swelling and evaporation. Here, cylinder-forming polystyrene-block-polylactide thin films were swollen with 40% (by volume) tetrahydrofuran, followed by immediate evaporation under a variety of designed conditions, including evaporation times ranging from 15 ms to several seconds and four distinct rate trajectories (linear, exponential, and combinations thereof). Atomic force microscopy reveals the free-surface and substrate-interface morphologies of the resultant films, which depend on the specific evaporation conditions. Funded by the Clare Boothe Luce Foundation and Nebraska EPSCoR.
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
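Procedure (c), Monte Carlo-plus-energy minimization, can be sketched on a toy problem: perturb the current conformation, locally minimize, and accept or reject the minimized energy by the Metropolis criterion. The 1-D multi-minimum "energy surface" below is an illustrative stand-in for a polypeptide's conformational energy; all function names and parameters are assumptions.

```python
import math
import random

def energy(x):
    # Toy surface with several local minima (stand-in for ECEPP-style energies).
    return x ** 2 + 4.0 * math.cos(3.0 * x)

def local_minimize(x, step=1e-3, iters=2000):
    # Simple gradient descent with a numerical derivative.
    for _ in range(iters):
        grad = (energy(x + step) - energy(x - step)) / (2 * step)
        x -= 0.01 * grad
    return x

def mc_minimization(x0, n_moves=200, kT=1.0, seed=0):
    rng = random.Random(seed)
    x = local_minimize(x0)
    e = energy(x)
    best_x, best_e = x, e
    for _ in range(n_moves):
        # Random perturbation followed by local minimization.
        trial = local_minimize(x + rng.gauss(0.0, 1.0))
        e_trial = energy(trial)
        # Metropolis acceptance on the *minimized* energies.
        if e_trial <= e or rng.random() < math.exp(-(e_trial - e) / kT):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e
```

Because every accepted state is itself a local minimum, the walk effectively hops between basins rather than wandering the full surface, which is what lets the method escape the nearest local minimum.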
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannon, Kevin P.; Li, Chenyang; Evangelista, Francesco A., E-mail: francesco.evangelista@emory.edu
2016-05-28
We report an efficient implementation of a second-order multireference perturbation theory based on the driven similarity renormalization group (DSRG-MRPT2) [C. Li and F. A. Evangelista, J. Chem. Theory Comput. 11, 2097 (2015)]. Our implementation employs factorized two-electron integrals to avoid storage of large four-index intermediates. It also exploits the block structure of the reference density matrices to reduce the computational cost to that of second-order Møller–Plesset perturbation theory. Our new DSRG-MRPT2 implementation is benchmarked on ten naphthyne isomers using basis sets up to quintuple-ζ quality. We find that the singlet-triplet splittings (Δ_ST) of the naphthyne isomers strongly depend on the equilibrium structures. For a consistent set of geometries, the Δ_ST values predicted by the DSRG-MRPT2 are in good agreement with those computed by the reduced multireference coupled cluster theory with singles, doubles, and perturbative triples.
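The integral-factorization idea can be illustrated with a small NumPy sketch, not the authors' code: the four-index two-electron integrals are represented as (pq|rs) = Σ_Q B[Q,p,q]·B[Q,r,s], so only three-index tensors are ever stored, and contractions are done through the auxiliary index. The toy dimensions below are assumptions.

```python
import numpy as np

n, naux = 6, 10                          # orbital and auxiliary sizes (toy)
rng = np.random.default_rng(0)
B = rng.standard_normal((naux, n, n))
B = (B + B.transpose(0, 2, 1)) / 2       # symmetrize in (p, q)

# Full four-index tensor, built here only for verification; a factorized
# implementation never forms it.
eri_full = np.einsum('Qpq,Qrs->pqrs', B, B)

# A Coulomb-like contraction J_pq = sum_rs (pq|rs) D_rs, done in two
# O(naux * n^2) steps instead of one O(n^4) step over stored integrals.
D = rng.standard_normal((n, n))
D = (D + D.T) / 2
gamma = np.einsum('Qrs,rs->Q', B, D)
J = np.einsum('Qpq,Q->pq', B, gamma)
```

Intermediate reassociation through the auxiliary index is exact here; in practice the factorization itself (density fitting or Cholesky decomposition) introduces a controlled approximation error.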
Balasubramanian, Sasikala; Paneerselvam, Elavenil; Guruprasad, T; Pathumai, M; Abraham, Simin; Krishnakumar Raja, V. B.
2017-01-01
Objective: The aim of this randomized clinical trial was to assess the efficacy of exclusive lingual nerve block (LNB) in achieving selective lingual soft-tissue anesthesia in comparison with conventional inferior alveolar nerve block (IANB). Materials and Methods: A total of 200 patients indicated for the extraction of lower premolars were recruited for the study. The samples were allocated by randomization into control and study groups. Lingual soft-tissue anesthesia was achieved by IANB and exclusive LNB in the control and study group, respectively. The primary outcome variable studied was anesthesia of the ipsilateral lingual mucoperiosteum, floor of mouth and tongue. The secondary variables assessed were (1) taste sensation immediately following administration of local anesthesia and (2) mouth opening and lingual nerve paresthesia on the first postoperative day. Results: Data analysis for descriptive and inferential statistics was performed using SPSS (IBM SPSS Statistics for Windows, Version 22.0, Armonk, NY: IBM Corp. Released 2013) and a P < 0.05 was considered statistically significant. In comparison with the control group, the study group (LNB) showed statistically significant anesthesia of the lingual gingiva of incisors, molars, anterior floor of the mouth, and anterior tongue. Conclusion: Exclusive LNB is superior to IANB in achieving selective anesthesia of lingual soft tissues. It is technically simple and associated with minimal complications compared with IANB. PMID:29264294
Control of hierarchical polymer mechanics with bioinspired metal-coordination dynamics
Grindy, Scott C.; Learsch, Robert; Mozhdehi, Davoud; Cheng, Jing; Barrett, Devin G.; Guan, Zhibin; Messersmith, Phillip B.; Holten-Andersen, Niels
2015-01-01
In conventional polymer materials, mechanical performance is typically engineered via material structure, using motifs such as polymer molecular weight, polymer branching, or copolymer-block design. Here, by means of a model system of 4-arm poly(ethylene glycol) hydrogels crosslinked with multiple, kinetically distinct dynamic metal-ligand coordinate complexes, we show that polymer materials with decoupled spatial structure and mechanical performance can be designed. By tuning the relative concentration of two types of metal-ligand crosslinks, we demonstrate control over the material’s mechanical hierarchy of energy-dissipating modes under dynamic mechanical loading, and therefore the ability to engineer a priori the viscoelastic properties of these materials by controlling the types of crosslinks rather than by modifying the polymer itself. This strategy to decouple material mechanics from structure may inform the design of soft materials for use in complex mechanical environments. PMID:26322715
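The idea of a tunable hierarchy of energy-dissipating modes can be sketched with a standard two-mode Maxwell model. This is a generic viscoelasticity illustration, not the authors' model: each metal-ligand crosslink type is assumed to contribute one Maxwell mode with its own relaxation time, weighted by its relative concentration. All parameter values are placeholders.

```python
def loss_modulus(omega, f_fast, tau_fast=1e-3, tau_slow=1.0, G0=1.0):
    """G''(omega) of a two-mode Maxwell model.

    f_fast is the fraction of fast-exchanging crosslinks; each mode
    contributes g * (omega*tau) / (1 + (omega*tau)^2).
    """
    def gpp(tau, g):
        wt = omega * tau
        return g * wt / (1.0 + wt * wt)

    return gpp(tau_fast, G0 * f_fast) + gpp(tau_slow, G0 * (1.0 - f_fast))
```

Shifting `f_fast` between 0 and 1 moves the dissipation peak between frequencies near 1/tau_slow and 1/tau_fast, mirroring how tuning the ratio of the two crosslink types redistributes energy dissipation across loading rates without changing the polymer network itself.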
Zhou, Li; Collins, Sarah; Morgan, Stephen J.; Zafar, Neelam; Gesner, Emily J.; Fehrenbach, Martin; Rocha, Roberto A.
2016-01-01
Structured clinical documentation is an important component of electronic health records (EHRs) and plays an important role in clinical care, administrative functions, and research activities. Clinical data elements serve as basic building blocks for composing the templates used for generating clinical documents (such as notes and forms). We present our experience in creating and maintaining data elements for three different EHRs (one home-grown and two commercial systems) across different clinical settings, using flowsheet data elements as examples in our case studies. We identified basic but important challenges (including naming convention, links to standard terminologies, and versioning and change management) and possible solutions to address them. We also discussed more complicated challenges regarding governance, documentation vs. structured data capture, pre-coordination vs. post-coordination, reference information models, as well as monitoring, communication and training. PMID:28269927
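The maintenance concerns named above (naming conventions, links to standard terminologies, and versioning/change management) can be made concrete with a small sketch of a flowsheet data-element record. The class and field names are hypothetical assumptions, not any EHR vendor's schema; the LOINC code shown is only an example of a terminology link.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataElement:
    name: str                       # display name following a naming convention
    terminology_code: Optional[str] # link to a standard terminology, e.g. LOINC
    version: int = 1
    deprecated: bool = False

def revise(elem: DataElement, **changes) -> DataElement:
    """Change management: every edit yields a new immutable version."""
    fields = {**elem.__dict__, **changes}
    fields["version"] = elem.version + 1
    return DataElement(**fields)
```

Making instances immutable and bumping `version` on every change keeps earlier documents, which were composed against older template versions, interpretable after the element evolves.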