Science.gov

Sample records for based fast layout

  1. Fast layout processing methodologies for scalable distributed computing applications

    NASA Astrophysics Data System (ADS)

    Kang, Chang-woo; Shin, Jae-pil; Durvasula, Bhardwaj; Seo, Sang-won; Jung, Dae-hyun; Lee, Jong-bae; Park, Young-kwan

    2012-06-01

    As the feature size shrinks to sub-20 nm, more advanced OPC technologies such as ILT and the new lithographic resolution offered by EUV become the key solutions for device fabrication. These technologies lead to file size explosions of up to hundreds of gigabytes for GDSII and OASIS files, mainly due to the addition of complicated scattering bars and the flattening of the design to compensate for long-range effects. Splitting and merging layout files has been done sequentially in typical distributed computing layout applications. This portion becomes the bottleneck, causing scalability to suffer. According to Amdahl's law, minimizing the sequential portion is the key to achieving maximum speedup. In this paper, we present scalable layout dividing and merging methodologies: skeleton-file-based querying and direct OASIS file merging. These methods not only require a minimal memory footprint but also achieve remarkable speed improvements. The skeleton file concept is novel for a distributed application requiring geometrical processing, as it allows almost pseudo-random access into the input GDSII or OASIS file. Client machines can make use of this random access to perform fast query operations. The skeleton concept also works very well for flat input layouts, which is often the case for post-OPC data. Our OASIS file merging scheme is effectively a binary file concatenation: it concatenates shape information in binary format, with only basic interpretation of the bits, at very low memory usage. We have observed that the skeleton file concept achieved a 13.5x speed improvement and used only 3.78% of the memory on the master, compared with the conventional approach of converting into an internal format. The merging speed is also very fast, at 28 MB/s, 44.5 times faster than the conventional method. On top of the fast merging speed, it is very scalable, since the merging time grows linearly.
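The scalability argument above is Amdahl's law applied to the sequential split/merge step. A minimal sketch (the serial fractions and worker counts are illustrative numbers, not from the paper) of how shrinking the sequential portion raises the achievable speedup:

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Maximum speedup for a workload whose serial fraction cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Shrinking the sequential split/merge portion raises the achievable speedup:
print(amdahl_speedup(0.10, 64))  # ~8.8x when 10% of the work is serial
print(amdahl_speedup(0.01, 64))  # ~39.3x once the serial part is minimized
```

This is why the paper targets the split/merge step specifically: with 64 workers, cutting the serial fraction from 10% to 1% more than quadruples the ceiling on speedup.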

  2. Compaction-based VLSI layout

    SciTech Connect

    Xiong, Xiao-Ming.

    1989-01-01

    Generally speaking, a compaction-based VLSI layout system consists of two major parts: (1) a symbolic editor, which maintains explicit connectivity and structural information about the circuit; and (2) a compactor, which translates the high-level description of a circuit to the detailed layout needed for fabrication and tries to make the layout as compact as possible without violating any design rules. Instead of developing a complete compaction-based VLSI layout system, this thesis presents some theoretical concepts and several new compaction techniques, such as a scan-line-based approach, which can either cooperate with a symbolic editor to form a layout system or work as a post-processing step to improve the results obtained by an existing layout system. Several related compaction problems are also formulated and solved. Based on the special properties of channel routing, the author presents a geometric method for channel compaction. For a given channel routing topology, the minimum channel height is always achieved by incorporating sliding contacts and automatically inserting the necessary jogs. The geometric compaction approach is then generalized and applied to compact the entire VLSI chip at the building-block level. With a systematic way of automatic jog insertion, he proves that under the given layout topology and design rules, the lower bound of one-dimensional compaction with automatic jog insertion is achieved by the geometric compaction algorithm. A new simultaneous two-dimensional compaction algorithm is developed, primarily for placement refinement of building-block layout. The algorithm is based on a set of defined graph operations on a mixed-adjacency graph for a given placement. The mixed-adjacency graph can be updated efficiently if the placement is represented by tiles in the geometric domain.

  3. Fast symbolic layout translocation for custom VLSI integrated circuits

    SciTech Connect

    Eichenberger, P.A.

    1986-01-01

    Symbolic layout tools have enormous potential for easing the task of custom integrated circuit layout by allowing the designer to work at a higher level of abstraction, hiding some of the complexity of full custom design. Unfortunately, the practicality of symbolic layout tools has been limited for several reasons. Most importantly, the CPU resources required to compute a full-size integrated circuit from a symbolic description are prohibitively large; this problem has been avoided either by restricting the range of applicability to a narrow class of integrated circuits, or by using a simpler translation algorithm, which reduces the quality of the output. Other problems include poor-quality layouts, insufficient user control of the generated output, and the inability to cooperate with other layout tools. These problems make symbolic design of complete chips difficult. This thesis presents an approach to the symbolic layout problem that produces high-quality layout for an arbitrary circuit without requiring excessive CPU time. The key elements of this approach are the use of hierarchy to improve CPU time, the use of wire-length minimization to improve quality, a good balance between optimizing the layout and optimizing CPU time, and a smooth transition over varying degrees of automation. The result is a symbolic layout tool that has been successfully used to lay out several chips from design-rule-independent input.

  4. Issues in Text Design and Layout for Computer Based Communications.

    ERIC Educational Resources Information Center

    Andresen, Lee W.

    1991-01-01

    Discussion of computer-based communications (CBC) focuses on issues involved with screen design and layout for electronic text, based on experiences with electronic messaging, conferencing, and publishing within the Australian Open Learning Information Network (AOLIN). Recommendations for research on design and layout for printed text are also…

  5. Directional 2D functions as models for fast layout pattern transfer verification

    NASA Astrophysics Data System (ADS)

    Torres, J. Andres; Hofmann, Mark; Otto, Oberdan

    2009-03-01

    As advanced manufacturing processes become more stable, the need to adapt new designs to fully utilize the available manufacturing technology becomes a key technological differentiator. However, such gains can often only be realized and evaluated during full-chip analysis. It has been demonstrated that the most accurate layout verification methods require application of the actual OPC recipes along with most of the mask data preparation that defines the pattern transfer characteristics of the process. Still, this method is in many instances not sufficiently fast to be used in a layout creation environment that undergoes constant updates. An analysis of typical mask data processing shows that the most CPU-intensive computations are the OPC and contour simulation steps needed to perform layout printability checks. Several researchers have tried to reduce the time it takes to compute the OPC mask by introducing matrix convolutions of the layout with empirically calibrated two-dimensional functions. However, most of these approaches do not provide a sufficient speed-up, since they only replace the OPC computation and still require a full contour computation. Another alternative is to find effective ways of pattern matching those topologies that will exhibit transfer difficulties, but such methods lack the ability to be predictive beyond their calibration data. In this paper we present a methodology that includes common resolution enhancement techniques, such as retargeting and sub-resolution assist feature insertion, and which replaces the OPC computation and subsequent contour calculation with an edge bias function based on an empirically calibrated, directional, two-dimensional function. Because the edge bias function does not provide adequate control over the corner locations, a spline-based smoothing process is applied. The outcome is a piecewise-linear curve similar to those obtained by full lithographic simulations.

  6. 7. Photographic copy of Base Basic Layout Plan, dated 12 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. Photographic copy of Base Basic Layout Plan, dated 12 February 1945, in possession of Selfridge Base Museum, Mt. Clemens, Michigan. - Selfridge Field, North of North River Road, east of Irwin Road, Mount Clemens, Macomb County, MI

  7. Pitch-based pattern splitting for 1D layout

    NASA Astrophysics Data System (ADS)

    Nakayama, Ryo; Ishii, Hiroyuki; Mikami, Koji; Tsujita, Koichiro; Yaegashi, Hidetami; Oyama, Kenichi; Smayling, Michael C.; Axelrad, Valery

    2015-07-01

    A pattern splitting algorithm for 1D gridded-design-rule layout (1D layout) for sub-10 nm node logic devices is shown. It is performed with integer linear programming (ILP) based on the conflict graph created from a grid map for each designated pitch. The relationship between the number of patterning steps and the minimum pitch is shown systematically with a sample contact-layer pattern for each node. The results show that 1D layout requires fewer patterning steps than conventional 2D layout. Moreover, an experimental result including SMO and a total integrated process with a hole repair technique is presented for the sample contact-layer pattern, whose pattern density is relatively high among the critical layers (fin, gate, local interconnect, contact, and metal).

  8. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results for an airflow inclinometer, with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in ambient temperature may cause dramatic voltage drifts in the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on its sensitivity are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element and decreases as the ambient temperature increases. The GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of the optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts aimed at improving the sensitivity of gas sensors. PMID:25897500
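The thermal-layout search above is a standard GA loop (selection, crossover, mutation). A minimal, generic sketch with elitism, one-point crossover, and bit-flip mutation; the toy objective and all parameters here are illustrative stand-ins, not the paper's PCB thermal model:

```python
import random

def ga_optimize(fitness, genome_len, pop=30, gens=60, pmut=0.1, seed=1):
    """Minimal binary GA of the kind the study applies to the PCB thermal
    layout; fitness is any user-supplied score to maximize."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        popn = scored[:2]                       # elitism: keep the best two
        while len(popn) < pop:
            a, b = rng.sample(scored[:10], 2)   # parents from the fittest pool
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            for k in range(genome_len):         # bit-flip mutation
                if rng.random() < pmut:
                    child[k] ^= 1
            popn.append(child)
    return max(popn, key=fitness)

# Toy stand-in objective (maximize the number of ones):
best = ga_optimize(sum, 20)
print(sum(best))  # climbs close to the optimum of 20 on this trivial objective
```

In the study the fitness would instead be the simulated sensitivity of a candidate PCB layout, with the genome encoding component placements.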

  9. [Land layout for lake tourism based on ecological restraint].

    PubMed

    Wang, Jian-Ying; Li, Jiang-Feng; Zou, Li-Lin; Liu, Shi-Bin

    2012-10-01

    To avoid the decrease and deterioration of lake wetlands and other ecological issues, such as lake water pollution, caused by the unreasonable exploitation of lake tourism, a land layout for the tourism development of Liangzi Lake that prioritizes the ecological security pattern was proposed, based on the minimal cumulative resistance model and GIS technology. The study area was divided into four ecological function zones, i.e., a core protection zone, an ecological buffer zone, an ecotone zone, and a human activity zone. The core protection zone was the landscape region of the ecological source. In the protection zone, no new tourism land may be added, and some of the existing basic tourism facilities should be removed while others should be upgraded. The ecological buffer zone was the landscape region with resistance values ranging from 0 to 4562. In the buffer zone, expansion of tourism land should be forbidden, the existing tourism land should be downsized, and human activities should be isolated from the ecological source by converting the human environment to the natural environment as far as possible. The ecotone zone was the landscape region with resistance values ranging from 4562 to 30797. In this zone, the existing tourism land is distributed in patches, tourism land could be expanded appropriately, and lake-forestry ecological tourism should be developed widely. The human activity zone was the landscape region with resistance values ranging from 30797 to 97334, and would be the key area for the land layout of lake tourism. It is suggested that a land layout for tourism that prioritizes the landscape ecological security pattern would be the best choice for sustainable development of the lake.
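The zoning above rests on a minimal-cumulative-resistance surface: the cost of reaching each cell from the nearest ecological source, accumulating per-cell resistance. A Dijkstra-style sketch over a toy grid (the resistance values are illustrative, not the study's GIS data):

```python
import heapq

def min_cumulative_resistance(resistance, sources):
    """Minimal-cumulative-resistance surface over a grid: least accumulated
    resistance from any source cell to every other cell (4-neighbour moves).
    A simplified stand-in for the MCR model behind the zoning thresholds."""
    rows, cols = len(resistance), len(resistance[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    pq = []
    for r, c in sources:                  # ecological source cells cost 0
        dist[r][c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return dist

grid = [[1, 5, 9],
        [1, 5, 9],
        [1, 5, 9]]
print(min_cumulative_resistance(grid, [(0, 0)]))
# [[0.0, 5.0, 14.0], [1.0, 6.0, 15.0], [2.0, 7.0, 16.0]]
```

Cells would then be binned into zones by thresholding the accumulated cost, analogous to the 0-4562 / 4562-30797 / 30797-97334 bands in the study.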

  10. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for a cabin, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.

  11. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for a cabin, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency. PMID:26448740

  12. Native-conflict-aware layout decomposition in triple patterning lithography using a bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography becomes a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm. In TP, a layout is decomposed into three masks, each followed by its own exposure and etch/freeze process. Previous works mostly focus on layout decomposition that minimizes conflicts and stitches simultaneously. However, since any native conflict forces a layout re-design or modification and a re-run of the time-consuming decomposition, an effective method for detecting native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, the conflict graph is matched against a prebuilt colored library, so that NCs can be located and highlighted quickly.
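Mask assignment here is graph coloring: each mask is a color, and a native conflict is a conflict graph with no valid coloring. A minimal backtracking sketch (a stand-in for the ILP and library-matching machinery; the conflict edges are illustrative):

```python
def split_patterns(conflicts, n_features, n_masks=3):
    """Assign each feature to one of n_masks so that no two conflicting
    (sub-minimum-pitch) features share a mask; returns None when the
    conflict graph is not colorable, i.e. a native conflict exists."""
    adj = [set() for _ in range(n_features)]
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)
    colors = [None] * n_features
    def assign(i):
        if i == n_features:
            return True
        for c in range(n_masks):
            if all(colors[j] != c for j in adj[i]):  # no neighbour shares c
                colors[i] = c
                if assign(i + 1):
                    return True
        colors[i] = None
        return False
    return colors if assign(0) else None

# A 4-cycle of conflicts decomposes onto two masks:
print(split_patterns([(0, 1), (1, 2), (2, 3), (3, 0)], 4, n_masks=2))
# [0, 1, 0, 1]
# A fully connected quadruple is a native conflict for triple patterning:
print(split_patterns([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 4))
# None
```

The paper's contribution is avoiding this per-layout search by matching each bin's conflict graph against a precolored library, so known-uncolorable subgraphs are flagged immediately.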

  13. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

    The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As verification, the spacing between two staggered wind turbines was optimized using the proposed surrogate-based methodology, and the performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the models' responses and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm was performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with far fewer fine-model simulations.
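In the proof of concept, coarse and fine models are both Jensen (top-hat) wakes differing only in the decay coefficient k. A minimal sketch of the standard Jensen deficit formula (the parameter values below are illustrative, not the study's):

```python
from math import sqrt

def jensen_velocity(u_inf, ct, rotor_radius, x, k=0.05):
    """Wind speed x metres directly downstream of a turbine under the
    Jensen (top-hat) wake model; k is the wake decay coefficient the
    study varies to build its coarse and fine models."""
    deficit = (1.0 - sqrt(1.0 - ct)) / (1.0 + k * x / rotor_radius) ** 2
    return u_inf * (1.0 - deficit)

# Free stream 8 m/s, thrust coefficient 0.8, 40 m rotor radius,
# evaluated 400 m (10 radii) downstream:
print(round(jensen_velocity(8.0, 0.8, 40.0, 400.0), 3))  # ~6.03 m/s
```

A layout optimizer then places turbines to maximize total power given these wake-reduced inflow speeds; the multi-fidelity trick is to run most evaluations with the cheap k and only correct with the expensive model via manifold mapping.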

  14. Graph-based layout analysis for PDF documents

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao

    2013-03-01

    To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digital-born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be exploited straightforwardly. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives, including text, image, and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method operates at the superpixel representation level: page text elements corresponding to vertices are used to construct an undirected graph. The Euclidean distance between adjacent vertices is applied in a top-down manner to cut the spanning tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, on the other hand, are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
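The top-down cut step amounts to single-linkage clustering: taking the components left after removing spanning-tree edges longer than a threshold, which is equivalent to a Kruskal-style union of all edges below it. A minimal sketch with illustrative glyph-box coordinates (not the paper's feature set):

```python
def cluster_elements(points, cut_distance):
    """Group page elements by Kruskal-style union of Euclidean edges no
    longer than cut_distance (equivalent to building the MST and cutting
    its longer edges), mirroring the paper's top-down graph cut."""
    n = len(points)
    edges = sorted(
        (((points[i][0] - points[j][0]) ** 2
          + (points[i][1] - points[j][1]) ** 2) ** 0.5, i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))
    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for d, i, j in edges:
        if d <= cut_distance:             # union short edges; long ones are cut
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two horizontal runs of glyph boxes separated by a large vertical gap
# fall into two text lines:
points = [(0, 0), (10, 0), (20, 0), (0, 50), (10, 50)]
print(cluster_elements(points, 15))  # [[0, 1, 2], [3, 4]]
```

The paper's bottom-up pass would then check edge orientation within each group to confirm it forms a horizontal text line.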

  15. Optimization of Orchestral Layouts Based on Instrument Directivity Patterns

    NASA Astrophysics Data System (ADS)

    Stroud, Nathan Paul

    The experience of hearing an exceptional symphony orchestra perform in an excellent concert hall can be profound and moving, causing a level of excitement not often reached for listeners. Romantic period style orchestral music, recognized for validating the use of intense emotion for aesthetic pleasure, was the last significant development in the history of the orchestra. In an age where orchestral popularity is waning, the possibility of evolving the orchestral sound in our modern era exists through the combination of our current understanding of instrument directivity patterns and their interaction with architectural acoustics. With the aid of wave field synthesis (WFS), newly proposed variations on orchestral layouts are tested virtually using a 64-channel WFS array. Each layout is objectively and subjectively compared for determination of which layout could optimize the sound of the orchestra and revitalize the excitement of the performance.

  16. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical-lithography-based manufacturing process. Although lithography-simulation-based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine-learning-based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms, because no appropriate layout feature has been defined. This paper proposes a new method to automatically extract a proper layout feature from a given layout to improve the detection performance of machine-learning-based methods. Experimental results show that a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural networks.

  17. Layout-based substitution tree indexing and retrieval for mathematical expressions

    NASA Astrophysics Data System (ADS)

    Schellenberg, Thomas; Yuan, Bo; Zanibbi, Richard

    2012-01-01

    We introduce a new system for layout-based (LaTeX) indexing and retrieval of mathematical expressions using substitution trees. Substitution trees can efficiently store and find expressions based on the similarity of their symbols, symbol layout, sub-expressions, and size. We describe our novel implementation and some of our modifications to the substitution tree indexing and retrieval algorithms. We report an experiment testing our system against the TF-IDF keyword-based system of Zanibbi and Yuan and demonstrate that, in many cases, the quality of search results returned by both systems is comparable (overall means, substitution tree vs. keyword-based: 100% vs. 89% for top 1; 48% vs. 51% for top 5; 22% vs. 28% for top 20). Overall, we present a promising first attempt at layout-based substitution tree indexing and retrieval for mathematical expressions and believe that this method will prove beneficial to the field of mathematical information retrieval.

  18. The 3Ls of Introductory Web-Based Instructional Design: Linking, Layout, and Learner Support.

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.

    This paper presents guidelines for World Wide Web-based instructional design, based on the 3Ls (i.e., linking, layout, and learner support). The first section, focusing on macro level design, discusses nodes and links, including how nodes work, determining nodes, node size, presentation format, characteristics of links, and kinds of links. The…

  19. LithoScope: Simulation Based Mask Layout Verification with Physical Resist Model

    NASA Astrophysics Data System (ADS)

    Qian, Qi-De

    2002-12-01

    Simulation-based mask layout verification and optimization is a cost-effective way to ensure high mask performance in wafer lithography. Because mask layout verification serves as a gateway to the expensive manufacturing process, the model used for verification must be more accurate than the models used upstream. In this paper, we demonstrate, for the first time, a software system for mask layout verification and optical proximity correction that employs a physical resist development model. The new system, LithoScope, predicts wafer patterning by solving optical and resist processing equations on a scale that was until recently considered impractical. Leveraging the predictive capability of the physical model, LithoScope can perform mask layout verification and optical proximity correction under a wide range of processing conditions and for any reticle enhancement technology, without the need for multiple model development. We show the ability of the physical resist model to change iso-focal bias by optimizing resist parameters, which is critical for matching the experimental process window. We present line width variation statistics and chip-level process window predictions using a practical cell layout. We show that the LithoScope model can accurately describe the resist-intensive poly gate layer patterning. This system can be used to pre-screen mask data problems before manufacturing to reduce the overall cost of the mask and the product.

  20. Enforcement of Mask Rule Compliance in Model-Based OPC'ed Layouts during Data Preparation

    NASA Astrophysics Data System (ADS)

    Meyer, Dirk H.; Vuletic, Radovan; Seidl, Alexander

    2002-12-01

    Currently available commercial model-based OPC tools do not always generate layouts which are mask rule compliant. Additional processing is required to remove mask rule violations, which are often too numerous for manual patching. Although physical verification tools can be used to remove simple mask rule violations, the results are often unsatisfactory for more complicated geometrical configurations. The subject of this paper is the development and application of a geometrical processing engine that automatically enforces mask rule compliance of the OPC'ed layout. It is designed as an add-on to a physical verification tool. The engine constructs patches, which remove mask rule violations such as notches or width violations. By employing a Mixed Integer Programming (MIP) optimization method, the edges of each patch are placed in a way that avoids secondary violations while modifying the OPC'ed layout as little as possible. A sequence of enforcement steps is applied to the layout to remove all types of mask rule violations. This approach of locally confined minimal layout modifications retains OPC corrections to a maximum amount. This method has been used successfully in production on a variety of DRAM designs for the non-array regions.

  1. Accelerator-based conversion (ABC) of weapons plutonium: Plant layout study and related design issues

    SciTech Connect

    Cowell, B.S.; Fontana, M.H.; Krakowski, R.A.; Beard, C.A.; Buksa, J.J.; Davidson, J.W.; Sailor, W.C.; Williamson, M.A.

    1995-04-01

    In preparation for and in support of a detailed R&D plan for the Accelerator-Based Conversion (ABC) of weapons plutonium, an ABC plant layout study was conducted at the level of a pre-conceptual engineering design. The plant layout is based on an adaptation of the Molten-Salt Breeder Reactor (MSBR) detailed conceptual design that was completed in the early 1970s. Although the ABC plant layout study included the accelerator equipment as an essential element, the engineering assessment focused primarily on the target; the primary system (blanket and all systems containing plutonium-bearing fuel salt); the heat-removal system (secondary-coolant-salt and supercritical-steam systems); chemical processing; operation and maintenance; containment and safety; and instrumentation and control systems. Although constrained primarily to an accelerator-driven (subcritical) variant of the MSBR system, the unique features and added flexibilities of the ABC suggest improved or alternative approaches to each of the above-listed subsystems; these, along with the key technical issues in need of resolution through a detailed R&D plan for ABC, are described on the basis of the "strawman" or "point-of-departure" plant layout that resulted from this study.

  2. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    This study examines a three-dimensional visualization program, with emphasis on an improved genetic algorithm for the layout optimization of a standard, discrete shipbuilding workshop. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an ideal equipment layout and a mathematical model whose objective is to minimize the total distance traveled between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm and yield the relevant parameters. CATIA software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is then established using the analytic hierarchy process (AHP). The optimized discrete production workshop can serve as a practical reference for the optimization and layout of digitalized production workshops.
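The minimum-logistics-cost principle reduces to a quadratic-assignment-style objective: sum over machine pairs of flow volume times travel distance. A minimal sketch (the flows and slot coordinates are hypothetical, and exhaustive search stands in for the paper's improved GA):

```python
from itertools import permutations

def logistics_cost(assignment, flow, locations):
    """Total material-travel cost for one machine-to-slot assignment:
    sum over machine pairs of flow volume x rectilinear distance."""
    cost = 0.0
    n = len(assignment)
    for i in range(n):
        for j in range(i + 1, n):
            xi, yi = locations[assignment[i]]
            xj, yj = locations[assignment[j]]
            cost += flow[i][j] * (abs(xi - xj) + abs(yi - yj))
    return cost

# Illustrative 3-machine case: upper-triangular flow volumes between
# machines, and three candidate floor slots along one aisle.
flow = [[0, 8, 1],
        [0, 0, 4],
        [0, 0, 0]]
slots = [(0, 0), (10, 0), (20, 0)]
best = min(permutations(range(3)), key=lambda a: logistics_cost(a, flow, slots))
print(best, logistics_cost(best, flow, slots))  # (0, 1, 2) 140.0
```

A GA would search this same objective over permutations when the machine count makes exhaustive enumeration infeasible.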

  3. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA.

    PubMed

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    To account for the psychological cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and following the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and layout principles for human-machine interaction interfaces were summarized as the constraints in layout design. Next, the forms of the fitness function, pheromone, and heuristic information for cabin layout optimization were studied, and a layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the layout design of the human-machine interaction interface of a drilling rig control room was taken as an example, and the optimization results showed the feasibility and effectiveness of the proposed method.

  4. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA

    PubMed Central

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    To account for the psychological and cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and following the information-processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as the constraints in layout design. Next, the expression forms of the fitness function, pheromone, and heuristic information for the layout optimization of the cabin were studied, and the layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the layout design of a drilling rig control room's human-machine interaction interface was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method. PMID:26884745

  6. Integrated layout based Monte-Carlo simulation for design arc optimization

    NASA Astrophysics Data System (ADS)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, with values set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling; SRAM designs, for example, always need extensive ground-rule waivers. Furthermore, dense design often involves a "design arc": a collection of design rules whose sum equals the critical pitch defined by the technology. Within a design arc, a single rule change can trigger a chain reaction of other rule violations. In this talk we present a methodology using Layout-Based Monte Carlo Simulation (LBMCS) with integrated multiple ground-rule checks. We apply this methodology to an SRAM word-line contact, and the result is a layout with balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.
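The core idea of a layout-based Monte Carlo rule check can be sketched very simply: perturb the edges of a drawn geometry according to process assumptions, re-run the spacing check on each sample, and estimate a fail rate instead of a pass/fail verdict. All dimensions and sigmas below are invented for illustration, not the paper's PAs.

```python
import random

# Process assumptions (PAs): nominal dimensions in nm and edge-placement sigmas
# (illustrative numbers, not from the paper).
NOMINAL_SPACE = 40.0      # drawn space between contact and gate
SIGMA_CONTACT = 2.0       # contact edge-placement sigma
SIGMA_GATE = 1.5          # gate edge-placement sigma
MIN_SPACE = 32.0          # rule: spacing below this is a predicted fail

def fail_rate(n_trials=100_000, seed=7):
    """Monte Carlo estimate of the probability that the spacing rule fails."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n_trials):
        # Each trial perturbs both edges independently, then re-runs the check.
        space = (NOMINAL_SPACE
                 + rng.gauss(0.0, SIGMA_CONTACT)
                 + rng.gauss(0.0, SIGMA_GATE))
        if space < MIN_SPACE:
            fails += 1
    return fails / n_trials
```

With these numbers the fail region sits about 3.2 combined sigmas out, so the estimated rate is small but nonzero, which is exactly the kind of balanced-risk quantity a single worst-case rule cannot express.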

  7. Towards a more accurate extraction of the SPICE netlist from MAGIC based layouts

    SciTech Connect

    Geronimo, G.D.

    1998-08-01

    The extraction of the SPICE netlist from MAGIC-based layouts is investigated. It is assumed that the layout is fully coherent with the corresponding mask representation. The extraction proceeds in three steps: (1) extraction of the .EXT file from the layout, through the MAGIC command extract; (2) extraction of the netlist from the .EXT file through the ext2spice extractor; and (3) correction of the netlist through the ext2spice.corr program. Each of these steps introduces some approximations, most of which can be optimized, and some errors, most of which can be corrected. The aim of this work is to describe each step, the approximations and errors it introduces, and the corresponding optimizations and corrections that improve the accuracy of the extraction. The HP AMOS14TB 0.5 µm process with linear capacitor and silicide block options and the corresponding SCN3MLC_SUBM.30.tech27 technology file are used in the examples.

  8. Virtual reality based support system for layout planning and programming of an industrial robotic work cell.

    PubMed

    Yap, Hwa Jen; Taha, Zahari; Dawal, Siti Zawiah Md; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated for current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes: VR-Robotic Work Cell Layout (VR-RoWL) and the VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to generate the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed so that inexperienced users can generate robot commands without damaging the robot or interrupting the production line, and the user may make numerous attempts to attain an optimum solution. A case study assembling an electronics casing was conducted in the Robotics Laboratory, and the output models were found to be compatible with commercial software without loss of information. Furthermore, the generated KUKA commands work when loaded into a commercial simulator. Operation of the actual robotic work cell shows that the remaining errors are likely due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. It is therefore concluded that the virtual-reality-based approach can be implemented in an industrial robotic work cell.

  9. Virtual Reality Based Support System for Layout Planning and Programming of an Industrial Robotic Work Cell

    PubMed Central

    Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated for current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes: VR-Robotic Work Cell Layout (VR-RoWL) and the VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to generate the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed so that inexperienced users can generate robot commands without damaging the robot or interrupting the production line, and the user may make numerous attempts to attain an optimum solution. A case study assembling an electronics casing was conducted in the Robotics Laboratory, and the output models were found to be compatible with commercial software without loss of information. Furthermore, the generated KUKA commands work when loaded into a commercial simulator. Operation of the actual robotic work cell shows that the remaining errors are likely due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. It is therefore concluded that the virtual-reality-based approach can be implemented in an industrial robotic work cell. PMID:25360663

  11. Optical path layout and moving mirrors of wavemeter based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Peng, Yuexiang; Gao, Fengyou; He, Yongli; Zheng, Hongxing

    2008-03-01

    General wavemeters based on the Michelson interferometer have only one moving arm, which limits the achievable optical path difference and cannot avoid dispersion from the beamsplitter. Moreover, a moving mirror driven by a direct-current motor and a ball screw has disadvantages such as heavy weight and unstable motion. In this paper, a better optical layout, together with a configuration and driving method for the moving mirrors, is proposed. A new optical path layout for a wavemeter based on the Michelson interferometer is presented, including two moving mirrors that form the optical path differences, a beamsplitter that splits light into transmitted and reflected beams, two reflectors, and a reference laser. The layout has two moving arms and eliminates dispersion from the beamsplitter. Using the Doppler effect, the formation of interference fringes at the photodiodes is analyzed and formulated. The Doppler effect arises from the motion of the moving mirrors; consequently, alternately dark and bright interference fringes are generated, then received and converted into electronic signals by the photodiodes. Investigation of the Doppler effect shows that these signals encode both the wavelength of the light and the velocity of the moving mirror. The structure of the moving mirrors is clarified: they consist of two pyramid prisms placed symmetrically on the driving motor. A control system that keeps the moving mirrors at constant velocity is designed; to keep the frequencies of the fringe signals stable, the mirrors must move at uniform speed. A voice coil motor (VCM) drags the moving mirror to and fro, and uniform VCM motion is realized by an optical-mechanical-electrical closed-loop feedback system in which the Doppler frequency of the reference laser serves as the standard. The PID controller comprises a parallel proportional-integral-differential operational circuit.
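The fringe-counting relation at the heart of such a wavemeter is simple: both beams traverse the same mirror displacement, so the ratio of simultaneous fringe counts gives the unknown wavelength in terms of the reference laser. A sketch (the HeNe reference value and the example counts are illustrative, not from the paper):

```python
# Fringe-counting wavelength estimate, as in Michelson-type wavemeters:
# both beams share the same mirror travel d, so d = N_ref*lam_ref/2 = N_x*lam_x/2,
# and the factor of two (or any common path multiplier) cancels in the ratio.
LAM_REF = 632.9914e-9          # HeNe reference wavelength in vacuum (m), approximate

def unknown_wavelength(n_ref: float, n_unknown: float) -> float:
    """Infer the unknown wavelength from two simultaneous fringe counts."""
    return LAM_REF * n_ref / n_unknown

# Example: 100000 reference fringes counted against 40500 unknown fringes
# gives a wavelength in the near infrared.
lam = unknown_wavelength(100_000, 40_500)
```

The dual-arm layout doubles the usable path difference, which increases both counts for a given mirror travel and so improves resolution, but the ratio above is unchanged.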

  12. Timing variability analysis for layout-dependent-effects in 28nm custom and standard cell-based designs

    NASA Astrophysics Data System (ADS)

    Hurat, Philippe; Topaloglu, Rasit O.; Nachman, Ramez; Pathak, Piyush; Condella, Jac; Madhavan, Sriram; Capodieci, Luigi

    2011-04-01

    We identify most recent sources of transistor layout dependent effects (LDE) such as stress, lithography, and well proximity effects (WPE), and outline modeling and analysis methods for 28 nm. These methods apply to custom layout, standard cell designs, and context-aware post-route analysis. We show how IC design teams can use a model-based approach to quantify and analyze variability induced by LDE. We reduce the need for guard-bands that negate the performance advantages that stress brings to advanced process technologies.

  13. High-Quality Ultra-Compact Grid Layout of Grouped Networks.

    PubMed

    Yoghourdjian, Vahan; Dwyer, Tim; Gange, Graeme; Kieffer, Steve; Klein, Karsten; Marriott, Kim

    2016-01-01

    Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes, we found that these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks. PMID:26390477
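On instances as small as the ones where exact solvers shine, the combinatorial formulation can even be solved by brute force. The sketch below minimizes total orthogonal edge length for a 4-node network placed on a 2x2 grid; the graph, the grid, and the use of exhaustive search in place of a MIP/CP/SAT solver are all illustrative assumptions.

```python
import itertools

# Toy network: a 4-cycle plus one chord (0-2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
grid = [(0, 0), (1, 0), (0, 1), (1, 1)]   # the 2x2 grid cells

def total_edge_length(assign):
    """Sum of Manhattan edge lengths for a node->cell assignment."""
    return sum(abs(grid[assign[u]][0] - grid[assign[v]][0])
               + abs(grid[assign[u]][1] - grid[assign[v]][1])
               for u, v in edges)

# Exhaustive search stands in for the exact solver on this tiny instance;
# a MIP encoding would use the same objective over assignment variables.
best = min(itertools.permutations(range(4)), key=total_edge_length)
```

For this instance the optimum places the chord endpoints on a grid diagonal so that every cycle edge has length 1, for a total of 6.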

  15. A Cut-Based Procedure For Document-Layout Modelling And Automatic Document Analysis

    NASA Astrophysics Data System (ADS)

    Dengel, Andreas R.

    1989-03-01

    With the growing degree of office automation and the decreasing cost of storage devices, it becomes more and more attractive to store optically scanned documents, such as letters or reports, in electronic form. The need for a good paper-computer interface therefore becomes increasingly important. This interface must convert paper documents into an electronic representation that captures not only their contents, but also their layout and logical structure. We propose a procedure that describes the layout of a document page by dividing it recursively into nested rectangular areas; a semantic meaning is assigned to each area by means of logical labels. The procedure is used as a basis for mapping a hierarchical document layout onto the semantic meaning of the parts of the document. We analyse the layout of a document using a best-first search in this tessellation structure, directed by a measure of similarity between the layout pattern in the model and the layout of the actual document. A hypothesis for the semantic labelling of a layout block can then be verified: the evidence either supports the hypothesis or initiates the generation of a new one. The method has been implemented in Common Lisp on a SUN 3/60 workstation and has run on a large population of office documents. The results obtained have been very encouraging and have convincingly confirmed the soundness of the approach.
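The recursive division into nested rectangles resembles the classic X-Y cut: split the page at a sufficiently wide white gap, alternating between horizontal and vertical cuts, and recurse into each side. The sketch below applies this to toy bounding boxes; the box data and gap threshold are invented, and the paper's full procedure additionally matches the resulting hierarchy against layout models with best-first search.

```python
def xy_cut(blocks, min_gap=10):
    """Recursively divide a page into nested rectangular regions.

    blocks: list of (x0, y0, x1, y1) bounding boxes; returns a nested list
    mirroring the cut hierarchy (a toy version of the tessellation structure).
    """
    if len(blocks) <= 1:
        return blocks
    for axis in (1, 0):                        # try a horizontal cut, then vertical
        spans = sorted((b[axis], b[axis + 2]) for b in blocks)
        reach = spans[0][1]
        for (s0, s1), (n0, _) in zip(spans, spans[1:]):
            reach = max(reach, s1)             # furthest extent of blocks so far
            if n0 - reach >= min_gap:          # a white gap wide enough to cut at
                cut = reach
                left = [b for b in blocks if b[axis + 2] <= cut]
                right = [b for b in blocks if b[axis] > cut]
                return [xy_cut(left, min_gap), xy_cut(right, min_gap)]
    return blocks                              # no cut found: atomic region

# A header above two columns yields a two-level hierarchy:
page = [(0, 0, 100, 20), (0, 40, 40, 200), (60, 40, 100, 200)]
tree = xy_cut(page)
```

Logical labels (title, paragraph, column) would then be hypothesized for each leaf rectangle of the returned tree.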

  16. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    The developments in computer technology over the last decade have changed the ways computers are used. Emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially of their user interfaces, a challenging and time-consuming task. We propose a model-based approach that allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits this description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting applications even to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.

  17. HOLA: Human-like Orthogonal Network Layout.

    PubMed

    Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2016-01-01

    Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.

  18. WFST-based ground truth alignment for difficult historical documents with text modification and layout variations

    NASA Astrophysics Data System (ADS)

    Al Azawi, Mayce; Liwicki, Marcus; Breuel, Thomas M.

    2013-01-01

    This work proposes several approaches for generating correspondences between real scanned books and their transcriptions, which may contain modifications and layout variations, while also taking OCR errors into account. Our approaches to aligning the manuscript with the transcription are based on weighted finite-state transducers (WFSTs). In particular, we propose adapted WFSTs to represent the transcription to be aligned with the OCR lattices. The character-level alignment has edit rules to allow edit operations (insertion, deletion, substitution). These edit operations allow the transcription model to deal with OCR segmentation and recognition errors, as well as with the task of aligning different text editions. We implemented the alignment model with a hyphenation model, so it can adapt a non-hyphenated transcription. Our models also work with Fraktur ligatures, which are typically found in historical Fraktur documents. We evaluated our approach on Fraktur documents from the "Wanderungen durch die Mark Brandenburg" volumes (1862-1889) and observed the performance of the models under OCR errors. We compare the performance of our model in four scenarios: having no information about the correspondence at the word (i), line (ii), sentence (iii), or page (iv) level.
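The insert/delete/substitute edit rules that the WFSTs encode correspond to the classic edit-distance dynamic program; a WFST composition of an OCR lattice with an edit transducer finds the cheapest such alignment over all lattice paths. The sketch below shows only the single-string, character-level core as a stand-in:

```python
def align(ocr: str, truth: str) -> int:
    """Minimum number of insert/delete/substitute operations to turn ocr
    into truth -- a dynamic-programming stand-in for the WFST edit rules."""
    n, m = len(ocr), len(truth)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                        # delete all remaining OCR characters
    for j in range(m + 1):
        d[0][j] = j                        # insert all remaining truth characters
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ocr[i - 1] != truth[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m]

# An OCR confusion inside a Fraktur word still aligns cheaply:
assert align("Wanderungen", "Wanderungen") == 0
assert align("Wandcrungen", "Wanderungen") == 1   # one substitution
```

A weighted version would replace the uniform costs with confusion-dependent weights, which is what the transducer arcs carry.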

  19. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization

    PubMed Central

    LEE, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What previous studies have lacked, however, is a way to transform various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment must be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled only single-floor plants, although many multi-floor plants have been constructed over the last decade. A proper algorithm handling both such regulations and multi-floor plants is therefore needed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated from mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. This problem is very hard to solve because of its complex nonlinear constraints, which make conventional MINLP solvers relying on derivatives of the equations unusable. Instead, the Particle Swarm Optimization (PSO) technique is employed. An ethylene oxide plant is used to verify the efficacy of this study. PMID:26027708
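The derivative-free character of PSO is what makes it attractive here: safety distances can be enforced as penalty terms without any gradient information. The sketch below places a single pump between two fixed units, minimizing piping length while penalizing violations of a safety spacing; the geometry, costs, and PSO coefficients are all invented for illustration.

```python
import math
import random

# Two fixed units and one pump to place; a safety spacing is handled as a
# penalty term (all numbers illustrative).
FIXED = [(0.0, 0.0), (30.0, 0.0)]
SAFETY = 5.0

def cost(pos):
    x, y = pos
    pipe = sum(math.hypot(x - fx, y - fy) for fx, fy in FIXED)
    # The inequality constraint "keep the safety distance" becomes a penalty.
    penalty = sum(max(0.0, SAFETY - math.hypot(x - fx, y - fy))
                  for fx, fy in FIXED)
    return pipe + 100.0 * penalty

def pso(n=20, iters=200, seed=3):
    rng = random.Random(seed)
    pts = [[rng.uniform(-10, 40), rng.uniform(-10, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pts]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i, p in enumerate(pts):
            for k in range(2):
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (pbest[i][k] - p[k])
                             + 1.5 * rng.random() * (gbest[k] - p[k]))
                p[k] += vel[i][k]
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=cost)
    return gbest

best = pso()
```

Any point on the segment between the two units (outside both safety circles) achieves the minimum piping cost of 30, and the swarm converges close to it.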

  20. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What previous studies have lacked, however, is a way to transform various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment must be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled only single-floor plants, although many multi-floor plants have been constructed over the last decade. A proper algorithm handling both such regulations and multi-floor plants is therefore needed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated from mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. This problem is very hard to solve because of its complex nonlinear constraints, which make conventional MINLP solvers relying on derivatives of the equations unusable. Instead, the Particle Swarm Optimization (PSO) technique is employed. An ethylene oxide plant is used to verify the efficacy of this study.

  1. Intelligent Graph Layout Using Many Users' Input.

    PubMed

    Yuan, Xiaoru; Che, Limei; Hu, Yifan; Zhang, Xin

    2012-12-01

    In this paper, we propose a new strategy for graph drawing that utilizes layouts of many subgraphs supplied by a large group of people in a crowdsourcing manner. We developed an algorithm based on Laplacian-constrained distance embedding to merge subgraphs submitted by different users, while attempting to maintain the topological information of the individual input layouts. To facilitate the collection of layouts from many people, a lightweight interactive system has been designed to enable convenient dynamic viewing, modification and traversal between layouts. Compared with existing graph layout algorithms, our approach achieves more aesthetic and meaningful layouts with higher user preference.

  2. Fast dual graph-based hotspot detection

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu

    2006-10-01

    As advanced wafer manufacturing technologies push patterning processes toward low-k1 subwavelength printing, lithography for mass production potentially suffers from decreased patterning fidelity. This results in many hotspots: actual device patterns with relatively large CD and image errors with respect to on-wafer targets. Hotspots can form under a variety of conditions, such as the original design being unfriendly to the applied RET, unanticipated pattern combinations in rule-based OPC, or inaccuracies in model-based OPC. When hotspots fall on locations that are critical to the electrical performance of a device, device performance and parametric yield can be significantly degraded. Previous rule-based hotspot detection methods suffer from long runtimes for complicated patterns, and the model generation that captures process variation in simulation-based approaches brings significant overheads in validation, measurement and parameter calibration. In this paper, we first describe a novel detection algorithm for hotspots induced by lithographic uncertainty. Our goal is to rapidly detect all lithographic hotspots without significant accuracy degradation. In other words, we propose a filtering method: as long as there are no "false negatives", i.e., we successfully obtain a superset of the actual hotspots, our method can dramatically reduce the layout area requiring golden hotspot analysis. The first step of our hotspot detection algorithm is to build a layout graph that reflects pattern-related CD variation. Given a layout L, the layout graph G = (V, Ec ∪ Ep) consists of nodes V, corner edges Ec and proximity edges Ep. A face in the layout graph comprises several close features and the edges between them. Edge weights can be calculated from a traditional 2-D model or a lookup table. We then apply three-level hotspot detection: (1) edge-level detection finds the hotspot caused by two close
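The layout-graph construction can be sketched in a few lines: treat features as rectangles, create a proximity edge between any two features closer than a radius, and flag edges whose weight (here simply the gap, standing in for the 2-D model or lookup table) falls below a critical value. The rectangles, radius, and threshold below are invented for illustration.

```python
import itertools
import math

def spacing(a, b):
    """Gap between two axis-aligned rectangles given as (x0, y0, x1, y1)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return math.hypot(dx, dy)

def hotspot_candidates(rects, radius=60, critical=45):
    """Build proximity edges and flag candidate hotspot pairs.

    Returns (graph, flagged): graph is a list of (i, j, gap) proximity edges,
    flagged is the superset of pairs sent on to golden hotspot analysis.
    """
    graph, flagged = [], []
    for i, j in itertools.combinations(range(len(rects)), 2):
        gap = spacing(rects[i], rects[j])
        if gap < radius:                     # proximity edge in Ep
            graph.append((i, j, gap))
            if gap < critical:               # weight below threshold: risky
                flagged.append((i, j))
    return graph, flagged
```

The filtering property follows directly: any true hotspot pair closer than the critical gap is necessarily in the flagged set, so there are no false negatives for this simplified weight model.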

  3. Beam transport and focusing layout based on adaptive optics for the SQS scientific instrument at the European XFEL

    NASA Astrophysics Data System (ADS)

    Mazza, Tommaso; Signorato, Riccardo; Meyer, Michael; La Civita, Daniele; Vannoni, Maurizio; Sinn, Harald

    2014-09-01

    The SQS scientific instrument at the European XFEL is dedicated to investigations in the soft X-rays regime, in particular to studies of non-linear and ultrafast processes in atoms, molecules and clusters using a variety of spectroscopic techniques. It will be equipped with a Kirkpatrick-Baez (KB) adaptive mirror system enabling submicron focusing and access to variable focal distances. In this paper we describe the conceptual design of the beam transport and focusing layout based on the KB system. The design includes a study of feasibility based on the comparison between the required source and image positions and the theoretical limits for the accessible mirror profiles.

  4. PXIE Optics and Layout

    SciTech Connect

    Lebedev, V.A.; Nagaitsev, S.; Ostiguy, J.-F.; Shemyakin, A.V.; Shteynas, B.G.; Solyak, N.; /Fermilab

    2012-05-01

    The Project X Injector Experiment (PXIE) will serve as a prototype for the Project X front end. The aim is to validate the Project-X design and to decrease technical risks mainly related to the front end. The paper discusses the main requirements and constraints motivating the facility layout and optics. Final adjustments to the Project X front end design, if needed, will be based on operational experience gained with PXIE.

  5. Underground Layout Configuration

    SciTech Connect

    A. Linden

    2003-09-25

    The purpose of this analysis is to develop an underground layout to support the license application (LA) design effort. In addition, the analysis will serve as the technical basis for the underground layout general arrangement drawings.

  6. Programmable RET Mask Layout Verification

    NASA Astrophysics Data System (ADS)

    Beale, Daniel F.; Mayhew, Jeffrey P.; Rieger, Michael L.; Tang, Zongwu

    2002-12-01

    Emerging resolution enhancement techniques (RET) and OPC are dramatically increasing the complexity of mask layouts and, in turn, of mask verification. Mask shapes needed to achieve the required results on the wafer diverge significantly from the corresponding shapes in the physical design, and in some cases a single chip layer may be decomposed into two masks used in multiple exposures. The mask verification challenge is to certify that a RET-synthesized mask layout will produce an acceptable facsimile of the design intent expressed in the design layout. Furthermore, tradeoffs between mask complexity, design intent, targeted process latitude, and other factors play a growing role in controlling rising mask costs. All of these considerations must in turn be incorporated into the mask layout verification strategy needed for data-prep sign-off. In this paper we describe a technique for assessing the lithographic quality of mask layouts for diverse RET methods while effectively accommodating various manufacturing objectives and specifications. It leverages the familiar DRC paradigm for identifying errors and produces DRC-like error shapes in its output layout. It integrates a unique concept of "check figures": layer-based geometries that dictate where and how simulations of shapes on the wafer are to be compared to the original desired layout. We show how this provides a highly programmable environment that enables "compound" check strategies varying with design intent, together with adaptive simulation using multiple checks. Verification may be applied at the go/no-go level, or used to build a body of data for quantitative analysis of lithographic behavior at multiple process conditions or for specific user-defined critical features. In addition, we outline automated methods that guide the selection of the input parameters controlling specific verification strategies.
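The check-figure idea can be illustrated with a toy verification pass: at sample sites defined by check-figure geometry, compare the simulated wafer edge position against the design target and emit DRC-like error markers wherever the deviation exceeds that site's tolerance. The site format and the stand-in simulator below are invented for the sketch, not the tool's actual interface.

```python
# Site format: (x, y, target_edge, tolerance) -- per-site tolerances are what let
# "compound" check strategies vary with design intent (illustrative values only).

def verify(sites, simulated_edge):
    """Run a check-figure pass.

    simulated_edge: callable (x, y) -> simulated wafer edge position at a site,
    standing in for the lithography simulator. Returns DRC-like error markers.
    """
    errors = []
    for x, y, target, tol in sites:
        dev = abs(simulated_edge(x, y) - target)
        if dev > tol:
            errors.append((x, y, dev))    # an "error shape" in the output layout
    return errors

# Example: a simulator whose edge drifts with x fails only the second site.
sites = [(0, 0, 100.0, 3.0), (5, 0, 100.0, 3.0)]
markers = verify(sites, lambda x, y: 100.0 + x)
```

Critical features would simply get tighter tolerances in their check figures, while non-critical regions get looser ones or none at all.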

  7. ESPRESSO instrument control electronics: a PLC based distributed layout for a second generation instrument at ESO VLT

    NASA Astrophysics Data System (ADS)

    Baldini, V.; Cirami, R.; Coretti, I.; Cristiani, S.; Di Marcantonio, P.; Mannetta, M.; Santin, P.; Mégevand, D.; Zerbi, F.

    2014-07-01

ESPRESSO is an ultra-stable fiber-fed spectrograph designed to incoherently combine the light coming from up to 4 Unit Telescopes of the ESO VLT. From the Nasmyth focus of each telescope, the light is fed through an optical path by the Coudé Train subsystems to the Front End Unit placed in the Combined Coudé Laboratory. The Front End is composed of one arm for each telescope, and its task is to convey the incoming light, after a calibration process, into the spectrograph fibers. To perform these operations a large number of functions are foreseen, such as motorized stages, lamps, and digital and analog sensors that, coupled with dedicated Technical CCDs (two per arm), stabilize the incoming beam to the level needed to meet the ESPRESSO scientific requirements. The goal of the Instrument Control Electronics is to properly control all the functions in the Combined Coudé Laboratory and the spectrograph itself. It is fully based on a distributed PLC architecture, abandoning in this way the VME-based technology previously adopted for the ESO VLT instruments. In this paper we describe the ESPRESSO Instrument Control Electronics architecture, focusing on the distributed layout and its interfaces with the other ESPRESSO subsystems.

  8. Feasibility study, software design, layout and simulation of a two-dimensional fast Fourier transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin; Chen, Wei

    1990-01-01

The NASA-Cornell Univ.-Worcester Polytechnic Institute Fast Fourier Transform (FFT) chip, based on the architecture for systolic FFT computation presented by Boriakoff, is implemented as an operating device design. The kernel of the system, a systolic inner-product floating-point processor, was designed to be assembled into a systolic network that takes incoming data streams in pipeline fashion and provides FFT output at the same rate, word by word. It was thoroughly simulated for proper operation, and it passed a comprehensive set of tests showing no operational errors. The black-box specifications of the chip, which conform to the initial requirements of the design as specified by NASA, are given. The five subcells are described, and their high-level function descriptions, logic diagrams, and simulation results are presented. Some modifications of the Read Only Memory (ROM) design were made, since errors were found in it. Because a four-stage pipeline structure was used, simulating it is more difficult than simulating an ordinary structure. Simulation methods are discussed. Chip signal protocols and chip pinout are explained.

  9. A Rule Based Approach to ISS Interior Volume Control and Layout

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed, human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements, given that the human performance result is satisfactory. Clearly such approaches may work, but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful, this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  10. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    PubMed

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

Fixture plays an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, it is still a difficult and nontrivial task to design and optimize a sheet metal fixture locating layout, because there is no direct, explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed using a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499

  12. Optimal spinneret layout in Von Koch curves of fractal theory based needleless electrospinning process

    NASA Astrophysics Data System (ADS)

    Yang, Wenxiu; Liu, Yanbo; Zhang, Ligai; Cao, Hong; Wang, Yang; Yao, Jinbo

    2016-06-01

Needleless electrospinning technology is considered a better avenue to produce nanofibrous materials at large scale, and electric field intensity and its distribution play an important role in controlling nanofiber diameter and the quality of the nanofibrous web during electrospinning. In the current study, a novel needleless electrospinning method was proposed based on Von Koch curves of fractal configuration. Simulation and analysis of electric field intensity and distribution in the new electrospinning process were performed with the finite element analysis software Comsol Multiphysics 4.4, based on linear and nonlinear Von Koch fractal curves (hereafter called fractal models). The simulation results indicated that the second-level fractal structure is the optimal linear electrospinning spinneret in terms of field intensity and uniformity. Further simulation and analysis showed that the circular type of fractal spinneret has better field intensity and distribution than the spiral type in nonlinear fractal electrospinning. An electrospinning apparatus with the optimal Von Koch fractal spinneret was set up to verify the theoretical analysis results from the Comsol simulation, achieving more uniform electric field distribution and lower energy cost compared to current needle and needleless electrospinning technologies.
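The geometry behind the spinneret can be generated directly: a level-n Von Koch curve replaces each segment with four segments around an equilateral bump, and a level-2 curve corresponds to the "second-level fractal structure" the simulations favor. This is an illustrative sketch of the curve itself, not the authors' Comsol setup.

```python
import numpy as np

def koch_curve(p0, p1, level):
    """Return the polyline points of a Von Koch curve from p0 to p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    if level == 0:
        return [p0, p1]
    d = (p1 - p0) / 3.0
    a, b = p0 + d, p0 + 2 * d
    # Apex of the equilateral bump: segment direction rotated by +60 degrees.
    rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
    apex = a + rot @ d
    pts = []
    for q0, q1 in [(p0, a), (a, apex), (apex, b), (b, p1)]:
        seg = koch_curve(q0, q1, level - 1)
        pts.extend(seg[:-1])   # drop the duplicated joint point
    pts.append(p1)
    return pts

pts = koch_curve([0, 0], [1, 0], level=2)
print(len(pts))   # 4^2 segments -> 17 points
```

Spinneret tips would then be placed along these points before solving for the field with an FEM package.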

  13. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, an optimization formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method. PMID:20968351
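The transfer-matrix machinery itself is compact: each layer is a 2x2 matrix acting on the (pressure, velocity) state at normal incidence, the assembly matrix is the product of the layer matrices, and transmission loss (TL) follows from the total matrix. The sketch below uses a standard fluid-layer matrix and approximates a thin solid panel as a limp mass (an assumption, not the paper's model); the unified interpolation between layer types is not shown.

```python
import numpy as np

RHO, C = 1.21, 343.0          # air density (kg/m^3) and sound speed (m/s)
Z0 = RHO * C                  # characteristic impedance of air

def air_gap(freq, d):
    # Fluid-layer transfer matrix for an air gap of thickness d.
    k = 2 * np.pi * freq / C
    return np.array([[np.cos(k * d), 1j * Z0 * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z0, np.cos(k * d)]])

def limp_panel(freq, mass_per_area):
    # Thin solid panel approximated as a limp mass (assumption).
    return np.array([[1.0, 1j * 2 * np.pi * freq * mass_per_area],
                     [0.0, 1.0]])

def transmission_loss(T):
    # Normal incidence, air on both sides of the assembly.
    t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
    return -20 * np.log10(abs(t))

f = 1000.0
T = limp_panel(f, 10.0) @ air_gap(f, 0.05) @ limp_panel(f, 10.0)
print(f"TL of panel/gap/panel at {f:.0f} Hz: {transmission_loss(T):.1f} dB")
# Sanity check: an air gap alone is acoustically transparent (TL = 0 dB).
assert abs(transmission_loss(air_gap(f, 0.05))) < 1e-9
```

An optimizer would then vary the layer sequence and thicknesses, recomputing the matrix product at each step.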

  14. Suggestions for Layout and Functional Behavior of Software-Based Voice Switch Keysets

    NASA Technical Reports Server (NTRS)

    Scott, David W.

    2010-01-01

Marshall Space Flight Center (MSFC) provides communication services for a number of real time environments, including Space Shuttle Propulsion support and International Space Station (ISS) payload operations. In such settings, control team members speak with each other via multiple voice circuits or loops. Each loop has a particular purpose and constituency, and users are assigned listen and/or talk capabilities for a given loop based on their role in fulfilling the purpose. A voice switch is a given facility's hardware and software that supports such communication, and may be interconnected with other facilities' switches to create a large network that, from an end user perspective, acts like a single system. Since users typically monitor and/or respond to several voice loops concurrently for hours on end, and real time operations can be very dynamic and intense, it's vital that a control panel or keyset for interfacing with the voice switch be a servant that reduces stress, not a master that adds it. Implementing the visual interface on a computer screen provides tremendous flexibility and configurability, but there's a very real risk of overcomplication. (Remember how office automation made life easier, which led to a deluge of documents that made life harder?) This paper a) discusses some basic human factors considerations related to keysets implemented as application software windows, b) suggests what to standardize at the facility level and what to leave to the user's preference, and c) provides screen shot mockups for a robust but reasonably simple user experience. Concepts apply to keyset needs in almost any type of operations control or support center.

  15. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  16. Feasibility study, software design, layout and simulation of a two-dimensional Fast Fourier Transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could do in a pipeline fashion the computation of the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation the new architecture to compute the FFT with systolic arrays was proved to be viable, and computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to compute the operations expected of the vital node of the systolic architecture were proven feasible, and even with a 2 micron VLSI technology can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).
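The systolic array evaluates the same transform a radix-2 FFT does. As a functional reference for the computation (not the cell-level inner-product architecture), here is a minimal recursive FFT checked against the direct O(N^2) DFT definition:

```python
import numpy as np

def fft(x):
    # Radix-2 decimation-in-time Cooley-Tukey FFT (length must be a power of 2).
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors
    return np.concatenate([even + tw * odd, even - tw * odd])

def dft(x):
    # Direct O(N^2) definition, used as the reference.
    n = len(x)
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) @ np.asarray(x, complex)

x = np.random.default_rng(1).normal(size=8) + 0j
print(np.allclose(fft(x), dft(x)))   # True
```

The systolic design streams the same butterfly arithmetic through a pipeline of identical processors rather than recursing in software.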

  17. Fast Beam-Based BPM Calibration

    SciTech Connect

    Bertsche, K.; Loos, H.; Nuhn, H.-D.; Peters, F.; /SLAC

    2012-10-15

    The Alignment Diagnostic System (ADS) of the LCLS undulator system indicates that the 33 undulator quadrupoles have extremely high position stability over many weeks. However, beam trajectory straightness and lasing efficiency degrade more quickly than this. A lengthy Beam Based Alignment (BBA) procedure must be executed every two to four weeks to re-optimize the X-ray beam parameters. The undulator system includes RF cavity Beam Position Monitors (RFBPMs), several of which are utilized by an automatic feedback system to align the incoming electron-beam trajectory to the undulator axis. The beam trajectory straightness degradation has been traced to electronic drifts of the gain and offset of the BPMs used in the beam feedback system. To quickly recover the trajectory straightness, we have developed a fast beam-based procedure to recalibrate the BPMs. This procedure takes advantage of the high-precision monitoring capability of the ADS, which allows highly repeatable positioning of undulator quadrupoles. This report describes the ADS, the position stability of the LCLS undulator quadrupoles, and some results of the new recovery procedure.

  18. Research study: Device technology STAR router user's guide. [automated layout of large scale integration discretionary interconnection masks

    NASA Technical Reports Server (NTRS)

    Wright, R. A.

    1979-01-01

The STAR Router program developed to perform automated layout of LSI discretionary interconnection masks is described. The input and output for the router are standard PR2D data files. Included is a state-of-the-art cellular path-finding procedure, based on Lee's algorithm, which produces fast, shortest-distance routing of microcircuit net data.
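The core of Lee's algorithm is a wavefront expansion: breadth-first search over a routing grid, which guarantees a shortest path around obstacles. A minimal sketch (toy grid, not the STAR Router's data format):

```python
from collections import deque

def lee_route(grid, src, dst):
    """grid: list of strings, '#' = blocked cell. Returns shortest path length or -1."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0}
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:
            return dist[(r, c)]
        # Expand the wavefront to the four neighboring cells.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return -1   # destination unreachable

grid = ["....",
        ".##.",
        ".##.",
        "...."]
print(lee_route(grid, (0, 0), (3, 3)))   # 6: the route detours around the block
```

A production router retraces the wavefront to recover the actual path and rips up cells as nets are committed; only the search core is shown here.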

  19. Fast diffraction computation algorithms based on FFT

    NASA Astrophysics Data System (ADS)

    Logofatu, Petre Catalin; Nascov, Victor; Apostol, Dan

    2010-11-01

The discovery of the Fast Fourier transform (FFT) algorithm by Cooley and Tukey meant for diffraction computation what the invention of computers meant for computation in general. The computation time reduction is more significant for large input data, but generally the FFT reduces the computation time by several orders of magnitude. This was the beginning of an entire revolution in optical signal processing and resulted in an abundance of fast algorithms for diffraction computation in a variety of situations. The property that allowed the creation of these fast algorithms is that, as it turns out, most diffraction formulae contain at their core one or more Fourier transforms which may be rapidly calculated using the FFT. The key to discovering a new fast algorithm is to reformulate the diffraction formulae so as to identify and isolate the Fourier transforms they contain. In this way, the fast scaled transformation, the fast Fresnel transformation and the fast Rayleigh-Sommerfeld transform were designed. Remarkable improvements were the generalization of the DFT to the scaled DFT, which allows freedom to choose the dimensions of the output window for Fraunhofer-Fourier and Fresnel diffraction; the mathematical concept of linearized convolution, which thwarts the circular character of the discrete Fourier transform and allows the use of the FFT; and, last but not least, the linearized discrete scaled convolution, a new concept for which we claim priority.
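The linearized-convolution idea can be shown in a few lines: zero-pad both sequences to length len(a)+len(b)-1 before the FFT, so the circular convolution implied by the DFT convolution theorem coincides with the desired linear convolution (a generic sketch, not the authors' scaled variant):

```python
import numpy as np

def linear_conv_fft(a, b):
    # Pad to the full linear-convolution length to defeat circular wraparound.
    n = len(a) + len(b) - 1
    return np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 2.0, 0.25])
print(np.allclose(linear_conv_fft(a, b), np.convolve(a, b)))   # True
```

In Fresnel or Rayleigh-Sommerfeld diffraction the same padding trick is applied to the field and the propagation kernel before the FFT-based product.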

  20. A GIS-based approach: Influence of the ventilation layout to the environmental conditions in an underground mine.

    PubMed

    Bascompta, Marc; Castañón, Ana María; Sanmiquel, Lluís; Oliva, Josep

    2016-11-01

Gases such as CO, CO2 or NOx are constantly generated by the equipment in any underground mine, and the ventilation layout can play an important role in keeping concentrations low at the working faces. Hence, a method able to control the workplace environment is crucial. This paper proposes a geographical information system (GIS) for this goal. The system provides the necessary tools to manage and analyse an underground environment, connecting pollutants and temperatures with the ventilation characteristics over time. In a case study, data concerning the ventilation system have been taken every month since 2009 and integrated into the management system, which has quantified the gas concentrations throughout the mine as a function of the characteristics and evolution of the ventilation layout. Three different zones concerning CO, CO2, NOx and effective temperature have been found, as well as some variations among workplaces within the same zone that suggest local airflow recirculation. The proposed system could be a useful tool to improve workplace conditions and efficiency levels. PMID:27538248

  2. A CANDU-Based Fast Irradiation Reactor

    SciTech Connect

    Shatilla, Youssef

    2006-07-01

A new steady-state fast neutron reactor is needed to satisfy the testing needs of Generation IV reactors, the Space Propulsion Program, and the Advanced Fuel Cycle Initiative. This paper presents a new concept for a CANDU-based fast irradiation reactor that is horizontal in orientation, with individual pressure tubes running the entire length of the scattering-medium tank (Calandria) filled with Lead-Bismuth-Eutectic (LBE). This approach for a test reactor provides more flexibility in refueling and sample removal, and the ability to completely re-configure the core to meet different users' requirements. Full core neutronic analysis of several fuel/coolant/geometry combinations showed that a small hexagonal, LBE-cooled, U-Pu-10Zr-fueled core with a power of 100 MW{sub th} produced a fast flux (>0.1 MeV) of 1.5 x 10{sup 15} n/cm{sup 2} sec averaged over the whole length of six irradiation channels with a total testing volume of more than 77 liters. In-core breeding allowed the Pu-239 enrichment to be 15.3%, which should result in continuous core operation for 180 effective full power days. Other coolants investigated included high pressure water steam and helium. An innovative shutdown/control system, consisting of the six outermost fuel channels, was proven to be effective in shutting the core down when flooded with boric acid as a neutron absorber. The new shutdown/control system has the advantage of causing minimum perturbation of the axial flux shape when the control channels are partially flooded with boric acid, because the acid is injected homogeneously along the control channel, in contrast to regular control rods that are inserted partially, causing an axial perturbation in the core flux which in turn reduces safety analysis margins. The new shutdown/control system is not required to penetrate the core in a direction perpendicular to the fuel channels, which allows the freedom of changing the core pitch as deemed necessary.
A preliminary thermal hydraulic analysis

  3. Site Recommendation Subsurface Layout

    SciTech Connect

    C.L. Linden

    2000-06-28

    The purpose of this analysis is to develop a Subsurface Facility layout that is capable of accommodating the statutory capacity of 70,000 metric tons of uranium (MTU), as well as an option to expand the inventory capacity, if authorized, to 97,000 MTU. The layout configuration also requires a degree of flexibility to accommodate potential changes in site conditions or program requirements. The objective of this analysis is to provide a conceptual design of the Subsurface Facility sufficient to support the development of the Subsurface Facility System Description Document (CRWMS M&O 2000e) and the ''Emplacement Drift System Description Document'' (CRWMS M&O 2000i). As well, this analysis provides input to the Site Recommendation Consideration Report. The scope of this analysis includes: (1) Evaluation of the existing facilities and their integration into the Subsurface Facility design. (2) Identification and incorporation of factors influencing Subsurface Facility design, such as geological constraints, thermal loading, constructibility, subsurface ventilation, drainage control, radiological considerations, and the Test and Evaluation Facilities. (3) Development of a layout showing an available area in the primary area sufficient to support both the waste inventories and individual layouts showing the emplacement area required for 70,000 MTU and, if authorized, 97,000 MTU.

  4. Electrostatic Levitator Layout

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Electrostatic Levitator (ESL) general layout with captions. The ESL uses static electricity to suspend an object (about 2-3 mm in diameter) inside a vacuum chamber while a laser heats the sample until it melts. This lets scientists record a wide range of physical properties without the sample contacting the container or any instruments, conditions that would alter the readings. The Electrostatic Levitator is one of several tools used in NASA's microgravity materials science program.

  5. A fast dynamic mode in rare earth based glasses

    NASA Astrophysics Data System (ADS)

    Zhao, L. Z.; Xue, R. J.; Zhu, Z. G.; Ngai, K. L.; Wang, W. H.; Bai, H. Y.

    2016-05-01

Metallic glasses (MGs) usually exhibit only a slow β-relaxation peak, and the signature of fast dynamics is challenging to observe experimentally in MGs. We report a general and unusual fast dynamic mode in a series of rare earth based MGs, manifested as a distinct fast β'-relaxation peak in addition to the slow β-relaxation and α-relaxation peaks. We show that the activation energy of the fast β'-relaxation is about 12RTg and is equivalent to the activation energy of a localized flow event. The coupling of these dynamic processes, as well as their relationship with the glass transition and structural heterogeneity, is discussed.

  6. Automating the layout of network diagrams with specified visual organization

    SciTech Connect

    Kosak, C.; Marks, J.; Shieber, S.

    1994-03-01

    Network diagrams are a familiar graphic form that can express many different kinds of information. The problem of automating network-diagram layout has therefore received much attention. Previous research on network-diagram layout has focused on the problem of aesthetically optimal layout, using such criteria as the number of link crossings, the sum of all link lengths, and total diagram area. In this paper we propose a restatement of the network-diagram layout problem in which layout-aesthetic concerns are subordinated to perceptual-organization concerns. We present a notation for describing the visual organization of a network diagram. This notation is used in reformulating the layout task as a constrained-optimization problem in which constraints are derived from a visual-organization specification and optimality criteria are derived from layout-aesthetic considerations. Two new heuristic algorithms are presented for this version of the layout problem: one algorithm uses a rule-based strategy for computing a layout; the other is a massively parallel genetic algorithm. We demonstrate the capabilities of the two algorithms by testing them on a variety of network-diagram layout problems. 30 refs.
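The force-directed core that such layout algorithms build on can be sketched briefly: pairwise repulsion plus spring attraction along links, with a simulated-annealing-style cooling schedule. This is a toy illustration; the paper's algorithms add constraints derived from the visual-organization specification on top of a core like this, and the gains and iteration count below are arbitrary choices.

```python
import numpy as np

def layout(n_nodes, edges, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_nodes, 2))
    temp = 0.1                      # "temperature" scales the step size
    for _ in range(iters):
        force = np.zeros_like(pos)
        # Repulsion between all node pairs (inverse-distance style).
        for i in range(n_nodes):
            delta = pos[i] - pos
            d2 = (delta ** 2).sum(axis=1) + 1e-9
            force[i] += (delta / d2[:, None]).sum(axis=0) * 0.01
        # Spring attraction along links.
        for i, j in edges:
            delta = pos[j] - pos[i]
            force[i] += 0.5 * delta
            force[j] -= 0.5 * delta
        pos += temp * force
        temp *= 0.99                # cooling schedule
    return pos

pos = layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(pos.round(3))
```

Constrained variants would project the positions onto the feasible set (e.g. "node A left of node B") after each step.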

  7. Terrace Layout Using a Computer Assisted System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  8. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  9. Dr.L: Distributed Recursive (Graph) Layout

    2007-11-19

Dr.L provides two-dimensional visualizations of very large abstract graph structures. It can be used for data mining applications including biology, scientific literature, and social network analysis. Dr.L is a graph layout program that uses a multilevel force-directed algorithm. A graph is input and drawn using a force-directed algorithm based on simulated annealing. The resulting layout is clustered using a single-link algorithm. This clustering is used to produce a coarsened graph (fewer nodes), which is then re-drawn. This process is repeated until a sufficiently small graph is produced. The smallest graph is drawn and then used as a basis for drawing the original graph by refining the series of coarsened graphs that were produced. The layout engine can be run in serial or in parallel.

  10. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    ERIC Educational Resources Information Center

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  11. Scintillator-based fast ion loss measurements in the EAST

    NASA Astrophysics Data System (ADS)

    Chang, J. F.; Isobe, M.; Ogawa, K.; Huang, J.; Wu, C. R.; Xu, Z.; Jin, Z.; Lin, S. Y.; Hu, L. Q.

    2016-11-01

A new scintillator-based fast ion loss detector (FILD) has been installed on the Experimental Advanced Superconducting Tokamak (EAST) to investigate fast ion loss behavior in high performance plasmas with neutral beam injection (NBI) and ion cyclotron resonance heating (ICRH). A two-dimensional 40 mm × 40 mm scintillator-coated (ZnS:Ag) stainless steel plate is mounted at the front of the detector, capturing the escaping fast ions. Photons from the scintillator plate are imaged with a Phantom V2010 CCD camera. Lost fast ions can be measured with pitch angles from 60° to 120° and gyroradii from 10 mm to 180 mm. This paper describes the details of the FILD diagnostic on EAST and preliminary measurements during NBI and ICRH heating.

  12. 2D design rule and layout analysis using novel large-area first-principles-based simulation flow incorporating lithographic and stress effects

    NASA Astrophysics Data System (ADS)

    Prins, Steven L.; Blatchford, James; Olubuyide, Oluwamuyiwa; Riley, Deborah; Chang, Simon; Hong, Qi-Zhong; Kim, T. S.; Borges, Ricardo; Lin, Li

    2009-03-01

As design rules and corresponding logic standard cell layouts continue to shrink node-on-node in accordance with Moore's law, complex 2D interactions, both intra-cell and between cells, become much more prominent. For example, in lithography, lack of scaling of λ/NA implies aggressive use of resolution enhancement techniques to meet logic scaling requirements, resulting in adverse effects such as 'forbidden pitches', and also implies an increasing range of optical influence relative to cell size. These adverse effects are therefore expected to extend well beyond the cell boundary, leading to lithographic marginalities that occur only when a given cell is placed "in context" with other neighboring cells in a variable design environment [1]. This context dependence is greatly exacerbated by increased use of strain engineering techniques such as SiGe and dual-stress liners (DSL) to enhance transistor performance, both of which also have interaction lengths on the order of microns. The use of these techniques also breaks the formerly straightforward connection between lithographic 'shapes' and end-of-line electrical performance, thus making the formulation of design rules that are robust to process variations and complex 2D interactions more difficult. To address these issues, we have developed a first-principles-based simulation flow to study context-dependent electrical effects in layout, arising not only from lithography, but also from stress and interconnect parasitic effects. This flow is novel in that it can be applied to relatively large layout clips, as required for context-dependent analysis, without relying on semi-empirical or 'black-box' models for the fundamental electrical effects. The first-principles-based approach is ideal for understanding context-dependent effects early in the design phase, so that they can be mitigated through restrictive design rules. The lithographic simulations have been discussed elsewhere [1] and will not be presented in detail. The

  13. Interactive layout mechanisms for image database retrieval

    SciTech Connect

    MacCuish, J.; McPherson, A.; Barros, J.; Kelly, P.

    1996-01-29

    In this paper we present a user interface, CANDID Camera, for image retrieval using query-by-example technology. Included in the interface are several new layout algorithms based on multidimensional scaling techniques that visually display global and local relationships between images within a large image database. We use the CANDID project algorithms to create signatures of the images, and then measure the dissimilarity between the signatures. The layout algorithms are of two types. The first are those that project the all-pairs dissimilarities to two dimensions, presenting a many-to-many relationship for a global view of the entire database. The second are those that relate a query image to a small set of matched images for a one-to-many relationship that provides a local inspection of the image relationships. Both types are based on well-known multidimensional scaling techniques that have been modified and used together for efficiency and effectiveness. They include nonlinear projection and classical projection. The global maps are hybrid algorithms using classical projection together with nonlinear projection. We have developed several one-to-many layouts based on a radial layout, also using modified nonlinear and classical projection.
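    The abstract describes the classical and nonlinear projection algorithms only at a high level. As a rough illustration of the classical-projection step used for the global maps (this is generic Torgerson MDS, not the CANDID implementation; the function name and toy dissimilarity matrix are ours), the layout coordinates can be computed from an all-pairs dissimilarity matrix like so:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Project an all-pairs dissimilarity matrix D to `dim` coordinates
    via classical (Torgerson) multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]           # keep the `dim` largest
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale                  # n x dim layout coordinates

# Three points on a line: their pairwise distances should be recovered
# exactly (up to rotation/reflection) by a 1-D embedding.
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, dim=1)
```

    For image layouts, D would instead hold the dissimilarities between image signatures, and the resulting 2-D coordinates give the global map positions.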

  14. Interactive layout mechanisms for image database retrieval

    NASA Astrophysics Data System (ADS)

    MacCuish, John; McPherson, Allen; Barros, Julio E.; Kelly, Patrick M.

    1996-03-01

    In this paper we present a user interface, CANDID Camera, for image retrieval using query- by-example technology. Included in the interface are several new layout algorithms based on multidimensional scaling techniques that visually display global and local relationships between images within a large image database. We use the CANDID project algorithms to create signatures of the images, and then measure the dissimilarity between the signatures. The layout algorithms are of two types. The first are those that project the all-pairs dissimilarities to two dimensions, presenting a many-to-many relationship for a global view of the entire database. The second are those that relate a query image to a small set of matched images for a one-to-many relationship that provides a local inspection of the image relationships. Both types are based on well-known multidimensional scaling techniques that have been modified and used together for efficiency and effectiveness. They include nonlinear projection and classical projection. The global maps are hybrid algorithms using classical projection together with nonlinear projection. We have developed several one-to-many layouts based on a radial layout, also using modified nonlinear and classical projection.

  15. Game level layout generation using evolved cellular automata

    NASA Astrophysics Data System (ADS)

    Pech, Andrew; Masek, Martin; Lam, Chiou-Peng; Hingston, Philip

    2016-01-01

    Design of level layouts typically involves the production of a set of levels which are different, yet display a consistent style based on the purpose of a particular level. In this paper, a new approach to the generation of unique level layouts, based on a target set of attributes, is presented. These attributes, which are learned automatically from an example layout, are used for the off-line evolution of a set of cellular automata rules. These rules can then be used for the real-time generation of level layouts that meet the target parameters. The approach is demonstrated on a set of maze-like level layouts. Results are presented to show the effect of various CA parameters and rule representation.

  16. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components whose abnormalities could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.
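    The exhaustive search that makes the prior algorithms impractical can be shown with a minimal sketch (the component names and conflict sets below are hypothetical; conflict sets are assumed to have already been derived from the inconsistencies). It enumerates subset-minimal diagnoses, i.e. the smallest component sets that intersect every conflict:

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """Brute-force enumeration of subset-minimal diagnoses: component sets
    that 'hit' (intersect) every conflict set. Cost grows exponentially
    with the number of components, which motivates better algorithms."""
    diagnoses = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            s = set(cand)
            if all(s & c for c in conflicts):
                # discard candidates that contain an already-found diagnosis
                if not any(d <= s for d in diagnoses):
                    diagnoses.append(s)
    return diagnoses

# Two conflicts among components A, B, C: either B alone is faulty,
# or both A and C are.
conflicts = [{"A", "B"}, {"B", "C"}]
result = minimal_diagnoses(["A", "B", "C"], conflicts)
```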

  17. A novel algorithm for automatic arrays detection in a layout

    NASA Astrophysics Data System (ADS)

    Shafee, Marwah; Park, Jea-Woo; Aslyan, Ara; Torres, Andres; Madkour, Kareem; ElManhawy, Wael

    2013-03-01

    Integrated circuits suffer from serious layout printability issues associated with the lithography manufacturing process. Regular layout designs are emerging as alternative solutions that help reduce these systematic sub-wavelength lithography variations. From a CAD point of view, regular layouts can be treated as repeated patterns arranged in arrays. In most modern mask synthesis and verification tools, cell-based hierarchical processing can identify repeating cells by analyzing the design's cell placement; however, some routing levels lie outside the cells and yet form array-like structures because of the underlying topologies. These structures can be exploited by detecting repeated patterns in the layout, reducing simulation run-time: only the representative cells are simulated, and the simulation results are then restored across their corresponding arrays. The challenge is to make array detection and result restoration a very lightweight operation, so that the benefits of the approach are fully realized. A novel methodology for detecting repeated patterns in a layout is proposed. The main idea is to translate the layout patterns into a string of symbols, constructing a "symbolic layout". By finding repetitions in the symbolic layout, repeated patterns in the drawn layout are detected. A flow for layout reduction based on array detection followed by pattern matching is discussed. The run-time saving comes from performing all litho simulations on the base patterns only; pattern matching is then used to restore the simulation results over the arrays. The proposed flow shows a 1.4x to 2x run-time improvement over the regular litho simulation flow. An evaluation of the proposed flow in terms of coverage and run-time is drafted.
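    The paper's exact symbolic-layout encoding is not given in the abstract. As a toy sketch, assuming layout patterns have already been mapped to symbols, array detection reduces to finding repeated runs in that symbol string (function name and example sequence are illustrative):

```python
def find_arrays(symbols, min_repeats=2):
    """Scan a 1-D 'symbolic layout' for runs where a unit of consecutive
    symbols repeats back-to-back, i.e. an array-like structure.
    Returns (start_index, base_pattern, repeat_count) tuples."""
    found = []
    i, n = 0, len(symbols)
    while i < n:
        best = None
        for unit in range(1, (n - i) // 2 + 1):
            reps = 1
            # extend the run while the next unit-sized slice matches the base
            while symbols[i + reps * unit : i + (reps + 1) * unit] == symbols[i : i + unit]:
                reps += 1
            if reps >= min_repeats:
                best = (unit, reps)
                break                      # prefer the smallest repeating unit
        if best:
            unit, reps = best
            found.append((i, symbols[i : i + unit], reps))
            i += unit * reps               # skip past the whole array
        else:
            i += 1
    return found
```

    In a real flow, only the base pattern of each detected array would be simulated, and the results copied to the remaining repetitions.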

  18. GPU-based fast gamma index calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jia, Xun; Jiang, Steve B.

    2011-03-01

    The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of γ-index requires an exhaustive search of the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, γ-index calculation time on CPU is proportional to the summation of γn over all voxels, while that on GPU is affected by γn distributions and is approximately proportional to the γn summation over all voxels. We found that increasing the resolution of dose distributions leads to a quadratic increase of computation time on CPU, while the increase on GPU is less than quadratic. The values of dose difference and distance-to-agreement criteria also have an impact on γ-index calculation time.
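    As a plain-CPU reference for the quantity being accelerated (not the GPU implementation from the paper), the exhaustive-search definition of the γ-index can be sketched in one dimension; the criteria values and test distribution below are illustrative:

```python
import numpy as np

def gamma_index_1d(ref, evl, dx, dose_crit, dist_crit):
    """Exhaustive-search 1-D gamma index: for each reference point, take
    the minimum generalized distance over every evaluated point, with the
    dose difference scaled by dose_crit and spatial distance by dist_crit."""
    x = np.arange(len(ref)) * dx
    gam = np.empty(len(ref))
    for i in range(len(ref)):
        dose_term = (evl - ref[i]) / dose_crit
        dist_term = (x - x[i]) / dist_crit
        gam[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gam

dose = np.array([10.0, 20.0, 30.0, 20.0, 10.0])
# Comparing a distribution with itself: gamma is zero everywhere.
g = gamma_index_1d(dose, dose, dx=1.0,
                   dose_crit=0.03 * dose.max(),   # 3% of max dose
                   dist_crit=3.0)                 # 3 mm DTA
```

    The nested exhaustive search is what makes the 3D case expensive, and what the geometric and pre-sorting methods cited above are designed to avoid.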

  19. [Fast spectral modeling based on Voigt peaks].

    PubMed

    Li, Jin-rong; Dai, Lian-kui

    2012-03-01

    Indirect hard modeling (IHM) is a recently introduced method for quantitative spectral analysis, applied to the analysis of nonlinear relations between mixture spectra and component concentrations. In addition, IHM is an effective technique for analyzing the components of mixtures with molecular interactions and strongly overlapping bands. Before the regression model is established, IHM models the measured spectrum as a sum of Voigt peaks. The precision of the spectral model has an immediate impact on the accuracy of the regression model. A spectrum often includes dozens or even hundreds of Voigt peaks, which means that spectral modeling is in fact an optimization problem of high dimensionality. Consequently, a large computational overhead is required, and the solution may not be numerically unique due to the ill-conditioning of the optimization problem. An improved spectral modeling method is presented in the present paper, which reduces the dimensionality of the optimization problem by determining the overlapped peaks in the spectrum. Experimental results show that spectral modeling based on the new method is more accurate and requires a much shorter running time than the conventional method. PMID:22582612
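    The paper models spectra as sums of true Voigt peaks. As a hedged illustration of the same modeling idea, the widely used pseudo-Voigt approximation (a Gaussian/Lorentzian blend; the peak parameters below are arbitrary, not from the paper) can be written as:

```python
import numpy as np

def pseudo_voigt(x, center, fwhm, eta):
    """Pseudo-Voigt profile: a weighted sum of Gaussian and Lorentzian
    shapes with a shared FWHM, a common fast approximation to the true
    Voigt function. eta = 0 gives pure Gaussian, eta = 1 pure Lorentzian."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - center) / (fwhm / 2.0)) ** 2)
    return eta * lorentz + (1.0 - eta) * gauss

def model_spectrum(x, peaks):
    """Spectrum modeled as a sum of (amplitude, center, fwhm, eta) peaks;
    in IHM each peak's parameters become optimization variables."""
    return sum(a * pseudo_voigt(x, c, w, eta) for a, c, w, eta in peaks)
```

    With hundreds of peaks, each contributing several free parameters, the least-squares fit of such a model is exactly the high-dimensional, ill-conditioned optimization problem the abstract describes.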

  20. User Preferences for Web-Based Module Design Layout and Design Impact on Information Recall Considering Age

    ERIC Educational Resources Information Center

    Pomales-García, Cristina; Rivera-Nivar, Mericia

    2015-01-01

    Research in design of Web-based modules should incorporate aging as an important factor given the diversity of the current workforce. This work aims to understand how Web-Based Learning modules can be designed to accommodate young (25-35 years) as well as older (55-65 years) users by: (1) identifying how information sources (instructor video,…

  1. Spacecraft Component Adaptive Layout Environment (SCALE): An efficient optimization tool

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Ghoreishi, Seyed Mohammad Navid; Sabaghzadeh, Hossein

    2016-11-01

    For finding the optimum layout of spacecraft subsystems, important factors such as the center of gravity, moments of inertia, thermal distribution, natural frequencies, etc. should be taken into account. This large number of effective parameters makes the optimum layout process of spacecraft subsystems complex and time consuming. In this paper, an automatic tool, based on multi-objective optimization methods, is proposed for the three-dimensional layout of spacecraft subsystems. In this regard, an efficient Spacecraft Component Adaptive Layout Environment (SCALE) is produced by integrating modeling, FEM, and optimization software. SCALE automatically provides optimal solutions for the three-dimensional layout of spacecraft subsystems while considering important constraints such as center of gravity, moment of inertia, thermal distribution, natural frequencies, and structural strength. In order to show the superiority and efficiency of SCALE, layouts of a telecommunication spacecraft and a remote sensing spacecraft are performed. The results show that the objective function values for the layouts obtained using SCALE are much better than those of the traditional approach, i.e., the Reference Baseline Solution (RBS) proposed by the engineering system team. This indicates the good performance and ability of SCALE in finding the optimal layout of spacecraft subsystems.

  2. Location selection and layout for LB10, a lunar base at the Lunar North Pole with a liquid mirror observatory

    NASA Astrophysics Data System (ADS)

    Detsis, Emmanouil; Doule, Ondrej; Ebrahimi, Aliakbar

    2013-04-01

    We present the site selection process and urban planning of a Lunar Base for a crew of 10 (LB10), with an infrared astronomical telescope based on the concept of the Lunar Liquid Mirror Telescope. LB10 is a base designated for permanent human presence on the Moon. The base architecture is based on the utilization of inflatable, rigid, and regolith structures for different purposes. The location for the settlement is identified through a detailed analysis of surface conditions and terrain parameters around the Lunar North and South Poles. A number of selection criteria were defined regarding construction, astronomical observations, landing, and illumination conditions. The location suggested for the settlement is in the vicinity of the North Pole, utilizing the geographical morphology of the area. The base habitat is on a highly illuminated and relatively flat plateau. The observatory is located in the vicinity of the base, approximately 3.5 kilometers from the Lunar North Pole, inside a crater that shields it from sunlight. An illustration of the final form of the habitat, inspired by the baroque architectural form, is also depicted.

  3. Fast Electromechanical Switches Based on Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Kaul, Anupama; Wong, Eric; Epp, Larry

    2008-01-01

    Electrostatically actuated nanoelectromechanical switches based on carbon nanotubes have been fabricated and tested in a continuing effort to develop high-speed switches for a variety of stationary and portable electronic equipment. As explained below, these devices offer advantages over electrostatically actuated microelectromechanical switches, which, heretofore, have represented the state of the art of rapid, highly miniaturized electromechanical switches. Potential applications for these devices include computer memories, cellular telephones, communication networks, scientific instrumentation, and general radiation-hard electronic equipment. A representative device of the present type includes a single-wall carbon nanotube suspended over a trench about 130 nm wide and 20 nm deep in an electrically insulating material. The ends of the carbon nanotube are connected to metal electrodes, denoted the source and drain electrodes. At the bottom of the trench is another metal electrode, denoted the pull electrode (see figure). In the "off" or "open" switch state, no voltage is applied, and the nanotube remains out of contact with the pull electrode. When a sufficiently large electric potential (switching potential) is applied between the pull electrode and either or both of the source and drain electrodes, the resulting electrostatic attraction bends and stretches the nanotube into contact with the pull electrode, thereby putting the switch into the "on" or "closed" state, in which substantial current (typically as much as hundreds of nanoamperes) is conducted. Devices of this type for use in initial experiments were fabricated on a thermally oxidized Si wafer, onto which Nb was sputter-deposited for use as the pull-electrode layer. Nb was chosen because its refractory nature would enable it to withstand the chemical and thermal conditions to be subsequently imposed for growing carbon nanotubes. A 200-nm-thick layer of SiO2 was formed on top of the Nb layer by plasma

  4. A fast quantum mechanics based contour extraction algorithm

    NASA Astrophysics Data System (ADS)

    Lan, Tian; Sun, Yangguang; Ding, Mingyue

    2009-02-01

    A fast algorithm was proposed to decrease the computational cost of the contour extraction approach based on quantum mechanics. The contour extraction approach based on quantum mechanics is a novel method recently proposed by us, which will be presented at the same conference in another paper of ours, titled "A statistical approach to contour extraction based on quantum mechanics". In our approach, contour extraction is modeled as the locus of a moving particle described by quantum mechanics, obtained as the most probable locus of the particle simulated over a large number of iterations. In quantum mechanics, the probability that a particle appears at a point is equivalent to the square amplitude of the wave function. Furthermore, the expression of the wave function can be derived from digital images, making the probability of the locus of a particle available. We employed the Markov Chain Monte Carlo (MCMC) method to estimate the square amplitude of the wave function. Finally, our fast quantum-mechanics-based contour extraction algorithm (referred to as our fast algorithm hereafter) was evaluated on a number of different images, including synthetic and medical images. It was demonstrated that our fast algorithm achieves significant improvements in accuracy and robustness compared with well-known state-of-the-art contour extraction techniques, and a dramatic reduction in time complexity compared with the statistical approach to contour extraction based on quantum mechanics.

  5. Highly reliable data layout schemes for very large scale storage systems

    NASA Astrophysics Data System (ADS)

    Luo, Dongjian; Zhong, Haifeng; Wu, Wei

    2009-08-01

    In this paper, we investigate data layout schemes and their impact on system reliability in a petabyte-scale storage system built from thousands of Object-Based Storage Devices (OBSDs). We delve into two underlying data layout schemes: RAID 5 and RAID 5 mirroring. To accelerate data reconstruction, Fast Mirroring Copy is employed, in which the reconstructed objects are stored on different OBSDs throughout the system. To improve system reliability, a SMART Reliability Mechanism (SRM) is introduced for enhancing reliability in very large-scale storage systems. Analysis results show that these schemes can assure the reliability of data storage and efficiently utilize disk resources while exerting minimal impact on overall system performance.

  6. Toward new design-rule-check of silicon photonics for automated layout physical verifications

    NASA Astrophysics Data System (ADS)

    Ismail, Mohamed; El Shamy, Raghi S.; Madkour, Kareem; Hammouda, Sherif; Swillam, Mohamed A.

    2015-02-01

    A simple analytical model is developed to estimate the power loss and time delay in photonic integrated circuits fabricated on standard SOI wafers. The model is simple enough to be utilized in physical verification of the circuit layout, verifying its feasibility for fabrication under given foundry specifications, and it allows new design rules to be provided for the layout physical verification process in any electronic design automation (EDA) tool. The model is accurate compared with a finite-element-based full-wave electromagnetic (EM) solver. Because the model is closed-form, it circumvents the need for any EM solver in the verification process; as such, it dramatically reduces the verification time and allows fast design rule checking.

  7. Basic concepts underlying fast-neutron-based contraband interrogation technology

    SciTech Connect

    Fink, C.L.; Guenther, P.T.; Smith, D.L.

    1992-01-01

    All accelerator-based fast-neutron contraband interrogation systems have many closely interrelated subsystems, whose performance parameters will be critically interdependent. For optimal overall performance, a systems analysis design approach is required. This paper provides a general overview of the interrelationships and the tradeoffs to be considered for optimization of nonaccelerator subsystems.

  8. Device Independent Layout and Style Editing Using Multi-Level Style Sheets

    NASA Astrophysics Data System (ADS)

    Dees, Walter

    This paper describes a layout and styling framework that is based on the multi-level style sheets approach. It shows some of the techniques that can be used to add layout and style information to a UI in a device-independent manner, and how to reuse that layout and style information to create user interfaces for different devices.

  9. Facility Layout Problems Using Bays: A Survey

    NASA Astrophysics Data System (ADS)

    Davoudpour, Hamid; Jaafari, Amir Ardestani; Farahani, Leila Najafabadi

    2010-06-01

    Layout design is one of the most important activities performed by industrial engineers. Most of these problems are NP-hard. In a basic layout design, each cell is represented by a rectilinear, but not necessarily convex, polygon. The set of fully packed adjacent polygons is known as a block layout (Asef-Vaziri and Laporte 2007). Block layouts are divided into slicing-tree and bay layouts. In a bay layout, departments are located in vertical columns or horizontal rows called bays. Bay layouts are used in real-world settings, especially in contexts such as semiconductor fabrication and aisle design. There are several reviews of facility layout; however, none of them focus on bay layout. The literature analysis given here is not limited to specific considerations about bay layout design. We present a state-of-the-art review of bay layout, considering issues such as the objectives used, solution techniques, and integration methods.

  10. FastME 2.0: A Comprehensive, Accurate, and Fast Distance-Based Phylogeny Inference Program.

    PubMed

    Lefort, Vincent; Desper, Richard; Gascuel, Olivier

    2015-10-01

    FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/).

  11. FastME 2.0: A Comprehensive, Accurate, and Fast Distance-Based Phylogeny Inference Program

    PubMed Central

    Lefort, Vincent; Desper, Richard; Gascuel, Olivier

    2015-01-01

    FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/). PMID:26130081

  13. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    PubMed

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  14. Compact, fiber-based, fast-light enhanced optical gyroscope

    NASA Astrophysics Data System (ADS)

    Christensen, Caleb A.; Zavriyev, Anton; Bashkansky, Mark; Beal, A. Craig

    2013-05-01

    It has been proposed that fast-light optical phenomena can increase the sensitivity of a Ring Laser Gyroscope (RLG) of a given size by several orders of magnitude. MagiQ is developing a compact, fully fibered fast-light RLG using Stimulated Brillouin Scattering (SBS) in commercial optical fiber. We will discuss our experimental results on SBS-pumped lasing in commercial fibers and analyze their implications for fast-light generation. Based on these results, we envision a fast-light enhanced RLG that will use only a few meters of fiber and require moderate pump power (only a few hundred milliwatts). We will present a design based on proven, commercially available technologies. By using photonic integrated circuits and telecom-grade fiber components, we created a design that is appropriate for mass production in the near term. We eliminated all free-space optical elements (such as atomic vapor cells) in order to enable a compact, high-sensitivity RLG stable against environmental disturbances. Results of this effort will benefit existing applications of RLGs (such as inertial navigation units, gyrocompasses, and stabilization techniques) and will allow wider use of RLGs in spacecraft, unmanned aerial vehicles, and sensors, where the current size and weight of optical gyros are prohibitive.

  15. GEM-based detectors for thermal and fast neutrons

    NASA Astrophysics Data System (ADS)

    Croci, G.; Claps, G.; Cazzaniga, C.; Foggetta, L.; Muraro, A.; Valente, P.

    2015-06-01

    Lately, the problem of 3He replacement for neutron detection has stimulated intense research activity on technologies based on alternative neutron converters. This paper briefly presents the results obtained with new GEM detectors optimized for fast and thermal neutrons. For thermal neutrons, we realized a side-on GEM detector based on a series of boron-coated alumina sheets placed perpendicular to the incident neutron beam direction. This prototype has been tested at the n@BTF photo-production neutron facility in order to assess its effectiveness under a very high-flux gamma background. For fast neutrons, we developed new GEM detectors (called nGEM) for the CNESM diagnostic system of the SPIDER NBI prototype for ITER (RFX Consortium, Italy) and as beam monitors for fast-neutron lines at spallation sources. The nGEM is a triple-GEM gaseous detector equipped with a polyethylene layer used to convert fast neutrons into recoil protons through elastic scattering. This paper describes the results obtained by testing a medium-size (30 × 25 cm2 active area) nGEM detector at the ISIS spallation source on the VESUVIO beam line.

  16. Matrix-Vector Based Fast Fourier Transformations on SDR Architectures

    NASA Astrophysics Data System (ADS)

    He, Y.; Hueske, K.; Götze, J.; Coersmeier, E.

    2008-05-01

    Today Discrete Fourier Transforms (DFTs) are applied in various radio standards based on OFDM (Orthogonal Frequency Division Multiplex). It is important to gain a fast computational speed for the DFT, which is usually achieved by using specialized Fast Fourier Transform (FFT) engines. However, in face of the Software Defined Radio (SDR) development, more general (parallel) processor architectures are often desirable, which are not tailored to FFT computations. Therefore, alternative approaches are required to reduce the complexity of the DFT. Starting from a matrix-vector based description of the FFT idea, we will present different factorizations of the DFT matrix, which allow a reduction of the complexity that lies between the original DFT and the minimum FFT complexity. The computational complexities of these factorizations and their suitability for implementation on different processor architectures are investigated.
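    The radix-2 idea behind such factorizations, splitting the DFT matrix into two half-size DFTs plus diagonal twiddle factors, can be sketched and checked against the full matrix-vector product. This is a generic illustration of the complexity reduction, not the specific factorizations proposed in the paper:

```python
import numpy as np

def dft_matrix(n):
    """Full n x n DFT matrix: applying it directly costs O(n^2)."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def fft_radix2(v):
    """Recursive radix-2 factorization (n must be a power of two):
    F_n v = combine(F_{n/2} v_even, twiddles * F_{n/2} v_odd),
    reducing the cost from O(n^2) to O(n log n)."""
    n = len(v)
    if n == 1:
        return v.astype(complex)
    even = fft_radix2(v[0::2])
    odd = fft_radix2(v[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
    return np.concatenate([even + tw, even - tw])

v = np.array([1.0, 2.0, 3.0, 4.0])
# dft_matrix(4) @ v, fft_radix2(v), and np.fft.fft(v) all agree.
```

    Partial factorizations, stopping the recursion early and applying small dense DFT blocks, give the intermediate complexity/parallelism trade-offs that the paper explores for SDR processor architectures.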

  17. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  18. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is greater than 50 frames/s. PMID:22969397
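    The core mean-shift iteration underlying such trackers can be sketched as a one-dimensional, flat-kernel toy (this is the generic mode-seeking step, not the paper's tracking system; the sample data are illustrative):

```python
import numpy as np

def mean_shift_1d(samples, start, bandwidth, max_iter=100):
    """Flat-kernel mean shift: repeatedly move the estimate to the mean
    of the samples falling within `bandwidth`, converging to a local
    density mode. In a tracker, the 'samples' are pixel locations
    weighted by target-model similarity."""
    x = float(start)
    for _ in range(max_iter):
        window = samples[np.abs(samples - x) <= bandwidth]
        if window.size == 0:
            break
        shifted = window.mean()
        if abs(shifted - x) < 1e-9:   # converged
            break
        x = shifted
    return x

samples = np.array([4.8, 4.9, 5.0, 5.1, 5.2, 9.0])
mode = mean_shift_1d(samples, start=4.5, bandwidth=1.0)
```

    Because each frame's search starts from the previous frame's result and converges in a handful of iterations, the method is fast enough for real-time tracking.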

  19. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional image information into one dimension, then matches and identifies targets through one-dimensional correlation. Because normalization is applied, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
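The core idea — replacing a 2-D template scan with two normalized 1-D projection correlations — can be sketched as follows (a toy illustration with hypothetical names, not the authors' code):

```python
import numpy as np

def project_match(image, template):
    # Collapse both images to normalized 1-D row/column projections and
    # locate the template with two 1-D correlations instead of a 2-D scan.
    def proj(a, axis):
        p = a.sum(axis=axis).astype(float)
        return (p - p.mean()) / (p.std() + 1e-12)  # normalization step

    def best_offset(sig, tpl):
        m = len(tpl)
        scores = [float(np.dot(sig[i:i + m], tpl))
                  for i in range(len(sig) - m + 1)]
        return int(np.argmax(scores))

    row = best_offset(proj(image, 1), proj(template, 1))  # vertical offset
    col = best_offset(proj(image, 0), proj(template, 0))  # horizontal offset
    return row, col

# Embed a peaked 5x7 template at (10, 20) in an empty image.
tpl = np.outer([1, 2, 10, 2, 1], [1, 1, 2, 10, 2, 1, 1]).astype(float)
img = np.zeros((40, 40))
img[10:15, 20:27] = tpl
print(project_match(img, tpl))  # -> (10, 20)
```

The two 1-D searches cost O(W + H) correlations instead of O(W·H) window comparisons, which is where the claimed speedup comes from; the zero-mean normalization makes the score invariant to proportional brightness changes.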

  1. Library API for Z-Order Memory Layout

    SciTech Connect

    Bethel, E. Wes

    2015-02-01

    This library provides a simple-to-use API for implementing an alternative to the traditional row-major in-memory layout, one based on a Morton-order space-filling curve (SFC), specifically a Z-order variant of the Morton curve. The library enables programmers, after a simple initialization step, to convert a multidimensional array from row-major to Z-order layout, then use a single, generic API call to access data at any arbitrary (i,j,k) location within the array, whether it be stored in row-major or Z-order format. The motivation for using an SFC in-memory layout is improved spatial locality, which results in increased use of local high-speed cache memory. The basic idea is that with a row-major layout, a data access to some location that is nearby in index space is likely far away in physical memory, resulting in poor spatial locality and slow runtime. With an SFC-based layout, on the other hand, accesses that are nearby in index space are much more likely to also be nearby in physical memory, resulting in much better spatial locality and better runtime performance. Numerous studies over the years have shown that significant runtime performance gains, sometimes as much as 50%, are realized by using an SFC-based memory layout compared to a row-major layout, resulting from the better use of the memory and cache hierarchy attendant with an SFC-based layout (see, for example, [Beth2012]). This library implementation is intended for use with codes that work with structured, array-based data in 2 or 3 dimensions. It is not appropriate for use with unstructured or point-based data.
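The Z-order mapping at the heart of such a library can be illustrated with the classic bit-interleaving trick (our own sketch; this is not the library's actual API):

```python
def part1by1(v):
    # Spread the low 16 bits of v apart: b15..b0 -> b15 0 b14 0 ... b0.
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton2(i, j):
    # Z-order (Morton) index: interleave the bits of the two coordinates,
    # so that nearby (i, j) pairs usually get nearby linear addresses.
    return (part1by1(i) << 1) | part1by1(j)

assert morton2(0, 0) == 0
assert morton2(0, 1) == 1 and morton2(1, 0) == 2
print(morton2(3, 5))  # -> 27
```

A Z-order array stores element (i, j) at linear offset `morton2(i, j)` instead of `i * ncols + j`; the interleaving is what keeps index-space neighbors close in physical memory.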

  2. Urban drain layout optimization using PBIL algorithm

    NASA Astrophysics Data System (ADS)

    Wan, Shanshan; Hao, Ying; Qiu, Dongwei; Zhao, Xu

    2008-10-01

    Strengthening environmental protection is one of the basic national policies of China. The optimization of urban drain layout plays an important role in protecting the water ecosystem and the urban environment. This paper puts forward a method to properly locate urban drains using the population-based incremental learning (PBIL) algorithm. The main factors, such as regional sewage-containing capacity, sewage disposal capacity, and the quantity limit of drains within a specific area, are considered as constraint conditions. The analytic hierarchy process is used to obtain the weight of each factor, and a spatial analysis of environmental influencing factors is carried out based on GIS. A penalty function method is used to model the problem, with an objective function that guarantees economic benefit. The algorithm is applied to the drain layout engineering of Nansha District, Guangzhou City, China. The drain layout obtained through the PBIL algorithm outperforms the traditional method; it protects the urban environment more efficiently and ensures the healthy development of the water ecosystem more successfully. The results also show that the PBIL algorithm, because of its robust performance and stability, is a good method for solving this problem, supplying strong technological support for the sustainable development of the environment.
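PBIL itself is compact enough to sketch: it keeps a probability vector rather than a population, samples candidate bitstrings from it, and nudges the vector toward the best candidate each generation. The fitness below is OneMax (count of 1 bits), a toy stand-in for the paper's penalty-function drain-layout objective:

```python
import random

def pbil(n_bits=16, pop=30, rate=0.1, gens=120, seed=1):
    # PBIL: sample `pop` candidates from the probability vector p,
    # pick the fittest, and move p toward it by the learning rate.
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(gens):
        cands = [[1 if rng.random() < pi else 0 for pi in p]
                 for _ in range(pop)]
        best = max(cands, key=sum)          # OneMax fitness: count of 1s
        p = [(1 - rate) * pi + rate * bi for pi, bi in zip(p, best)]
    return [round(pi) for pi in p]

print(pbil())  # typically converges to the all-ones string
```

For drain layout, each bit would encode whether a candidate site receives a drain, and the fitness would combine economic benefit with penalty terms for violated capacity constraints.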

  3. Electrical studies on silver based fast ion conducting glassy materials

    SciTech Connect

    Rao, B. Appa Kumar, E. Ramesh Kumari, K. Rajani Bhikshamaiah, G.

    2014-04-24

    Among the available fast ion conductors, silver-based glasses exhibit high conductivity. Further, glasses containing silver iodide show enhanced fast ion conducting behavior at room temperature. Glasses of various compositions of silver-based fast ion conductors in the AgI−Ag{sub 2}O−[(1−x)B{sub 2}O{sub 3}−xTeO{sub 2}] (x = 0 to 1 mol% in steps of 0.2) glassy system have been prepared by the melt quenching method. The glassy nature of the compounds has been confirmed by X-ray diffraction. The electrical conductivity (AC) measurements were carried out in the frequency range 1 kHz–3 MHz with an impedance analyzer in the temperature range 303–423 K. The DC conductivity measurements were carried out in the temperature range 300–523 K. Both AC and DC conductivity studies show that the conductivity increases and the activation energy decreases with increasing TeO{sub 2} concentration as well as with temperature. The conductivity of the present glass system is found to be of the order of 10{sup −2} S/cm at room temperature. The ionic transport number of these glasses is found to be 0.999, indicating that these glasses can be used as electrolytes in batteries.

  4. Learning Layouts for Single-Page Graphic Designs.

    PubMed

    O'Donovan, Peter; Agarwala, Aseem; Hertzmann, Aaron

    2014-08-01

    This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers. PMID:26357371

  5. Learning Layouts for Single-Page Graphic Designs.

    PubMed

    O'Donovan, Peter; Agarwala, Aseem; Hertzmann, Aaron

    2014-08-01

    This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.

  6. Biosensor-based fragment screening using FastStep injections.

    PubMed

    Rich, Rebecca L; Quinn, John G; Morton, Tom; Stepp, J David; Myszka, David G

    2010-12-15

    We have developed a novel analyte injection method for the SensiQ Pioneer surface plasmon resonance-based biosensor referred to as "FastStep." By merging buffer and sample streams immediately prior to the reaction flow cells, the instrument is capable of automatically generating a two- or threefold dilution series (of seven or five concentrations, respectively) from a single analyte sample. Using sucrose injections, we demonstrate that the production of each concentration within the step gradient is highly reproducible. For kinetic studies, we developed analysis software that utilizes the sucrose responses to automatically define the concentration of analyte at any point during the association phase. To validate this new approach, we compared the results of standard and FastStep injections for ADP binding to a target kinase and a panel of compounds binding to carbonic anhydrase II. Finally, we illustrate how FastStep can be used in a primary screening mode to obtain a full concentration series of each compound in a fragment library.

  7. Biosensor-based fragment screening using FastStep injections

    PubMed Central

    Rich, Rebecca L.; Quinn, John G.; Morton, Tom; Stepp, J. David; Myszka, David G.

    2010-01-01

    We have developed a novel analyte injection method for the SensiQ Pioneer surface plasmon resonance-based biosensor referred to as ‘FastStep™’. By merging buffer and sample streams immediately prior to the reaction flow cells, the instrument is capable of automatically generating a two- or three-fold dilution series (of seven or five concentrations, respectively) from a single analyte sample. Using sucrose injections, we demonstrate that the production of each concentration within the step gradient is highly reproducible. For kinetic studies, we developed analysis software that utilizes the sucrose responses to automatically define the concentration of analyte at any point during the association phase. To validate this new approach, we compared the results of standard and FastStep injections for ADP binding to a target kinase and a panel of compounds binding to carbonic anhydrase II. Finally, we illustrate how FastStep can be used in a primary screening mode to obtain a full concentration series of each compound in a fragment library. PMID:20800052

  8. The Aurora Project: A new sail layout

    NASA Astrophysics Data System (ADS)

    Genta, Giancarlo; Brusa, Eugenio

    1999-05-01

    The Aurora spacecraft is a scientific probe propelled by a "fast" solar sail whose first goal is to perform a technology assessment mission. The main characteristic of the sail is its low mass, which implies the absence of a plastic backing for the aluminum film and the lightness of the whole structure. In previous structural studies the limiting factor has been shown to be the elastic stability of a number of structural members subject to compressive loads. An alternative structural layout is suggested here: an inflatable beam, which remains pressurized after deployment, relieves all compressive stresses, allowing a very simple configuration and a straightforward deployment procedure. However, as the mission profile requires a trajectory passing close to the Sun, a configuration different from the 'parachute' sail proposed in another paper must be used.

  9. Cache-oblivious mesh layouts

    SciTech Connect

    Yoon, Sung-Eui; Lindstrom, Peter; Pascucci, Valerio; Manocha, Dinesh

    2005-07-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.

  10. Fast Waves at the Base of the Cochlea.

    PubMed

    Recio-Spinoso, Alberto; Rhode, William S

    2015-01-01

    Georg von Békésy observed that the onset times of responses to brief-duration stimuli vary as a function of distance from the stapes, with basal regions starting to move earlier than apical ones. He noticed that the speed of signal propagation along the cochlea is slow when compared with the speed of sound in water. Fast traveling waves have been recorded in the cochlea, but their existence is interpreted as the result of an experimental artifact. Accounts of the timing of vibration onsets at the base of the cochlea generally agree with Békésy's results. Some authors, however, have argued that the measured delays are too short for consistency with Békésy's theory. To investigate the speed of the traveling wave at the base of the cochlea, we analyzed basilar membrane (BM) responses to clicks recorded at several locations in the base of the chinchilla cochlea. The initial component of the BM response matches remarkably well the initial component of the stapes response, after a 4-μs delay of the latter. A similar conclusion is reached by analyzing onset times of time-domain gain functions, which correspond to BM click responses normalized by middle-ear input. Our results suggest that BM responses to clicks arise from a combination of fast and slow traveling waves. PMID:26062000

  11. Fast Waves at the Base of the Cochlea.

    PubMed

    Recio-Spinoso, Alberto; Rhode, William S

    2015-01-01

    Georg von Békésy observed that the onset times of responses to brief-duration stimuli vary as a function of distance from the stapes, with basal regions starting to move earlier than apical ones. He noticed that the speed of signal propagation along the cochlea is slow when compared with the speed of sound in water. Fast traveling waves have been recorded in the cochlea, but their existence is interpreted as the result of an experimental artifact. Accounts of the timing of vibration onsets at the base of the cochlea generally agree with Békésy's results. Some authors, however, have argued that the measured delays are too short for consistency with Békésy's theory. To investigate the speed of the traveling wave at the base of the cochlea, we analyzed basilar membrane (BM) responses to clicks recorded at several locations in the base of the chinchilla cochlea. The initial component of the BM response matches remarkably well the initial component of the stapes response, after a 4-μs delay of the latter. A similar conclusion is reached by analyzing onset times of time-domain gain functions, which correspond to BM click responses normalized by middle-ear input. Our results suggest that BM responses to clicks arise from a combination of fast and slow traveling waves.

  12. Parameter tuning for the NFFT based fast Ewald summation

    NASA Astrophysics Data System (ADS)

    Nestler, Franziska

    2016-07-01

    The computation of the Coulomb potentials and forces in charged particle systems under 3d-periodic boundary conditions is possible in an efficient way by utilizing the Ewald summation formulas and applying the fast Fourier transform (FFT). In this paper we consider the particle-particle NFFT (P²NFFT) approach, which is based on the fast Fourier transform for nonequispaced data (NFFT), and compare the error behavior of different window functions, which are used to approximate the given continuous charge distribution by a mesh-based charge density. Typically B-splines are applied in the scope of particle mesh methods, as for instance within the well-known particle-particle particle-mesh (P³M) algorithm. The publicly available P²NFFT algorithm allows the application of an oversampled FFT as well as the usage of different window functions. We consider for the first time an approximation by Bessel functions and show how the resulting root mean square errors in the forces can be predicted precisely and efficiently. The results show that, if the parameters are tuned appropriately, the Bessel window function is in many cases the better choice in terms of computational costs. Moreover, the results indicate that it is often advantageous in terms of efficiency to spend some oversampling within the NFFT while using a window function with a smaller support.

  13. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagram, Lyapunov exponent spectrum and complexity. The map shows good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed in which the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical analysis, differential, brute-force, known-plaintext and chosen-plaintext attacks.
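The confusion-plus-diffusion pattern the abstract describes can be sketched with a much simpler chaotic source — the one-dimensional logistic map standing in for the paper's 2D-SIMM, and a sort-based permutation standing in for its chaotic shift transform (all names and parameters below are our own illustrative choices):

```python
import numpy as np

def logistic_stream(x0, r, n):
    # Iterate the logistic map x -> r*x*(1-x); quantize each state
    # to a key byte and also return the raw states.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8), xs

def encrypt(img, x0=0.3141, r=3.9999):
    # Confusion: permute pixels by the sort order of the chaotic
    # sequence. Diffusion: XOR with the quantized keystream.
    flat = img.flatten()
    key, xs = logistic_stream(x0, r, flat.size)
    perm = np.argsort(xs)
    return flat[perm] ^ key

def decrypt(cipher, x0=0.3141, r=3.9999):
    # Regenerate the same keystream and permutation from the key
    # parameters, then undo XOR and the pixel shuffle.
    key, xs = logistic_stream(x0, r, cipher.size)
    perm = np.argsort(xs)
    flat = np.empty_like(cipher)
    flat[perm] = cipher ^ key
    return flat

img = np.arange(64, dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(decrypt(cipher), img)
```

The key parameters (x0, r) play the role of the secret key; sensitivity of the chaotic orbit to these values is what makes brute-force and known-plaintext attacks hard in schemes of this family.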

  14. Fast model-based estimation of ancestry in unrelated individuals.

    PubMed

    Alexander, David H; Novembre, John; Lange, Kenneth

    2009-09-01

    Population stratification has long been recognized as a confounding factor in genetic association studies. Estimated ancestries, derived from multi-locus genotype data, can be used to perform a statistical correction for population stratification. One popular technique for estimation of ancestry is the model-based approach embodied by the widely applied program structure. Another approach, implemented in the program EIGENSTRAT, relies on Principal Component Analysis rather than model-based estimation and does not directly deliver admixture fractions. EIGENSTRAT has gained in popularity in part owing to its remarkable speed in comparison to structure. We present a new algorithm and a program, ADMIXTURE, for model-based estimation of ancestry in unrelated individuals. ADMIXTURE adopts the likelihood model embedded in structure. However, ADMIXTURE runs considerably faster, solving problems in minutes that take structure hours. In many of our experiments, we have found that ADMIXTURE is almost as fast as EIGENSTRAT. The runtime improvements of ADMIXTURE rely on a fast block relaxation scheme using sequential quadratic programming for block updates, coupled with a novel quasi-Newton acceleration of convergence. Our algorithm also runs faster and with greater accuracy than the implementation of an Expectation-Maximization (EM) algorithm incorporated in the program FRAPPE. Our simulations show that ADMIXTURE's maximum likelihood estimates of the underlying admixture coefficients and ancestral allele frequencies are as accurate as structure's Bayesian estimates. On real-world data sets, ADMIXTURE's estimates are directly comparable to those from structure and EIGENSTRAT. Taken together, our results show that ADMIXTURE's computational speed opens up the possibility of using a much larger set of markers in model-based ancestry estimation and that its estimates are suitable for use in correcting for population stratification in association studies.

  15. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key points of this calibration are that the norm of the accelerometer measurement vector equals the gravity magnitude and that the norm of the gyro measurement vector equals the rotational velocity input. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied to a mathematical error model of the novel calibration, and all parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method, which also saves time compared with the traditional method. PMID:25177801
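The accelerometer half of this idea is easy to sketch: in any static pose the calibrated measurement vector must have norm equal to gravity, so scale and bias parameters can be found by a derivative-free Powell search over that residual. The sketch below uses SciPy's Powell implementation on synthetic data (the error model, dataset size, and parameter values are our own assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

g = 9.81
rng = np.random.default_rng(0)
# Synthetic static poses: gravity observed along random directions,
# distorted by a known per-axis scale and bias to be recovered.
dirs = rng.normal(size=(60, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_scale = np.array([1.05, 0.97, 1.02])
true_bias = np.array([0.10, -0.05, 0.08])
meas = dirs * g * true_scale + true_bias

def cost(p):
    # Residual: every calibrated measurement vector should have norm g.
    scale, bias = p[:3], p[3:]
    cal = (meas - bias) / scale
    return np.sum((np.linalg.norm(cal, axis=1) - g) ** 2)

res = minimize(cost, np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]),
               method="Powell")
print(np.round(res.x, 3))  # should land near true_scale and true_bias
```

No rate table or precision turntable is needed — only a handful of static poses — which is what makes this style of calibration suitable for the field.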

  16. EM-Based Multiuser Detection in Fast Fading Multipath Environments

    NASA Astrophysics Data System (ADS)

    Borran, Mohammad Jaber; Aazhang, Behnaam

    2002-12-01

    We address the problem of multiuser detection in fast fading multipath environments for DS-CDMA systems. In fast fading scenarios, temporal variations of the channel cause significant performance degradation even with the Rake receiver. We use a previously introduced time-frequency (TF) Rake receiver based on a canonical formulation of the channel and signals to simultaneously combat fading and multipath effects. This receiver uses the Doppler spread caused by rapid time-varying channel as another means of diversity. In dealing with multiaccess interference and as an attempt to avoid the prohibitive computational complexity of the optimum maximum-likelihood (ML) detector, we use the expectation maximization (EM) algorithm to derive an approximate ML detector. The new detector turns out to have an iterative structure very similar to the well-known multistage detector with some extra parameters. At the two extreme values of these parameters, the EM detector reduces to either one-shot TF Rake or generalized multistage detector. For the intermediate values of the parameters, it combines the two estimates to obtain a better decision for the bits of the users. Because of using the EM algorithm, this detector has better convergence properties than the multistage detector; the bit estimates always converge, and if an appropriate initial vector is used, they converge to the global maximizer of the likelihood function. As a result, the new detector provides significantly improved performance while maintaining the low complexity of the multistage detector. Our simulation results confirm the expected performance improvements compared to the base case of the TF Rake as well as the multistage detector used with the TF Rake.

  17. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate the camera motion between the consecutive frames. Unlike previous methods, our algorithm does not need any preprocess step for computing the dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporal coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy, while being much faster than the state-of-the-art competing methods.

  18. An online planning tool for designing terrace layouts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A web-based conservation planning tool, WebTERLOC (web-based Terrace Location Program), was developed to provide multiple terrace layout options using digital elevation model (DEM) and geographic information systems (GIS). Development of a terrace system is complicated by the time-intensive manual ...

  19. A flexible layout design method for passive micromixers.

    PubMed

    Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui

    2012-10-01

    This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatial periodic design, and the effects of the Péclet and Reynolds numbers on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer. PMID:22736305

  20. Biased Randomized Algorithm for Fast Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vartan, Farrokh

    2005-01-01

    A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding article "Fast Algorithms for Model-Based Diagnosis" and "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appears elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis - the task of finding the faulty components - reduces to finding the components, the abnormalities of which could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow and yields a lower bound on the size of the

  1. [Fast discrimination of edible vegetable oil based on Raman spectroscopy].

    PubMed

    Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng

    2012-07-01

    A novel method to rapidly discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils of known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the degree of unsaturation of vegetable oil were selected as feature vectors, and the centers of all classes were calculated. For an edible vegetable oil of unknown class, the same pretreatment and feature extraction methods were used. The Euclidean distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the between-class distance was much larger with the new feature extraction method than with PCA. The classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
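The classification scheme described — class centers from training features, then minimum Euclidean distance for unknowns — is a nearest-centroid classifier and fits in a few lines. The 2-D "peak intensity" features and oil names below are invented for illustration:

```python
import numpy as np

def train_centroids(features, labels):
    # Class center = mean feature vector of the training samples per class.
    return {c: features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def classify(centers, x):
    # Assign the unknown sample to the class whose center is nearest
    # in Euclidean distance, as in the paper's scheme.
    return min(centers, key=lambda c: np.linalg.norm(x - centers[c]))

# Toy 2-D characteristic-peak features for three hypothetical oil classes.
feats = np.array([[1.00, 0.20], [1.10, 0.25], [0.40, 0.90],
                  [0.45, 0.95], [0.70, 0.60], [0.72, 0.58]])
labels = np.array(["olive", "olive", "soybean",
                   "soybean", "peanut", "peanut"])
centers = train_centroids(feats, labels)
print(classify(centers, np.array([1.05, 0.22])))  # -> olive
```

Because only two peak features are used per spectrum, both training and classification are essentially instantaneous, which is what makes the method "fast".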

  2. DUK - A Fast and Efficient Kmer Based Sequence Matching Tool

    SciTech Connect

    Li, Mingkun; Copeland, Alex; Han, James

    2011-03-21

    A new tool, DUK, has been developed to perform the matching task: determining whether a query sequence partially or totally matches given reference sequences. Matching is similar to alignment, and indeed many traditional analysis tasks like contaminant removal use alignment tools. For matching, however, there is no need to know which bases of a query sequence match which positions of a reference sequence; it is only necessary to know whether a match exists. This subtle difference can make matching much faster than alignment. DUK is accurate, versatile, fast, and memory efficient. It uses a k-mer hashing method to index reference sequences and a Poisson model to calculate p-values. DUK is carefully implemented in C++ with an object-oriented design, and the resulting classes can also be used to develop other tools quickly. DUK has been widely used at JGI for a wide range of applications such as contaminant removal, organelle genome separation, and assembly refinement. Many real applications and simulated datasets demonstrate its power.
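The speed argument is easy to see in miniature: index the reference's k-mers once, then answer "does this query match?" with set lookups, never computing alignment coordinates. A toy sketch in the same spirit as DUK's k-mer hashing (not DUK's actual implementation; the hit threshold is our own simplification of its Poisson model):

```python
def kmer_set(seq, k):
    # Index the reference by the set of all its k-mers.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def matches(query, ref_index, k, min_hits=1):
    # A query "matches" if enough of its k-mers occur in the reference;
    # no per-base alignment is computed, which keeps this fast.
    hits = sum(query[i:i + k] in ref_index
               for i in range(len(query) - k + 1))
    return hits >= min_hits

ref = "ACGTACGTTGCA"
idx = kmer_set(ref, 4)
assert matches("TACGTT", idx, 4)       # shares k-mers with the reference
assert not matches("GGGGGG", idx, 4)   # no shared k-mers
```

For contaminant removal, every read flagged by `matches` against the contaminant index would simply be discarded — the exact match positions are never needed.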

  3. Fast recognition of musical sounds based on timbre.

    PubMed

    Agus, Trevor R; Suied, Clara; Thorpe, Simon J; Pressnitzer, Daniel

    2012-05-01

    Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice-versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources. PMID:22559384

  4. Fast CEUS image segmentation based on self organizing maps

    NASA Astrophysics Data System (ADS)

    Paire, Julie; Sauvage, Vincent; Albouy-Kissi, Adelaïde; Ladam Marcus, Viviane; Marcus, Claude; Hoeffel, Christine

    2014-03-01

    Contrast-enhanced ultrasound (CEUS) has recently become an important technology for lesion detection and characterization. CEUS is used to investigate perfusion kinetics in tissue over time, which relates to tissue vascularization. In this paper, we present an interactive segmentation method based on neural networks that enables malignant tissue to be segmented over CEUS sequences. We use Self-Organizing Maps (SOM), an unsupervised neural network, to project high-dimensional data onto a low-dimensional space called a map of neurons. The algorithm gathers observations into clusters while respecting the topology of the observation space, meaning that a notion of neighborhood between classes is defined: adjacent observations in variable space belong to the same class or to related classes after classification. Thanks to this neighborhood-preservation property, combined with suitable feature extraction, the map provides a user-friendly segmentation tool that assists the expert in tumor segmentation with fast and easy intervention. We implement the SOM on a Graphics Processing Unit (GPU) to accelerate processing. This allows a greater number of iterations, so the learning process converges more precisely; we obtain better learning quality and thus better classification. Our approach allows us to identify and delineate lesions accurately. Our results show that this method markedly improves the recognition of liver lesions and opens the way for future precise quantification of contrast enhancement.
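A minimal SOM training loop makes the neighborhood-preservation idea concrete. This is a generic textbook sketch, not the paper's GPU implementation: the grid size, decay schedules, and the 2-D "perfusion" features are all hypothetical.

```python
import math
import random

def train_som(samples, rows=2, cols=2, iters=100, seed=0):
    """Classic SOM update: pull the best-matching unit (and, within a
    shrinking neighborhood, its grid neighbors) toward each sample."""
    rng = random.Random(seed)
    dim = len(samples[0])
    units = {(r, c): [rng.random() for _ in range(dim)]
             for r in range(rows) for c in range(cols)}
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)        # decaying learning rate
        radius = 1.0 * (1 - t / iters)    # shrinking neighborhood radius
        x = rng.choice(samples)
        bmu = min(units, key=lambda u: math.dist(units[u], x))
        for u, w in units.items():
            d = math.dist(u, bmu)         # distance on the neuron grid
            if d <= radius + 1e-9:
                h = math.exp(-d * d)      # neighborhood weighting
                units[u] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return units

def label(x, units):
    """Map a pixel's feature vector to its best-matching neuron."""
    return min(units, key=lambda u: math.dist(units[u], x))

# Hypothetical per-pixel features (e.g., peak enhancement, time-to-peak).
samples = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.85), (0.95, 0.9)]
units = train_som(samples)
print(label((0.12, 0.08), units), label((0.92, 0.88), units))
```

Because the neighborhood update moves grid-adjacent neurons together, nearby neurons end up coding similar feature values, which is the topology preservation the abstract relies on for user-friendly segmentation.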

  5. Impact of data layouts on the efficiency of GPU-accelerated IDW interpolation.

    PubMed

    Mei, Gang; Tian, Hong

    2016-01-01

    This paper evaluates the impact of different data layouts on the computational efficiency of a GPU-accelerated Inverse Distance Weighting (IDW) interpolation algorithm. First we redesign and improve our previous GPU implementation, which exploited CUDA dynamic parallelism (CDP). We then develop three GPU implementations, i.e., a naive version, a tiled version, and an improved CDP version, on top of five data layouts: the Structure of Arrays (SoA), the Array of Structures (AoS), the Array of aligned Structures (AoaS), the Structure of Arrays of aligned Structures (SoAoS), and a hybrid layout. We also carry out several groups of experimental tests to evaluate the impact. The results show that the layouts AoS and AoaS achieve better performance than SoA for both the naive and tiled versions, while SoA is the best choice for the improved CDP version. We also observe that, for the two combined data layouts (SoAoS and Hybrid), there are no notable performance gains over the three basic layouts. For practical applications we recommend the AoaS layout, since the tiled version is the fastest of the three. The source code of all implementations is publicly available.
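The AoS/SoA distinction the paper benchmarks can be illustrated in plain Python, even though the study itself targets CUDA. This is only a sketch of the two memory layouts; the buffer contents and the per-field reduction are hypothetical.

```python
from array import array

# Array of Structures (AoS): one interleaved buffer, the fields of each
# point (x, y, z) are adjacent in memory.
aos = array('d')
for x, y, z in [(0.0, 1.0, 5.0), (2.0, 3.0, 6.0)]:
    aos.extend((x, y, z))

# Structure of Arrays (SoA): one contiguous buffer per field, so a kernel
# reading only x-coordinates touches a single dense stream (the access
# pattern GPUs coalesce well).
soa = {"x": array('d', [0.0, 2.0]),
       "y": array('d', [1.0, 3.0]),
       "z": array('d', [5.0, 6.0])}

def mean_x_aos(buf, stride=3):
    """Strided access: every 3rd element of the interleaved buffer."""
    return sum(buf[0::stride]) / (len(buf) // stride)

def mean_x_soa(cols):
    """Contiguous access: one dense column."""
    return sum(cols["x"]) / len(cols["x"])

print(mean_x_aos(aos), mean_x_soa(soa))  # both 1.0
```

Which layout wins on a GPU depends on the kernel's access pattern, which is exactly why the paper's recommendation differs between the tiled and CDP versions.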

  6. Layout and Design. Module 2. Commercial Art. Instructor's Guide.

    ERIC Educational Resources Information Center

    Benke, Tom; And Others

    This module is the second of five in the Commercial Art series. The curriculum guide is designed for competency-based teaching and testing. Within this module on layout and design are eight instructional units. A cross-reference table reveals how the instructional components of the module relate to Missouri competencies. Each unit includes some or…

  7. Optimized layout generator for microgyroscope

    NASA Astrophysics Data System (ADS)

    Tay, Francis E.; Li, Shifeng; Logeeswaran, V. J.; Ng, David C.

    2000-10-01

    This paper presents an optimized out-of-plane microgyroscope layout generator using AutoCAD R14 and MS Excel as a first attempt at automating the design of resonant micro-inertial sensors. An out-of-plane microgyroscope with a two-degree-of-freedom lumped-parameter model was chosen as the synthesis topology. An analytical model of open-loop operation was derived for the gyroscope performance characteristics. Functional performance parameters such as sensitivity are guaranteed to be satisfied while a design objective such as minimum area is simultaneously optimized. A single algorithm optimizes the microgyroscope dimensions while simultaneously maximizing or minimizing the objective functions: maximum sensitivity and minimum area. The multi-criteria objective function and optimization methodology were implemented using the Generalized Reduced Gradient algorithm. For data conversion, a DXF-to-GDS converter was used. The optimized theoretical design performance parameters show good agreement with finite element analysis.

  8. Economics of wind farm layout

    SciTech Connect

    Germain, A.C.; Bain, D.A.

    1997-12-31

    The life cycle cost of energy (COE) is the primary determinant of the economic viability of a wind energy generation facility. The cost of wind turbines and associated hardware is counterbalanced by the energy which can be generated. This paper focuses on the turbine layout design process, considering the cost and energy capture implications of potential spacing options from the viewpoint of a practicing project designer. It is argued that lateral spacings in the range of 1.5 to 5 diameters are all potentially optimal, but only when matched to wind resource characteristics and machine design limits. The effect of wakes on energy capture is quantified while the effect on turbine life and maintenance cost is discussed qualitatively. Careful optimization can lower COE and project designers are encouraged to integrate the concepts in project designs.

  9. The UA9 experimental layout

    SciTech Connect

    Scandale, W.; Robert-Demolaize, G.; Arduini, G.; Assmann, R.; Bracco, C.; et al

    2011-10-13

    The UA9 experimental equipment was installed in the CERN-SPS in March 2009 with the aim of investigating crystal-assisted collimation in coasting mode. Its basic layout comprises silicon bent crystals acting as primary collimators mounted inside two vacuum vessels. A movable 60 cm long block of tungsten located downstream, at about 90 degrees phase advance, intercepts the deflected beam. Scintillators, Gas Electron Multiplier chambers and other beam loss monitors measure the nuclear loss rates induced by the interaction of the beam halo with the crystal. Two Roman pots installed in the path of the deflected particles are equipped with a Medipix detector to reconstruct the transverse distribution of the impinging beam. Finally, UA9 takes advantage of an LHC-collimator prototype installed close to the first Roman pot to help set the beam conditions and to analyze the beam deflection efficiency. This paper describes in detail the hardware installed to study crystal collimation during 2010.

  10. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that both factors have a remarkable effect on the speedup ratio. The trend of speedup versus thread count shows a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus layer count also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. A data-parallel algorithm is also used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study demonstrates the strong performance of the new parallel algorithm: compared with the serial slicing algorithm, the pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process, and compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
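A pipeline-parallel slicer can be sketched with two threaded stages connected by queues. This is a generic illustration of the pipeline mode, not the paper's algorithm: the `intersect` and `link` stand-ins for facet/plane intersection and contour linking are hypothetical placeholders.

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """Generic pipeline stage: pull items, transform, push downstream."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down and propagate
            outbox.put(None)
            return
        outbox.put(fn(item))

# Hypothetical per-layer work: 'intersect' would find facet/plane
# crossings, 'link' would chain the segments into closed contours.
intersect = lambda layer: ("segments", layer)
link = lambda segs: ("contour", segs[1])

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
threads = [threading.Thread(target=stage, args=(intersect, q_in, q_mid)),
           threading.Thread(target=stage, args=(link, q_mid, q_out))]
for t in threads:
    t.start()

for z in range(3):                # feed three slicing planes
    q_in.put(z)
q_in.put(None)

contours = []
while (item := q_out.get()) is not None:
    contours.append(item)
print(contours)  # [('contour', 0), ('contour', 1), ('contour', 2)]
```

While layer k is being linked, layer k+1 is already being intersected, which is the overlap that gives the pipeline its speedup over a serial slicer.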

  11. Fast degradable citrate-based bone scaffold promotes spinal fusion

    PubMed Central

    Tang, Jiajun; Guo, Jinshan; Li, Zhen; Yang, Cheng; Xie, Denghui; Chen, Jian; Li, Shengfa; Li, Shaolin; Kim, Gloria B.; Bai, Xiaochun; Zhang, Zhongmin; Yang, Jian

    2015-01-01

    It is well known that high rates of fusion failure and pseudoarthrosis development (5~35%) accompany spinal fusion surgery, which has been ascribed to the shortage of suitable materials for bone regeneration. Citrate was recently recognized to play an indispensable role in enhancing osteoconductivity and osteoinductivity and in promoting bone formation. To address the material challenges in spinal fusion surgery, we have synthesized mechanically robust, fast-degrading citrate-based polymers by incorporating N-methyldiethanolamine (MDEA) into clickable poly(1,8-octanediol citrates) (POC-click), referred to as POC-M-click. The obtained POC-M-click was fabricated into POC-M-click-HA matchstick scaffolds by compositing with hydroxyapatite (HA) for interbody spinal fusion in a rabbit model. Spinal fusion was analyzed by radiography, manual palpation, biomechanical testing, and histological evaluation. At 4 and 8 weeks post-surgery, POC-M-click-HA scaffolds presented optimal degradation rates that facilitated faster new bone formation and higher spinal fusion rates (11.2±3.7 and 80±4.5 at weeks 4 and 8, respectively) than the poly(L-lactic acid)-HA (PLLA-HA) control group (9.3±2.4 and 71.1±4.4) (p<0.05). The POC-M-click-HA scaffold-fused vertebrae withstood a maximum load of 880.8±14.5 N with a stiffness of 843.2±22.4 N/mm, both much higher than those of the PLLA-HA group (maximum load: 712.0±37.5 N, stiffness: 622.5±28.4 N/mm, p<0.05). Overall, the results suggest that POC-M-click-HA scaffolds could serve as promising bone grafts for spinal fusion applications. PMID:26213625

  12. Fast and automatic watermark resynchronization based on zernike moments

    NASA Astrophysics Data System (ADS)

    Kang, Xiangui; Liu, Chunhui; Zeng, Wenjun; Huang, Jiwu; Liu, Congbai

    2007-02-01

    In some applications, such as real-time video, watermark detection needs to be performed in real time. To make image watermarks robust against geometric transformations such as combinations of rotation, scaling, translation and/or cropping (RST), many prior works use exhaustive search or template matching to find the RST distortion parameters and then reverse the distortion to resynchronize the watermark. These methods typically impose a huge computational burden because the search space is multidimensional. Other prior works embed watermarks in an RST-invariant domain to meet the real-time requirement, but constructing such a domain can be difficult. Zernike moments are useful tools in pattern recognition and image watermarking because of their orthogonality and rotation invariance. In this paper, we propose a fast watermark resynchronization method based on Zernike moments that requires a search over the scaling factor only to combat RST geometric distortion, significantly reducing the computational load. We apply the proposed method to circularly symmetric watermarking. By Plancherel's theorem and the rotation invariance of Zernike moments, rotation estimation requires performing a DFT on the Zernike-moment correlation values only once. Thus, for an RST attack, we can estimate both the rotation angle and the scaling factor by searching over the scaling factor for the overall maximum DFT magnitude. With the estimated rotation angle and scaling factor, the watermark can be resynchronized. In watermark detection, the normalized correlation between the watermark and the DFT magnitude of the test image is used. Our experimental results demonstrate the advantage of the proposed method. The watermarking scheme is robust to global RST distortion as well as JPEG compression. In particular, the watermark is robust to print-rescanning and

  13. A Randomized Field Trial of the Fast ForWord Language Computer-Based Training Program

    ERIC Educational Resources Information Center

    Borman, Geoffrey D.; Benson, James G.; Overman, Laura

    2009-01-01

    This article describes an independent assessment of the Fast ForWord Language computer-based training program developed by Scientific Learning Corporation. Previous laboratory research involving children with language-based learning impairments showed strong effects on their abilities to recognize brief and fast sequences of nonspeech and speech…

  14. Fast-melting tablets based on highly plastic granules.

    PubMed

    Fu, Yourong; Jeong, Seong Hoon; Park, Kinam

    2005-12-01

    Highly plastic granules that can be compressed into tablets at low pressure were developed to make fast-melting tablets (FMTs) by compression method. The highly plastic granules are composed of three components: a plastic material, a material enhancing water penetration, and a wet binder. One of the unique properties of the highly plastic granules is that they maintain a porous structure even after compression into tablets. The porous and plastic nature of the granules allows fast absorption of water into the compressed tablet for fast melting/dissolution of the tablet. The prepared tablets possess tablet strength and friability that are suitable for multi-tablet packages. The three-component highly plastic granules provide an effective way of making FMTs by compression.

  15. Printed circuit board layout by microcomputer

    NASA Astrophysics Data System (ADS)

    Krausman, E. W.

    1983-12-01

    Printed circuit board artwork is usually prepared manually because of the unavailability of computer-aided design tools. This thesis presents the design of a microcomputer-based printed circuit board layout system that is easy to use and inexpensive. Automatic routing and component placement routines significantly speed up the process. The design satisfies the following requirements: microcomputer implementation, portability, algorithm independence, interactivity, and user friendliness. When fully implemented, a user will be able to select components and a board outline from an automated catalog, enter a schematic diagram, position the components on the board, and completely route the board from a single graphics terminal. Currently, the user interface and the outer-level command processor have been implemented in Pascal. Future versions will be written in C for better portability.

  16. Offshore wind farm electrical cable layout optimization

    NASA Astrophysics Data System (ADS)

    Pillai, A. C.; Chick, J.; Johanning, L.; Khorasanchi, M.; de Laleu, V.

    2015-12-01

    This article explores an automated approach for the efficient placement of substations and the design of an inter-array electrical collection network for an offshore wind farm through the minimization of the cost. To accomplish this, the problem is represented as a number of sub-problems that are solved in series using a combination of heuristic algorithms. The overall problem is first solved by clustering the turbines to generate valid substation positions. From this, a navigational mesh pathfinding algorithm based on Delaunay triangulation is applied to identify valid cable paths, which are then used in a mixed-integer linear programming problem to solve for a constrained capacitated minimum spanning tree considering all realistic constraints. The final tree that is produced represents the solution to the inter-array cable problem. This method is applied to a planned wind farm to illustrate the suitability of the approach and the resulting layout that is generated.
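The spanning-tree core of the cable layout problem can be approximated with plain Prim's algorithm. This sketch deliberately ignores the capacity and cable-crossing constraints that the article handles with mixed-integer linear programming; the substation and turbine coordinates are hypothetical.

```python
import heapq
import math

def cable_tree(substation, turbines):
    """Prim's algorithm: grow a minimum spanning tree of cable runs
    outward from the substation, always adding the cheapest edge that
    connects a new turbine to the tree."""
    nodes = [substation] + turbines
    in_tree = {0}                       # node 0 is the substation
    edges = []
    frontier = [(math.dist(substation, t), 0, i + 1)
                for i, t in enumerate(turbines)]
    heapq.heapify(frontier)
    while len(in_tree) < len(nodes):
        d, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue                    # stale entry: v already connected
        in_tree.add(v)
        edges.append((u, v, round(d, 2)))
        for w, p in enumerate(nodes):
            if w not in in_tree:
                heapq.heappush(frontier, (math.dist(nodes[v], p), v, w))
    return edges

# Hypothetical positions (km): substation at origin, three turbines.
tree = cable_tree((0, 0), [(1, 0), (2, 0), (1, 1)])
print(tree)  # [(0, 1, 1.0), (1, 2, 1.0), (1, 3, 1.0)]
```

Adding per-cable capacity limits turns this into the constrained capacitated minimum spanning tree of the abstract, which is NP-hard and is why the authors resort to MILP over a precomputed set of valid cable paths.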

  17. Automatic Layout Design for Power Module

    SciTech Connect

    Ning, Puqi; Wang, Fei; Ngo, Khai

    2013-01-01

    The layout of power modules is one of the key points in power module design, especially at high power densities, where couplings increase. In this paper, an automatic design process using a genetic algorithm is presented along with a design example. Some practical considerations and implementations in the optimization of module layout design are introduced.

  18. Fast and accurate line scanner based on white light interferometry

    NASA Astrophysics Data System (ADS)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges in rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) specially suited to white-light interferometry. Demodulation of the interference signal is handled at the pixel level, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows the realization of inspection systems that are rugged against the external vibrations present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time of the multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry where the incident light is

  19. Safe Fast Reactor Based on Nuclear Burning Wave Regime

    SciTech Connect

    Fomin, S.; Mel'nik, Yu.; Pilipenko, V.; Shul'ga, N.

    2006-07-01

    The deterministic approach for describing the phenomenon of self-sustained regime of nuclear burning wave in a fast critical reactor is developed. The results of calculations of the space-time evolution of neutron flux and the fuel burn-up in such a system are presented. (authors)

  20. Mining-Induced Coal Permeability Change Under Different Mining Layouts

    NASA Astrophysics Data System (ADS)

    Zhang, Zetian; Zhang, Ru; Xie, Heping; Gao, Mingzhong; Xie, Jing

    2016-09-01

    To comprehensively understand the mining-induced coal permeability change, a series of laboratory unloading experiments are conducted based on a simplifying assumption of the actual mining-induced stress evolution processes of three typical longwall mining layouts in China, i.e., non-pillar mining (NM), top-coal caving mining (TCM) and protective coal-seam mining (PCM). A theoretical expression of the mining-induced permeability change ratio (MPCR) is derived and validated by laboratory experiments and in situ observations. The mining-induced coal permeability variation under the three typical mining layouts is quantitatively analyzed using the MPCR based on the test results. The experimental results show that the mining-induced stress evolution processes of different mining layouts do have an influence on the mechanical behavior and evolution of MPCR of coal. The coal mass in the PCM simulation has the lowest stress concentration but the highest peak MPCR (approximately 4000 %), whereas the opposite trends are observed for the coal mass under NM. The results of the coal mass under TCM fall between those for PCM and NM. The evolution of the MPCR of coal under different layouts can be divided into three sections, i.e., stable increasing section, accelerated increasing section and reducing section, but the evolution processes are slightly different for the different mining layouts. A coal bed gas intensive extraction region is recommended based on the MPCR distribution of coal seams obtained by simplifying assumptions and the laboratory testing results. The presented results are also compared with existing conventional triaxial compression test results to fully comprehend the effect of actual mining-induced stress evolution on coal property tests.

  1. A scoring methodology for quantitatively evaluating the quality of double patterning technology-compliant layouts

    NASA Astrophysics Data System (ADS)

    Wang, Lynn T.; Madhavan, Sriram; Malik, Shobhit; Pathak, Piyush; Capodieci, Luigi

    2012-03-01

    A Double Patterning Technology (DPT)-aware scoring methodology that systematically quantifies the quality of DPT-compliant layout designs is described. The methodology evaluates layouts against a set of DPT-specific metrics that characterize layout-induced process variation. Specific metrics include: the spacing variability between two adjacent oppositely-colored features, the density difference between the two exposure masks, and the stitching area's sensitivity to mask misalignment. These metrics are mapped to a score from 0 to 1, where 1 is optimal. This methodology provides guidance for opportunistic layout modifications so that DPT manufacturability issues are mitigated earlier in design. Results show that, by using this methodology, a DPT-compliant layout improved from a composite score of 0.66 to 0.78 merely by changing the decomposition solution so that the density distribution between the two exposure masks is approximately equal.

  2. Luminaire layout: Design and implementation

    NASA Astrophysics Data System (ADS)

    Both, A. J.

    1994-03-01

    The information contained in this report was presented during the discussion regarding guidelines for PAR uniformity in greenhouses. The data show a lighting uniformity analysis in a research greenhouse for rose production at the Cornell University campus. The luminaire layout was designed using the computer program Lumen-Micro. After implementation of the design, accurate measurements were taken in the greenhouse and the uniformity analyses for the design and the implementation were compared. A study of several supplemental lighting installations resulted in the following recommendations: include only the actual growing area in the lighting uniformity analysis; for growing areas up to 20 square meters, take four measurements per square meter; for growing areas above 20 square meters, take one measurement per square meter; use one of the uniformity criteria and frequency graphs to compare lighting uniformity amongst designs; and design for a uniformity criterion of at least 0.75, with the fraction within +/- 15% of the average PAR value close to one.

  3. Luminaire layout: Design and implementation

    NASA Technical Reports Server (NTRS)

    Both, A. J.

    1994-01-01

    The information contained in this report was presented during the discussion regarding guidelines for PAR uniformity in greenhouses. The data show a lighting uniformity analysis in a research greenhouse for rose production at the Cornell University campus. The luminaire layout was designed using the computer program Lumen-Micro. After implementation of the design, accurate measurements were taken in the greenhouse and the uniformity analyses for the design and the implementation were compared. A study of several supplemental lighting installations resulted in the following recommendations: include only the actual growing area in the lighting uniformity analysis; for growing areas up to 20 square meters, take four measurements per square meter; for growing areas above 20 square meters, take one measurement per square meter; use one of the uniformity criteria and frequency graphs to compare lighting uniformity amongst designs; and design for a uniformity criterion of at least 0.75, with the fraction within +/- 15% of the average PAR value close to one.

  4. Optimised layout and roadway support planning with integrated intelligent software

    SciTech Connect

    Kouniali, S.; Josien, J.P.; Piguet, J.P.

    1996-12-01

    Experience with knowledge-based systems for layout planning and roadway support dimensioning has existed in European coal mining since 1985. The systems SOUT (support choice and dimensioning, 1989), SOUT 2, PLANANK (planning of bolt support), EXOS (layout planning diagnosis, 1994) and SOUT 3 (1995) have been developed in close cooperation by CdF{sup 1}, INERIS{sup 2}, EMN{sup 3} (France) and RAG{sup 4}, DMT{sup 5}, TH Aachen{sup 6} (Germany); development of ISLSP (Integrated Software for Layout and Support Planning) is in progress (completion scheduled for July 1996). This new software technology, in combination with conventional programming systems, numerical models and existing databases, has proved well suited to building an intelligent decision aid for layout and roadway support planning. The system enhances the reliability of planning and optimizes the safety-to-cost ratio for (1) deformation forecasts for roadways in seam and surrounding rock, considering the general position of the roadway in the rock mass (zones of increased pressure, position of operating and mined panels); (2) support dimensioning; (3) yielding arches, rigid arches, porch sets, rigid rings, yielding rings and bolting/shotcreting for drifts; (4) yielding arches, rigid arches and porch sets for roadways in seam; and (5) bolt support for gateroads (assessment of exclusion criteria and calculation of the bolting pattern) and bolting of face-end zones (feasibility and safety assessment; stability guarantee).

  5. Automatic metro map layout using multicriteria optimization.

    PubMed

    Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G

    2011-01-01

    This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined and combined in a weighted sum to give a fitness value for a layout of the map. A hill-climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when seeking improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides quantitative evidence that automatically-drawn metro maps can help users find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically-drawn maps over the alternatives.
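The weighted-sum fitness plus hill climbing described above can be sketched as follows. This is a toy with two made-up metrics (edge-length deviation and octilinearity), not the paper's full metric set, clustering, or labeling; the station coordinates are hypothetical.

```python
import random

def fitness(pos, edges, weights=(1.0, 0.5)):
    """Weighted sum of two toy metrics per edge: deviation from unit
    length, and deviation from horizontal/vertical/45-degree direction."""
    w_len, w_oct = weights
    score = 0.0
    for a, b in edges:
        dx = abs(pos[a][0] - pos[b][0])
        dy = abs(pos[a][1] - pos[b][1])
        length = (dx * dx + dy * dy) ** 0.5
        score += w_len * abs(length - 1.0)
        score += w_oct * min(dx, dy, abs(dx - dy))  # 0 iff octilinear
    return score

def hill_climb(pos, edges, iters=200, seed=1):
    """Move one station at a time to a neighboring grid point; keep the
    move only if the weighted fitness improves."""
    rng = random.Random(seed)
    pos = dict(pos)
    best = fitness(pos, edges)
    for _ in range(iters):
        s = rng.choice(list(pos))
        old = pos[s]
        pos[s] = (old[0] + rng.choice((-1, 0, 1)),
                  old[1] + rng.choice((-1, 0, 1)))
        f = fitness(pos, edges)
        if f < best:
            best = f
        else:
            pos[s] = old                 # reject moves that worsen the map
    return pos, best

stations = {"A": (0, 0), "B": (2, 3), "C": (5, 3)}
edges = [("A", "B"), ("B", "C")]
layout, score = hill_climb(stations, edges)
print(score <= fitness(stations, edges))  # never worse than the input
```

Moving whole clusters of stations, as the paper does, is the same accept/reject loop with a larger move set, which helps the climber escape local minima that single-station moves cannot.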

  6. Non-Manhattan layout extraction algorithm

    NASA Astrophysics Data System (ADS)

    Satkhozhina, Aziza; Ahmadullin, Ildus; Allebach, Jan P.; Lin, Qian; Liu, Jerry; Tretter, Daniel; O'Brien-Strain, Eamonn; Hunter, Andrew

    2013-03-01

    Automated publishing requires large databases containing document page layout templates. The number of layout templates that need to be created and stored grows exponentially with the complexity of the document layouts. A better approach for automated publishing is to reuse layout templates of existing documents for the generation of new documents. In this paper, we present an algorithm for template extraction from a document page image. We use the cost-optimized segmentation algorithm (COS) to segment the image, and Voronoi decomposition to cluster the text regions. Then, we create a block image where each block represents a homogeneous region of the document page. We construct a geometrical tree that describes the hierarchical structure of the document page. We also implement a font recognition algorithm to analyze the font of each text region. We present a detailed description of the algorithm and our preliminary results.

  7. Vision-based fast navigation of micro aerial vehicles

    NASA Astrophysics Data System (ADS)

    Loianno, Giuseppe; Kumar, Vijay

    2016-05-01

    We address the key challenges of autonomous fast flight for Micro Aerial Vehicles (MAVs) in cluttered 3-D environments. For complete autonomy, the system must identify the vehicle's state at high rates using absolute or relative on-board sensor measurements, use these state estimates for feedback control, and plan trajectories to the destination. State estimation requires fusing information from different, possibly asynchronous sensors running at different rates. In this work, we present techniques in planning, control, and visual-inertial state estimation for fast navigation of MAVs. We demonstrate how to solve the pose estimation, control, and planning problems for MAVs on-board, on a small computational unit, using a minimal sensor suite for autonomous navigation composed of a single camera and an IMU. Additionally, we show that a consumer electronic device such as a smartphone can alternatively be employed for both sensing and computation. Experimental results validate the proposed techniques. Any consumer with a smartphone can drive a quadrotor platform autonomously at high speed, without GPS, while concurrently building 3-D maps, using a suitably designed app.

  8. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Considerable effort has been devoted to 3D imaging methods and systems in order to meet requirements for speed and high accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras share the same spatial resolution, which lets us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map constraining the stereo pairs during matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, concurrent computing on the FPGA (Altera Cyclone IV series) lets us configure a multi-core image matching system and perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
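The depth-prior trick described above, using the TOF depth map to narrow each pixel's disparity search, can be sketched in software. This is a minimal illustrative version, not the paper's FPGA implementation; the function name, SAD cost, and parameters are all assumptions:

```python
import numpy as np

def constrained_block_match(left, right, init_disp, radius=2, block=3, max_disp=64):
    """SAD block matching where each pixel's disparity search is
    restricted to a narrow window around a TOF-derived initial disparity."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            d0 = int(init_disp[y, x])
            lo, hi = max(0, d0 - radius), min(max_disp, d0 + radius)
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            best_cost, best_d = np.inf, d0
            for d in range(lo, hi + 1):
                if x - d - half < 0:      # candidate window would leave the image
                    continue
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float32)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Restricting the search to `2 * radius + 1` candidates instead of `max_disp` is what makes the per-pixel cost small enough for a parallel hardware pipeline.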

  9. Layout optimization using the homogenization method

    NASA Technical Reports Server (NTRS)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first of two articles.

  10. Layout optimization using the homogenization method

    NASA Astrophysics Data System (ADS)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first of two articles.

  11. Brain source localization based on fast fully adaptive approach.

    PubMed

    Ravan, Maryam; Reilly, James P

    2012-01-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization (beamforming) methods often fail when the number of observations is small. This is particularly true when measuring evoked potentials, especially when the number of electrodes is large. Due to the nonstationarity of the EEG/MEG, an adaptive capability is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This paper develops and tests a new multistage adaptive processing approach for brain source localization that has previously been used in radar statistical signal processing applications with uniform linear antenna arrays. This approach, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support and computational complexity while still processing all available DoFs. The performance improvement offered by the FFA approach in comparison to fully adaptive minimum variance beamforming (MVB) with limited data is demonstrated by bootstrapping simulated data to evaluate the variability of the source location.
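For reference, the fully adaptive minimum variance (Capon) beamformer that the FFA approach is compared against scans candidate source locations by computing a spatial power spectrum from the sensor covariance. A minimal numpy sketch under an idealized, known lead-field matrix (function and variable names are illustrative assumptions):

```python
import numpy as np

def mvb_power(R, A):
    """Minimum variance (Capon) spatial power spectrum.
    R: (m, m) sensor covariance estimate; A: (m, k) matrix whose columns
    are steering/lead-field vectors for k candidate source locations."""
    Rinv = np.linalg.inv(R)
    # P_i = 1 / (a_i^H R^-1 a_i) for each candidate location i
    denom = np.einsum('mi,mn,ni->i', A.conj(), Rinv, A).real
    return 1.0 / denom
```

The FFA approach itself restructures this computation into multiple low-DoF adaptive stages so that far fewer snapshots are needed to estimate the covariance; that staging is not shown here.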

  12. A fast Stokes inversion technique based on quadratic regression

    NASA Astrophysics Data System (ADS)

    Teng, Fei; Deng, Yuan-Yong

    2016-05-01

    Stokes inversion calculation is a key process in resolving polarization information on radiation from the Sun and obtaining the associated vector magnetic fields. Even in the cases of simple local thermodynamic equilibrium (LTE) and where the Milne-Eddington approximation is valid, the inversion problem may not be easy to solve. The initial values for the iterations are important in handling cases with multiple minima. In this paper, we develop a fast inversion technique without iterations, whose computation time is only 1/100 of that taken by the iterative algorithm. In addition, it can provide usable initial values even in cases with lower spectral resolutions. This strategy is useful for a filter-type Stokes spectrograph, such as SDO/HMI and the developed two-dimensional real-time spectrograph (2DS).

  13. Slow-fast effect and generation mechanism of brusselator based on coordinate transformation

    NASA Astrophysics Data System (ADS)

    Li, Xianghong; Hou, Jingyu; Shen, Yongjun

    2016-08-01

    The Brusselator with different time scales, which exhibits the classical slow-fast effect, is investigated; its behavior is characterized by the coupling of quiescent and spiking states. To reveal the generation mechanism using the slow-fast analysis method, a coordinate transformation is introduced into the classical Brusselator, so that the transformed system can be divided into fast and slow subsystems. Furthermore, the stability conditions and bifurcation phenomena of the fast subsystem are analyzed, and the attraction domains of different equilibria are presented by theoretical analysis and numerical simulation, respectively. Based on the transformed system, it is found that the mechanism generating the transition between the quiescent and spiking states is a fold bifurcation together with a change in the attraction domain of the fast subsystem. The results may also be helpful for similar systems with multiple time scales.
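As a rough illustration of the quiescent/spiking alternation, the classical Brusselator can be given two time scales by sweeping its parameter b slowly across the Hopf threshold b = 1 + a^2. This is a generic slow-fast sketch, not the specific transformed system of the paper; all parameter values are assumptions:

```python
import numpy as np

def brusselator_slow_fast(a=0.4, b0=1.2, eps=0.01, dt=1e-3, steps=100000):
    """Brusselator x' = a - (b+1)x + x^2 y, y' = b x - x^2 y, with b(t)
    driven slowly across the Hopf threshold b = 1 + a^2 (= 1.16 here),
    so the fast subsystem alternates between quiescent and spiking states."""
    x, y = a, b0 / a                            # start at the fixed point (a, b/a)
    xs = np.empty(steps)
    t = 0.0
    for i in range(steps):
        b = b0 + 0.5 * np.sin(eps * t)          # slow variable (eps << 1)
        dx = a - (b + 1.0) * x + x * x * y      # fast subsystem
        dy = b * x - x * x * y
        x += dt * dx                            # forward Euler step
        y += dt * dy
        t += dt
        xs[i] = x
    return xs
```

With b0 = 1.2 and forcing amplitude 0.5, b(t) ranges over [0.7, 1.7], crossing the threshold 1.16, so the trajectory alternates between decay toward the (momentarily stable) equilibrium and limit-cycle oscillation.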

  14. Applications to car bodies - Generalized layout design of three-dimensional shells

    NASA Technical Reports Server (NTRS)

    Fukushima, Junichi; Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    We describe applications of the homogenization method, formulated in Part 1, to the layout design of car bodies represented by three-dimensional shell structures, based on multi-load optimization.

  15. 48 CFR 52.236-17 - Layout of Work.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Layout of Work. 52.236-17... Layout of Work. As prescribed in 36.517, insert the following clause in solicitations and contracts when... need for accurate work layout and for siting verification during work performance: Layout of Work...

  16. 48 CFR 52.236-17 - Layout of Work.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Layout of Work. 52.236-17... Layout of Work. As prescribed in 36.517, insert the following clause in solicitations and contracts when... need for accurate work layout and for siting verification during work performance: Layout of Work...

  17. 48 CFR 52.236-17 - Layout of Work.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 2 2014-10-01 2014-10-01 false Layout of Work. 52.236-17... Layout of Work. As prescribed in 36.517, insert the following clause in solicitations and contracts when... need for accurate work layout and for siting verification during work performance: Layout of Work...

  18. 48 CFR 52.236-17 - Layout of Work.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 2 2012-10-01 2012-10-01 false Layout of Work. 52.236-17... Layout of Work. As prescribed in 36.517, insert the following clause in solicitations and contracts when... need for accurate work layout and for siting verification during work performance: Layout of Work...

  19. From FAST to E-FAST: an overview of the evolution of ultrasound-based traumatic injury assessment.

    PubMed

    Montoya, J; Stawicki, S P; Evans, D C; Bahner, D P; Sparks, S; Sharpe, R P; Cipolla, J

    2016-04-01

    Ultrasound is a ubiquitous and versatile diagnostic tool. In the setting of acute injury, ultrasound enhances the basic trauma evaluation, influences bedside decision-making, and helps determine whether or not an unstable patient requires emergent procedural intervention. Consequently, continued education of surgeons and other acute care practitioners in performing focused emergency ultrasound is of great importance. This article provides a synopsis of focused assessment with sonography for trauma (FAST) and the extended FAST (E-FAST) that incorporates basic thoracic injury assessment. The authors also review key pitfalls, limitations, controversies, and advances related to FAST, E-FAST, and ultrasound education.

  20. An approach toward fast gradient-based image segmentation.

    PubMed

    Hell, Benjamin; Kassubeck, Marc; Bauszat, Pablo; Eisemann, Martin; Magnor, Marcus

    2015-09-01

    In this paper, we present and investigate an approach to fast multilabel color image segmentation using convex optimization techniques. The presented model is in some ways related to the well-known Mumford-Shah model, but deviates in certain important aspects. The optimization problem has been designed with two goals in mind. The objective function should represent fundamental concepts of image segmentation, such as incorporation of weighted curve length and variation of intensity in the segmented regions, while allowing transformation into a convex-concave saddle point problem that is computationally inexpensive to solve. This paper introduces such a model, the nontrivial transformation of this model into a convex-concave saddle point problem, and the numerical treatment of the problem. We evaluate our approach by applying our algorithm to various images and show that our results are competitive in terms of quality at unprecedentedly low computation times. Our algorithm allows high-quality segmentation of megapixel images in a few seconds and achieves interactive performance for low resolution images.

  1. Fast gain and phase recovery of semiconductor optical amplifiers based on submonolayer quantum dots

    SciTech Connect

    Herzog, Bastian Owschimikow, Nina; Kaptan, Yücel; Kolarczik, Mirco; Switaiski, Thomas; Woggon, Ulrike; Schulze, Jan-Hindrik; Rosales, Ricardo; Strittmatter, André; Bimberg, Dieter; Pohl, Udo W.

    2015-11-16

    Submonolayer quantum dots as active medium in opto-electronic devices promise to combine the high density of states of quantum wells with the fast recovery dynamics of self-assembled quantum dots. We investigate the gain and phase recovery dynamics of a semiconductor optical amplifier based on InAs submonolayer quantum dots in the regime of linear operation by one- and two-color heterodyne pump-probe spectroscopy. We find recovery dynamics as fast as for quantum dot-in-a-well structures, reaching 2 ps at moderate injection currents. The effective quantum well embedding the submonolayer quantum dots acts as a fast and efficient carrier reservoir.

  2. FMFilter: A fast model based variant filtering tool.

    PubMed

    Akgün, Mete; Faruk Gerdan, Ö; Görmez, Zeliha; Demirci, Hüseyin

    2016-04-01

    The availability of whole exome and genome sequencing has completely changed the structure of genetic disease studies. It is now possible to uncover disease-causing mechanisms on shorter timescales and smaller budgets. For this reason, mining the valuable information out of the huge amount of data produced by next generation techniques becomes a challenging task. Current tools analyze sequencing data using various methods. However, there is still a need for fast, easy-to-use and efficacious tools. In genetic disease studies, there is a lack of publicly available tools that support compound heterozygous and de novo models. Also, existing tools either require advanced IT expertise or are inefficient at handling large variant files. In this work, we provide FMFilter, an efficient sieving tool for next generation sequencing data produced by genetic disease studies. We develop software that allows the user to choose the inheritance model (recessive, dominant, compound heterozygous and de novo) and the affected and control individuals. The program provides a user-friendly graphical user interface, which eliminates the need for advanced computing skills. It has various filtering options that help eliminate the majority of false alarms. FMFilter requires negligible memory and can therefore easily handle very large variant files, such as multiple whole genomes, on ordinary computers. We demonstrate the variant reduction capability and effectiveness of the proposed tool with public and in-house data for different inheritance models. We also compare FMFilter with existing filtering software. We conclude that FMFilter provides an effective and easy-to-use environment for analyzing next generation sequencing data from Mendelian diseases. PMID:26925517

  3. FMFilter: A fast model based variant filtering tool.

    PubMed

    Akgün, Mete; Faruk Gerdan, Ö; Görmez, Zeliha; Demirci, Hüseyin

    2016-04-01

    The availability of whole exome and genome sequencing has completely changed the structure of genetic disease studies. It is now possible to uncover disease-causing mechanisms on shorter timescales and smaller budgets. For this reason, mining the valuable information out of the huge amount of data produced by next generation techniques becomes a challenging task. Current tools analyze sequencing data using various methods. However, there is still a need for fast, easy-to-use and efficacious tools. In genetic disease studies, there is a lack of publicly available tools that support compound heterozygous and de novo models. Also, existing tools either require advanced IT expertise or are inefficient at handling large variant files. In this work, we provide FMFilter, an efficient sieving tool for next generation sequencing data produced by genetic disease studies. We develop software that allows the user to choose the inheritance model (recessive, dominant, compound heterozygous and de novo) and the affected and control individuals. The program provides a user-friendly graphical user interface, which eliminates the need for advanced computing skills. It has various filtering options that help eliminate the majority of false alarms. FMFilter requires negligible memory and can therefore easily handle very large variant files, such as multiple whole genomes, on ordinary computers. We demonstrate the variant reduction capability and effectiveness of the proposed tool with public and in-house data for different inheritance models. We also compare FMFilter with existing filtering software. We conclude that FMFilter provides an effective and easy-to-use environment for analyzing next generation sequencing data from Mendelian diseases.

  4. Aerodynamic and Aerothermodynamic Layout of the Hypersonic Flight Experiment Shefex

    NASA Astrophysics Data System (ADS)

    Eggers, Th.

    2005-02-01

    The purpose of the SHarp Edge Flight EXperiment SHEFEX is the investigation of possible new shapes for future launcher or reentry vehicles [1]. The main focus is the improvement of common space vehicle shapes through the application of facetted surfaces and sharp edges. The experiment will enable time-accurate investigation of the flow effects and the structural response during hypersonic flight from 90 km down to an altitude of 20 km. The project, performed under the responsibility of the German Aerospace Center (DLR), is scheduled to fly on top of a two-stage solid propellant sounding rocket in the first half of 2005. The paper contains a survey of the aerodynamic and aerothermodynamic layout of the experimental vehicle. The results are inputs for the definition of the structural layout, the TPS and the flight instrumentation, as well as for the preparation of the flight test performed by the Mobile Rocket Base of DLR.

  5. Human Factors Evaluations of Two-Dimensional Spacecraft Conceptual Layouts

    NASA Technical Reports Server (NTRS)

    Kennedy, Kriss J.; Toups, Larry D.; Rudisill, Marianne

    2010-01-01

    Much of the human factors work done in support of the NASA Constellation lunar program has been with low-fidelity mockups. These volumetric replicas of the future lunar spacecraft allow researchers to insert test subjects from the engineering and astronaut population and evaluate the vehicle design as the test subjects perform simulations of various operational tasks. However, lunar outpost designs must be evaluated without the use of mockups, creating a need for evaluation tools that can be applied to two-dimensional conceptual spacecraft layouts, such as floor plans. A tool based on the Cooper-Harper scale was developed and applied to one lunar scenario, enabling engineers to select between two competing floor plan layouts. Keywords: Constellation, human factors, tools, processes, habitat, outpost, Net Habitable Volume, Cooper-Harper.

  6. A Computational Framework for Cable Layout Design in Complex Products

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jian-hua; Ning, Ru-xin; Liu, Jia-shun

    The cable layout design in complex products has been challenging because of various strict constraints. In this paper, we present a computational framework which provides a rich solution for cable layout problems. The framework centers on the digital mockup of the product, and the digital model of the cable bundle in the product is introduced as an essential part. The design process in the framework is carried out in a virtual environment with a wide range of supporting techniques and tools integrated, including path planning techniques, physically-based models, assembly simulation techniques and more. The techniques and tools each emphasize different aspects of this problem domain. Besides, the designers play an important role in the framework: they drive the whole design process and make decisions with their knowledge on issues that current techniques cannot solve. A prototype system is developed and applied in a practical product development process. The results show that the framework is practical and promising.

  7. Design and simulation of silicon photonic schematics and layouts

    NASA Astrophysics Data System (ADS)

    Chrostowski, Lukas; Lu, Zeqin; Flueckiger, Jonas; Wang, Xu; Klein, Jackson; Liu, Amy; Jhoja, Jaspreet; Pond, James

    2016-05-01

    Electronic circuit designers commonly start their design process with a schematic, namely an abstract representation of the physical circuit. In integrated photonics on the other hand, it is common for the design to begin at the physical component level, and create a layout by connecting components with interconnects. In this paper, we discuss how to create a schematic from the physical layout via netlist extraction, which enables circuit simulations. Post-layout extraction can also be used to predict how fabrication variability and non-uniformity will impact circuit performance. This is based on the component position information, compact models that are parameterized for dimensional variations, and manufacturing variability models such as a simulated wafer thickness map. This final step is critical in understanding how real-world silicon photonic circuits will behave. We present an example based on treating the ring resonator as a circuit. A silicon photonics design kit, as described here, is available for download at http://github.com/lukasc-ubc/SiEPIC_EBeam_PDK.

  8. AmbiguityVis: Visualization of Ambiguity in Graph Layouts.

    PubMed

    Wang, Yong; Shen, Qiaomu; Archambault, Daniel; Zhou, Zhiguang; Zhu, Min; Yang, Sixiao; Qu, Huamin

    2016-01-01

    Node-link diagrams provide an intuitive way to explore networks and have inspired a large number of automated graph layout strategies that optimize aesthetic criteria. However, no single drawing approach can fully satisfy all these criteria simultaneously, producing drawings with visual ambiguities that can impede the understanding of network structure. To bring attention to these potentially problematic areas present in the drawing, this paper presents a technique that highlights common types of visual ambiguities: ambiguous spatial relationships between nodes and edges, visual overlap between community structures, and ambiguity in edge bundling and metanodes. Metrics, including newly proposed metrics for abnormal edge lengths, visual overlap in community structures and node/edge aggregation, are proposed to quantify areas of ambiguity in the drawing. These metrics and others are then displayed using a heatmap-based visualization that provides visual feedback to developers of graph drawing and visualization approaches, allowing them to quickly identify misleading areas. The novel metrics and the heatmap-based visualization allow a user to explore ambiguities in graph layouts from multiple perspectives in order to make reasonable graph layout choices. The effectiveness of the technique is demonstrated through case studies and expert reviews.

  9. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT

  10. Near midplane scintillator-based fast ion loss detector on DIII-D

    SciTech Connect

    Chen, X.; Heidbrink, W. W.; Fisher, R. K.; Pace, D. C.; Chavez, J. A.; Van Zeeland, M. A.; Garcia-Munoz, M.

    2012-10-15

    A new scintillator-based fast-ion loss detector (FILD) installed near the outer midplane of the plasma has been commissioned on DIII-D. This detector successfully measures coherent fast ion losses produced by fast-ion driven instabilities (≤500 kHz). Combined with the first FILD at ∼45° below the outer midplane [R. K. Fisher, et al., Rev. Sci. Instrum. 81, 10D307 (2010)], the two-detector system measures poloidal variation of losses. The phase space sensitivity of the new detector (gyroradius r_L ∼ [1.5-8] cm and pitch angle α ∼ [35°-85°]) is calibrated using neutral beam first orbit loss measurements. Since fast ion losses are localized poloidally, having two FILDs at different poloidal locations allows for the study of losses over a wider range of plasma shapes and types of loss orbits.

  11. Near midplane scintillator-based fast ion loss detector on DIII-D.

    PubMed

    Chen, X; Fisher, R K; Pace, D C; García-Muñoz, M; Chavez, J A; Heidbrink, W W; Van Zeeland, M A

    2012-10-01

    A new scintillator-based fast-ion loss detector (FILD) installed near the outer midplane of the plasma has been commissioned on DIII-D. This detector successfully measures coherent fast ion losses produced by fast-ion driven instabilities (≤500 kHz). Combined with the first FILD at ∼45° below the outer midplane [R. K. Fisher, et al., Rev. Sci. Instrum. 81, 10D307 (2010)], the two-detector system measures poloidal variation of losses. The phase space sensitivity of the new detector (gyroradius r(L) ∼ [1.5-8] cm and pitch angle α ∼ [35°-85°]) is calibrated using neutral beam first orbit loss measurements. Since fast ion losses are localized poloidally, having two FILDs at different poloidal locations allows for the study of losses over a wider range of plasma shapes and types of loss orbits.

  12. Near midplane scintillator-based fast ion loss detector on DIII-D

    NASA Astrophysics Data System (ADS)

    Chen, X.; Fisher, R. K.; Pace, D. C.; García-Muñoz, M.; Chavez, J. A.; Heidbrink, W. W.; Van Zeeland, M. A.

    2012-10-01

    A new scintillator-based fast-ion loss detector (FILD) installed near the outer midplane of the plasma has been commissioned on DIII-D. This detector successfully measures coherent fast ion losses produced by fast-ion driven instabilities (≤500 kHz). Combined with the first FILD at ˜45° below the outer midplane [R. K. Fisher, et al., Rev. Sci. Instrum. 81, 10D307 (2010), 10.1063/1.3490020], the two-detector system measures poloidal variation of losses. The phase space sensitivity of the new detector (gyroradius rL ˜ [1.5-8] cm and pitch angle α ˜ [35°-85°]) is calibrated using neutral beam first orbit loss measurements. Since fast ion losses are localized poloidally, having two FILDs at different poloidal locations allows for the study of losses over a wider range of plasma shapes and types of loss orbits.

  13. Fast Fragmentation of Networks Using Module-Based Attacks.

    PubMed

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities, through which the network can be represented, using a well-established heuristic community-finding algorithm. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack. PMID:26569610

  14. Fast Fragmentation of Networks Using Module-Based Attacks.

    PubMed

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities, through which the network can be represented, using a well-established heuristic community-finding algorithm. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack.

  15. Fast Fragmentation of Networks Using Module-Based Attacks

    PubMed Central

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities, through which the network can be represented, using a well-established heuristic community-finding algorithm. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack. PMID:26569610
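The removal procedure described in this abstract, keeping only "bridge" nodes (those incident to inter-community links) and ranking them by betweenness, can be sketched in pure Python. Brandes' algorithm stands in for whichever betweenness implementation the authors used, the community partition is taken as given rather than detected, and the function names are illustrative:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted node betweenness centrality.
    adj: dict mapping each node to a set of neighbors."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS from s, counting shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def module_attack(adj, communities):
    """Return only the bridge nodes (those incident to inter-community
    links), sorted in descending order of betweenness centrality."""
    comm_of = {v: i for i, c in enumerate(communities) for v in c}
    bridges = {v for v in adj for w in adj[v] if comm_of[v] != comm_of[w]}
    bc = betweenness(adj)
    return sorted(bridges, key=lambda v: -bc[v])
```

On a toy graph of two triangles {0,1,2} and {3,4,5} joined by the edge (2, 3), the bridge set is {2, 3}; removing those two nodes splits the graph into two fragments of size two, which is the fragmentation effect the abstract describes.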

  16. Fast vision-based catheter 3D reconstruction.

    PubMed

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D

    2016-07-21

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for tubes of known circular and elliptical catheter shapes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noises) of the proposed high-speed algorithms. PMID:27352011

  17. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape-sensing algorithm for real-time 3D reconstruction of continuum robots from the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution for the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles of catheter-shaped tubes with known circular and elliptical profiles. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg under added noise) of the proposed high-speed algorithms.

  18. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially longer imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA) and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, with reconstruction quality comparable to that of the dictionary-based CS algorithm. PMID:23846466
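
The second reconstruction route described above (pseudoinverse with Tikhonov regularization with respect to a dictionary) reduces to a single regularized linear solve, which is why it runs in seconds. A minimal numpy sketch on synthetic data, assuming a selection-type undersampling operator; the random dictionary, sizes, and regularization weight are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary of 16 training pdf signals on a 64-point q-space grid
n_q, n_atoms = 64, 16
D = rng.standard_normal((n_q, n_atoms))
x_true = D @ rng.standard_normal(n_atoms)     # a signal in the dictionary span

# Undersample q-space: keep only half of the sample points
keep = np.sort(rng.choice(n_q, size=32, replace=False))
y = x_true[keep]

# Pseudoinverse with Tikhonov regularization with respect to the dictionary:
#   a = argmin ||D[keep] a - y||^2 + lam * ||a||^2   (one analytical solve)
Dk = D[keep, :]
lam = 1e-3
a = np.linalg.solve(Dk.T @ Dk + lam * np.eye(n_atoms), Dk.T @ y)
x_hat = D @ a                                  # reconstructed full q-space signal

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the normal-equations matrix depends only on the dictionary and the undersampling pattern, it can be factorized once and reused across all voxels.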

  19. Fast optical recording media based on semiconductor nanostructures for image recording and processing

    SciTech Connect

    Kasherininov, P. G.; Tomasov, A. A.

    2008-11-15

    Fast optical recording media based on semiconductor nanostructures (CdTe, GaAs) for image recording and processing are developed, with a speed of up to 10^6 cycles/s (exceeding the speed of known recording media based on metal-insulator-semiconductor-(liquid crystal) (MIS-LC) structures by two to three orders of magnitude), a photosensitivity of 10^-2 V/cm^2, and a spatial resolution of 5-10 line pairs/mm. Operating principles of the nanostructures as fast optical recording media and methods for reading images recorded in such media are described. Fast optical processors for recording images in incoherent light, based on CdTe crystal nanostructures, are implemented. The possibility of applying them to fabricate image correlators is shown.

  20. Fast, moment-based estimation methods for delay network tomography

    SciTech Connect

    Lawrence, Earl Christophre; Michailidis, George; Nair, Vijayan N

    2008-01-01

    Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.

  1. Fast Object Motion Estimation Based on Dynamic Stixels.

    PubMed

    Morales, Néstor; Morell, Antonio; Toledo, Jonay; Acosta, Leopoldo

    2016-07-28

    The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction.
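
The first-level matching described above pairs stixels between frames by minimizing a cost over a bipartite graph, which is the classic linear assignment problem. A minimal sketch using scipy's Hungarian-algorithm solver; the stixel features (image column, disparity) and the cost weights are hypothetical, not the authors':

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical stixels as (image column, disparity) in two consecutive frames
prev_stx = np.array([[10.0, 5.0], [40.0, 12.0], [75.0, 8.0]])
curr_stx = np.array([[42.0, 11.5], [12.0, 5.2], [73.0, 8.3]])

# Matching cost: weighted distance in column and disparity (illustrative weights)
cost = (np.abs(prev_stx[:, [0]] - curr_stx[:, 0])
        + 10.0 * np.abs(prev_stx[:, [1]] - curr_stx[:, 1]))

# Minimum-cost bipartite matching (Hungarian algorithm)
row, col = linear_sum_assignment(cost)
matches = list(zip(row.tolist(), col.tolist()))
```

On real frames the cost function would combine more cues, and unmatched stixels (appearing or disappearing obstacles) would need dummy nodes, but the matching step itself has this shape.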

  2. Fast Object Motion Estimation Based on Dynamic Stixels

    PubMed Central

    Morales, Néstor; Morell, Antonio; Toledo, Jonay; Acosta, Leopoldo

    2016-01-01

    The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction. PMID:27483265

  3. Layout and Design in "Real Life"

    ERIC Educational Resources Information Center

    Bremer, Janet; Stocker, Donald

    2004-01-01

    Educators are required to combine their expertise and allow students to explore the different areas by using the method of collaboration in which teachers from different disciplines will create an environment where each will use their expert skills. The collaboration of a computer teacher with an art teacher resulted in the creation of Layout and…

  4. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, in which the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and by showing that the elastic relaxation step used in the model can be applied much less frequently and still produce good results.

  5. Layout of Ancient Maya Cities

    NASA Astrophysics Data System (ADS)

    Aylesworth, Grant R.

    Although there is little doubt that the ancient Maya of Mesoamerica laid their cities out based, in part, on astronomical considerations, the proliferation of "cosmograms" in contemporary scholarly discourse has complicated matters for the acceptance of rigorous archaeoastronomical research.

  6. Nanorod-Based Fast-Response Pressure-Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy; VanderWal, Randall

    2007-01-01

    A proposed program of research and development would be devoted to the exploitation of nanomaterials in pressure-sensitive paints (PSPs), which are used on wind-tunnel models for mapping surface pressures associated with flow fields. Heretofore, some success has been achieved in measuring steady-state pressures by use of PSPs, but success in measuring temporally varying pressures has been elusive because of the inherent slowness of the optical responses of these materials. A PSP contains a dye that luminesces in a suitable wavelength range in response to photoexcitation in a shorter wavelength range. The luminescence is quenched by oxygen at a rate proportional to the partial pressure of oxygen and thus proportional to the pressure of air. As a result, the intensity of luminescence varies inversely with the pressure of air. The major problem in developing a PSP that could be easily applied to a wind-tunnel model and could be useful for measuring rapidly varying pressure is to provide very high gas diffusivity for rapid, easy transport of oxygen to and from active dye molecules. Most PSPs include polymer-based binders, which limit the penetration of oxygen to dye molecules, thereby reducing responses to pressure fluctuations. The proposed incorporation of nanomaterials (more specifically, nanorods) would result in paints having nanostructured surfaces that, relative to conventional PSP surfaces, would afford easier and more nearly complete access of oxygen molecules to dye molecules. One measure of greater access is effective surface area: for a typical proposed PSP applied to a given solid surface, the nanometer-scale structural features would result in an exposed surface area more than 100 times that of a conventional PSP, and the mass of proposed PSP needed to cover the surface would be less than a tenth of the mass of the conventional PSP. One aspect of the proposed development would be to synthesize nanorods of Si/SiO2, in both tangle-mat and regular-array

  7. Autonomous mobile robot fast hybrid decision system DT-FAM based on laser system measurement LSM

    NASA Astrophysics Data System (ADS)

    Będkowski, Janusz; Jankowski, Stanisław

    2006-10-01

    In this paper, a new intelligent data-processing system for a mobile robot is described. Robot perception uses the Laser System Measurement (LSM). The innovative fast hybrid decision system is based on a fuzzy ARTMAP supported by a decision tree. A virtual robotics laboratory was implemented to carry out the experiments.

  8. Common and Specific Factors Approaches to Home-Based Treatment: I-FAST and MST

    ERIC Educational Resources Information Center

    Lee, Mo Yee; Greene, Gilbert J.; Fraser, J. Scott; Edwards, Shivani G.; Grove, David; Solovey, Andrew D.; Scott, Pamela

    2013-01-01

    Objectives: This study examined the treatment outcomes of integrated families and systems treatment (I-FAST), a moderated common factors approach, in reference to multisystemic therapy (MST), an established specific factor approach, for treating at risk children and adolescents and their families in an intensive community-based setting. Method:…

  9. Basic concepts underlying fast-neutron-based contraband interrogation technology. A systems viewpoint

    SciTech Connect

    Fink, C.L.; Guenther, P.T.; Smith, D.L.

    1992-12-01

    All accelerator-based fast-neutron contraband interrogation systems have many closely interrelated subsystems, whose performance parameters will be critically interdependent. For optimal overall performance, a systems analysis design approach is required. This paper provides a general overview of the interrelationships and the tradeoffs to be considered for optimization of nonaccelerator subsystems.

  10. Child and Parent Voices on a Community-Based Prevention Program (FAST)

    ERIC Educational Resources Information Center

    Fearnow-Kenney, Melodie; Hill, Patricia; Gore, Nicole

    2016-01-01

    Families and Schools Together (FAST) is a collaborative program involving schools, families, and community-based partners in efforts to prevent substance use, juvenile delinquency, school failure, child abuse and neglect, mental health problems, and violence. Although evaluated extensively, there remains a dearth of qualitative data on child and…

  11. Fast and efficient silicon thermo-optic switching based on reverse breakdown of pn junction.

    PubMed

    Li, Xianyao; Xu, Hao; Xiao, Xi; Li, Zhiyong; Yu, Yude; Yu, Jinzhong

    2014-02-15

    We propose and demonstrate a fast and efficient silicon thermo-optic switch based on reverse breakdown of the pn junction. Benefiting from direct heating of the silicon waveguide by embedding the pn junction in the waveguide center, fast switching with on/off times of 330 and 450 ns and efficient thermal tuning of 0.12 nm/mW for a 20 μm radius microring resonator are achieved, indicating a high figure of merit of only 8.8 mW·μs. The results show great potential for application in future optical interconnects.

  12. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The LLNL ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  13. Ultra fast x-ray streak camera for ten inch manipulator based platforms.

    PubMed

    Marley, E V; Shepherd, R; Fulkerson, S; James, L; Emig, J; Norman, D

    2012-10-01

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The Lawrence Livermore National Laboratory ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  14. Design Considerations of Fast-cycling Synchrotrons Based on Superconducting Transmission Line Magnets

    SciTech Connect

    Piekarz, H.; Hays, S.; Huang, Y.; Shiltsev, V.; /Fermilab

    2008-06-01

    Fast-cycling synchrotrons are key instruments for accelerator-based nuclear and high-energy physics programs. We explore the possibility of constructing fast-cycling synchrotrons using super-ferric, ~2 T B-field dipole magnets powered with a superconducting transmission line. We outline both the low-temperature (LTS) and the high-temperature (HTS) superconductor design options and consider dynamic power losses for an accelerator with an operation cycle of 0.5 Hz. We also briefly outline a possible power supply system for such an accelerator, and discuss the quench protection system for the magnet string powered by a transmission line conductor.

  15. Fast polarization-state tracking scheme based on radius-directed linear Kalman filter.

    PubMed

    Yang, Yanfu; Cao, Guoliang; Zhong, Kangping; Zhou, Xian; Yao, Yong; Lau, Alan Pak Tao; Lu, Chao

    2015-07-27

    We propose and experimentally demonstrate a fast polarization-tracking scheme based on a radius-directed linear Kalman filter. It has the advantage of fast convergence and is inherently insensitive to phase noise and frequency offset effects. The scheme is experimentally compared with conventional polarization-tracking methods in terms of the trackable polarization rotation angular frequency. The results show that more than an order of magnitude improvement in tracking capability is obtained for polarization-multiplexed QPSK and 16QAM signals. The influence of the filter tuning parameters on tracking performance is also investigated in detail.

  16. 48 CFR 36.517 - Layout of work.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 Federal Acquisition Regulations System; Construction and Architect-Engineer Contracts, Contract Clauses; 36.517 Layout of work. The contracting officer shall insert the clause at 52.236-17, Layout of Work, in solicitations and contracts...

  17. 48 CFR 36.517 - Layout of work.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 Federal Acquisition Regulations System; Construction and Architect-Engineer Contracts, Contract Clauses; 36.517 Layout of work. The contracting officer shall insert the clause at 52.236-17, Layout of Work, in solicitations and contracts...

  18. 48 CFR 36.517 - Layout of work.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    48 Federal Acquisition Regulations System; Construction and Architect-Engineer Contracts, Contract Clauses; 36.517 Layout of work. The contracting officer shall insert the clause at 52.236-17, Layout of Work, in solicitations and contracts...

  19. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator.

    PubMed

    Bol, G H; Hissoiny, S; Lagendijk, J J W; Raaymakers, B W

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these updates demands high-speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.
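
The inverse optimization step described above fits beamlet weights to a prescribed dose. As a toy stand-in for the paper's fast optimizer, the underlying non-negative least-squares problem can be solved by projected gradient descent; the influence matrix and target here are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical beamlet dose-influence matrix: dose = A @ w with weights w >= 0
n_vox, n_beamlets = 50, 10
A = rng.random((n_vox, n_beamlets))
d_target = A @ rng.random(n_beamlets)      # an achievable target dose

# Inverse optimization: projected gradient descent on ||A w - d_target||^2,
# projecting onto w >= 0 after each step (beamlet fluences cannot be negative)
w = np.zeros(n_beamlets)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step from the spectral norm of A
for _ in range(5000):
    w -= step * (A.T @ (A @ w - d_target))
    np.maximum(w, 0.0, out=w)

residual = np.linalg.norm(A @ w - d_target) / np.linalg.norm(d_target)
```

A clinical optimizer adds organ-at-risk penalties and runs per anatomy update, but the per-iteration cost is still dominated by the two matrix-vector products, which is what the GPU beamlet engine keeps fast.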

  20. Case-based reasoning(CBR) model for ultra-fast cooling in plate mill

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Wang, Zhaodong; Wang, Guodong

    2014-11-01

    New-generation thermo-mechanical control process (TMCP) based on ultra-fast cooling is being widely adopted in plate mills to produce high-performance steel at low cost. The ultra-fast cooling system is complex because the temperature-control error generated by the heat-transfer mathematical model and the process parameters must be optimized. In order to simplify the system and improve temperature-control precision in the ultra-fast cooling process, several existing case-based reasoning (CBR) models are reviewed. Combining these with the ultra-fast cooling process, a developed R5 CBR model is proposed, which mainly improves the case representation, similarity relation, and retrieval module. A certainty factor is defined in the semantic memory unit of the plate case, which provides not only internal data reliability but also product-performance reliability. The similarity relation is improved by a defined power-index similarity membership function. The retrieval process is simplified and retrieval efficiency is markedly improved by a windmill retrieval algorithm. The proposed CBR model is used for predicting the cooling-strategy case, and its capability is superior to that of the traditional process model. In order to perform comprehensive investigations of the ultra-fast cooling process, different steel plates are considered in the experiments. Validation experiments and industrial production runs of the proposed CBR model were carried out, demonstrating that the finish cooling temperature (FCT) error is controlled within ±25°C and the product quality rate exceeds 97%. The proposed CBR model can simplify the ultra-fast cooling system and deliver quality performance for steel products.

  1. A fluctuation-induced plasma transport diagnostic based upon fast-Fourier transform spectral analysis

    NASA Technical Reports Server (NTRS)

    Powers, E. J.; Kim, Y. C.; Hong, J. Y.; Roth, J. R.; Krawczonek, W. M.

    1978-01-01

    A diagnostic, based on fast-Fourier-transform spectral analysis techniques, that provides experimental insight into the relationship between the experimentally observable spectral characteristics of the fluctuations and the fluctuation-induced plasma transport is described. The model upon which the diagnostic technique is based and its experimental implementation are discussed. Some characteristic results obtained during the course of an experimental study of fluctuation-induced transport in the electric-field-dominated NASA Lewis bumpy torus plasma are presented.
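
At the core of such a diagnostic is FFT-based cross-spectral analysis of two fluctuation signals, from which cross-power, coherence, and cross-phase (the quantities that enter the fluctuation-induced flux estimate) are obtained. A minimal Welch-style sketch on synthetic signals; the sampling rate, test frequency, and phase shift are illustrative, not from the experiment:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical fluctuation signals (e.g. density and potential) sharing a
# coherent 25 kHz line with a 90-degree phase shift, buried in noise
fs, f0, n = 1.0e6, 25.0e3, 8000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)
y = np.sin(2 * np.pi * f0 * t - np.pi / 2) + 0.3 * rng.standard_normal(n)

# Segment-averaged (Welch-style) FFT cross-spectrum
nseg, nfft = 8, 1000
X = np.fft.rfft(x.reshape(nseg, nfft), axis=1)
Y = np.fft.rfft(y.reshape(nseg, nfft), axis=1)
Pxy = (np.conj(X) * Y).mean(axis=0)
coherence = np.abs(Pxy) ** 2 / ((np.abs(X) ** 2).mean(axis=0)
                                * (np.abs(Y) ** 2).mean(axis=0))

peak = int(np.argmax(np.abs(Pxy[1:]))) + 1     # skip the DC bin
freq = peak * fs / nfft                        # frequency of the coherent line
phase = np.angle(Pxy[peak])                    # cross-phase at that line
```

Averaging over segments suppresses the incoherent noise contribution, so the coherence at the shared line approaches one while the recovered cross-phase matches the imposed 90-degree shift.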

  2. Fast computer simulation of reconstructed image from rainbow hologram based on GPU

    NASA Astrophysics Data System (ADS)

    Shuming, Jiao; Yoshikawa, Hiroshi

    2015-10-01

    A fast computer simulation solution for rainbow hologram reconstruction based on the GPU is proposed. In the commonly used segment Fourier transform method for rainbow hologram reconstruction, computing the 2D Fourier transform of each hologram segment is very time-consuming. GPU-based parallel computing can be applied to improve the computing speed. Compared with CPU computing, simulation results indicate that the proposed GPU computing can effectively reduce the computation time by as much as a factor of eight.
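
The segment Fourier transform step that dominates the runtime is a batch of independent 2D FFTs, which is exactly the workload that parallelizes well on a GPU. A CPU sketch using numpy's batched fft2 over hologram tiles; with cupy, whose FFT interface mirrors numpy's, the same call runs on the GPU (the hologram data and tile size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 512x512 hologram, processed as 64x64 segments; each segment
# needs an independent 2D FFT, so the whole batch maps naturally onto a GPU
holo = rng.standard_normal((512, 512))
seg = 64
tiles = holo.reshape(512 // seg, seg, 512 // seg, seg).swapaxes(1, 2)

# fft2 with axes=(-2, -1) transforms all 64 tiles in one batched call;
# cupy.fft.fft2 accepts the same arguments for the GPU version
spectra = np.fft.fft2(tiles, axes=(-2, -1))
```

Since the segments share no data, the speedup comes purely from running the per-tile transforms concurrently rather than in a Python loop.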

  3. 10. Photographic copy of engineering drawing showing the plumbing layout ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Photographic copy of engineering drawing showing the plumbing layout of Test Stand 'C' Cv Cell, vacuum line, and scrubber-condenser as erected in 1977-78. JPL drawing by VTN Consolidated, Inc. Engineers, Architects, Planners, 2301 Campus Drive, Irvine, California 92664: 'JPL-ETS E-18 (C-Stand Modifications) Flow Diagram,' sheet M-2 (JPL sheet number E18/41-0), September 1, 1977. - Jet Propulsion Laboratory Edwards Facility, Test Stand C, Edwards Air Force Base, Boron, Kern County, CA

  4. 9. Photographic copy of engineering drawing showing the mechanical layout ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photographic copy of engineering drawing showing the mechanical layout of Test Stand 'C' Cv Cell, vacuum line, and scrubber-condenser as erected in 1977-78. JPL drawing by VTN Consolidated, Inc. Engineers, Architects, Planners, 2301 Campus Drive, Irvine, California 92664: 'JPL-ETS E-18 (C-Stand Modifications) Control Elevations & Schematics,' sheet M-5 (JPL sheet number E18/44-0), 1 September 1977. - Jet Propulsion Laboratory Edwards Facility, Test Stand C, Edwards Air Force Base, Boron, Kern County, CA

  5. An interactive wire-wrap board layout program

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A.

    1987-01-01

    An interactive computer-graphics-based tool for specifying the placement of electronic parts on a wire-wrap circuit board is presented. Input is a data file (currently produced by a commercial logic design system) which describes the parts used and their interconnections. Output includes printed reports describing the parts and wire paths, parts counts, placement lists, board drawing, and a tape to send to the wire-wrap vendor. The program should reduce the engineer's layout time by a factor of 3 to 5 as compared to manual methods.

  6. Document Template for Printed Circuit Board Layout

    SciTech Connect

    Anderson, J.T.; /Fermilab

    1998-01-01

    The purpose of this document is to list the information that may be required to properly specify a printed circuit board (PCB) design. You must provide sufficient information to the PCB layout vendor such that they can quote accurately and design the PCB that you need. Use the following information as a guide to write your specification. Include as much of it as is necessary to get the PCB design that you want.

  7. CLASSIFICATION OF THE MGR SITE LAYOUT SYSTEM

    SciTech Connect

    S.E. Salzman

    1999-08-31

    The purpose of this analysis is to document the Quality Assurance (QA) classification of the Monitored Geologic Repository (MGR) site layout system structures, systems and components (SSCs) performed by the MGR Safety Assurance Department. This analysis also provides the basis for revision of YMP/90-55Q, Q-List (YMP 1998). The Q-List identifies those MGR SSCs subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (QARD) (DOE 1998).

  8. Fast-Response Calmodulin-Based Fluorescent Indicators Reveal Rapid Intracellular Calcium Dynamics.

    PubMed

    Helassa, Nordine; Zhang, Xiao-hua; Conte, Ianina; Scaringi, John; Esposito, Elric; Bradley, Jonathan; Carter, Thomas; Ogden, David; Morad, Martin; Török, Katalin

    2015-11-03

    Faithful reporting of temporal patterns of intracellular Ca(2+) dynamics requires the working range of indicators to match the signals. Current genetically encoded calmodulin-based fluorescent indicators are likely to distort fast Ca(2+) signals by apparent saturation and integration due to their limiting fluorescence rise and decay kinetics. A series of probes was engineered with a range of Ca(2+) affinities and accelerated kinetics by weakening the Ca(2+)-calmodulin-peptide interactions. At 37 °C, the GCaMP3-derived probe termed GCaMP3fast is 40-fold faster than GCaMP3, with Ca(2+) decay and rise times, t1/2, of 3.3 ms and 0.9 ms, respectively, making it the fastest to date. GCaMP3fast revealed discrete transients with significantly faster Ca(2+) dynamics in neonatal cardiac myocytes than GCaMP6f. With a 5-fold increased two-photon fluorescence cross-section for Ca(2+) at 940 nm, GCaMP3fast is suitable for deep-tissue studies. The green fluorescent protein serves as a reporter, providing important novel insights into the kinetic mechanism of target recognition by calmodulin. Our strategy of matching the probe to the signal by tuning the affinity, and hence the Ca(2+) kinetics, of the indicator is applicable to the emerging new generations of calmodulin-based probes.

  9. Fast-Response Calmodulin-Based Fluorescent Indicators Reveal Rapid Intracellular Calcium Dynamics

    PubMed Central

    Helassa, Nordine; Zhang, Xiao-hua; Conte, Ianina; Scaringi, John; Esposito, Elric; Bradley, Jonathan; Carter, Thomas; Ogden, David; Morad, Martin; Török, Katalin

    2015-01-01

    Faithful reporting of temporal patterns of intracellular Ca2+ dynamics requires the working range of indicators to match the signals. Current genetically encoded calmodulin-based fluorescent indicators are likely to distort fast Ca2+ signals by apparent saturation and integration due to their limiting fluorescence rise and decay kinetics. A series of probes was engineered with a range of Ca2+ affinities and accelerated kinetics by weakening the Ca2+-calmodulin-peptide interactions. At 37 °C, the GCaMP3-derived probe termed GCaMP3fast is 40-fold faster than GCaMP3, with Ca2+ decay and rise times, t1/2, of 3.3 ms and 0.9 ms, respectively, making it the fastest to date. GCaMP3fast revealed discrete transients with significantly faster Ca2+ dynamics in neonatal cardiac myocytes than GCaMP6f. With a 5-fold increased two-photon fluorescence cross-section for Ca2+ at 940 nm, GCaMP3fast is suitable for deep-tissue studies. The green fluorescent protein serves as a reporter, providing important novel insights into the kinetic mechanism of target recognition by calmodulin. Our strategy of matching the probe to the signal by tuning the affinity, and hence the Ca2+ kinetics, of the indicator is applicable to the emerging new generations of calmodulin-based probes. PMID:26527405

  10. The development of a fast radiative transfer model based on an empirical orthogonal functions (EOF) technique

    NASA Astrophysics Data System (ADS)

    Havemann, Stephan

    2006-12-01

    Remote sensing with the new generation of highly spectrally resolving instruments, like the Atmospheric Research Interferometer Evaluation System (ARIES), or the assimilation of highly resolved spectra from satellites into Numerical Weather Prediction (NWP) systems, requires radiative transfer computations that deliver results essentially instantaneously. This paper reports on the development of such a new fast radiative transfer model. The model is based on an empirical orthogonal functions (EOF) technique. It can be used for the simulation of sensors with different characteristics and in different spectral ranges, from the solar to the infrared. For the purpose of airborne remote sensing, the fast model has been designed to work at any altitude and for slant paths, looking either down or up. The fast model works for situations with diverse temperature and humidity profiles to an accuracy of better than 0.01 K for most of the instrument channels. The EOF fast model works for clear-sky atmospheres and is applicable to atmospheres with scattering layers of aerosols or clouds. The fast model is trained with a large set of diverse atmospheric profiles, for which corresponding high-resolution spectra are obtained in forward calculations. An EOF analysis is performed on these spectra and only the leading EOFs are retained (data compression). When the fast model is applied to a new, independent profile, only the weights of the EOFs need to be calculated (i.e., predicted). Monochromatic radiances at suitable frequencies are used as predictors. The frequency selection is done by a cluster algorithm, which sorts frequencies with similar characteristics into clusters.
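
The EOF pipeline sketched in this abstract (train, compress to leading EOFs, then predict the EOF weights of a new profile from a few monochromatic predictor radiances) can be illustrated on synthetic spectra. A numpy sketch; the latent-mode spectra, predictor channels, and sizes are hypothetical stand-ins for the training set and the cluster-selected frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_chan, n_modes = 200, 300, 5

# Hypothetical training spectra driven by a few smooth latent modes
modes = np.cos(np.outer(np.arange(1, n_modes + 1),
                        np.linspace(0.0, np.pi, n_chan)))
spectra = rng.standard_normal((n_train, n_modes)) @ modes   # (n_train, n_chan)

# EOF analysis: retain only the leading components (data compression)
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
eofs = Vt[:n_modes]                       # leading EOFs
weights = (spectra - mean) @ eofs.T       # training weights

# Learn to predict the weights from a few monochromatic "predictor" channels
pred_idx = np.linspace(0, n_chan - 1, 20).astype(int)
coef, *_ = np.linalg.lstsq((spectra - mean)[:, pred_idx], weights, rcond=None)

# Fast model on a new profile: predictor radiances -> weights -> full spectrum
new_spec = rng.standard_normal(n_modes) @ modes
recon = mean + ((new_spec - mean)[pred_idx] @ coef) @ eofs

rel_err = np.linalg.norm(recon - new_spec) / np.linalg.norm(new_spec)
```

The fast step is only the small matrix products in the last lines; the expensive forward calculations and the SVD happen once, offline, during training.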

  11. Scintillator-based diagnostic for fast ion loss measurements on DIII-D.

    PubMed

    Fisher, R K; Pace, D C; García-Muñoz, M; Heidbrink, W W; Muscatello, C M; Van Zeeland, M A; Zhu, Y B

    2010-10-01

    A new scintillator-based fast ion loss detector has been installed on DIII-D with the time response (>100 kHz) needed to study energetic ion losses induced by Alfvén eigenmodes and other MHD instabilities. Based on the design used on ASDEX Upgrade, the diagnostic measures the pitch angle and gyroradius of ion losses based on the position of the ions striking the two-dimensional scintillator. For fast time response measurements, a beam splitter and fiberoptics couple a portion of the scintillator light to a photomultiplier. Reverse orbit following techniques trace the lost ions to their possible origin within the plasma. Initial DIII-D results showing prompt losses and energetic ion loss due to MHD instabilities are discussed.

  12. Scintillator-based diagnostic for fast ion loss measurements on DIII-D

    SciTech Connect

    Fisher, R. K.; Van Zeeland, M. A.; Pace, D. C.; Heidbrink, W. W.; Muscatello, C. M.; Zhu, Y. B.; Garcia-Munoz, M.

    2010-10-15

    A new scintillator-based fast ion loss detector has been installed on DIII-D with the time response (>100 kHz) needed to study energetic ion losses induced by Alfven eigenmodes and other MHD instabilities. Based on the design used on ASDEX Upgrade, the diagnostic measures the pitch angle and gyroradius of ion losses based on the position of the ions striking the two-dimensional scintillator. For fast time response measurements, a beam splitter and fiberoptics couple a portion of the scintillator light to a photomultiplier. Reverse orbit following techniques trace the lost ions to their possible origin within the plasma. Initial DIII-D results showing prompt losses and energetic ion loss due to MHD instabilities are discussed.

  13. A multilevel layout algorithm for visualizing physical and genetic interaction networks, with emphasis on their modular organization

    PubMed Central

    2012-01-01

    Background Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, such as those based on the force-directed drawing approach, may become uninformative when applied to larger networks with dense or clustered connectivity structure. Methods We implemented a modified layout plug-in, named Multilevel Layout, which applies the conventional layout algorithms within a multilevel optimization framework to better capture the hierarchical modularity of many biological networks. Using a wide variety of real life biological networks, we carried out a systematic evaluation of the method in comparison with other layout algorithms in Cytoscape. Results The multilevel approach provided both biologically relevant and visually pleasant layout solutions in most network types, hence complementing the layout options available in Cytoscape. In particular, it could improve drawing of large-scale networks of yeast genetic interactions and human physical interactions. In more general terms, the biological evaluation framework developed here enables one to assess the layout solutions from any existing or future graph drawing algorithm as well as to optimize their performance for a given network type or structure. Conclusions By making use of the multilevel modular organization when visualizing biological networks, together with the biological evaluation of the layout solutions, one can generate convenient visualizations for many network biology applications. PMID:22448851
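    The multilevel scheme the abstract describes can be caricatured briefly. This is not the Cytoscape plug-in: the sketch below coarsens a graph by greedy edge matching, lays out the smallest graph on a circle, then prolongs positions back down and refines them with a crude spring relaxation; the example graph is two invented 4-cycles joined by a bridge, standing in for two modules.

```python
import math
import random

random.seed(1)

def coarsen(edges, nodes):
    """Greedily match endpoints of edges; return coarse graph and node map."""
    merged, mapping = set(), {}
    for u, v in edges:
        if u not in merged and v not in merged and u != v:
            merged |= {u, v}
            mapping[u] = mapping[v] = "%s+%s" % (u, v)
    for n in nodes:
        mapping.setdefault(n, n)
    coarse_nodes = sorted(set(mapping.values()))
    coarse_edges = {tuple(sorted((mapping[u], mapping[v])))
                    for u, v in edges if mapping[u] != mapping[v]}
    return coarse_nodes, sorted(coarse_edges), mapping

def refine(pos, edges, rounds=50, step=0.05):
    """Crude spring relaxation: pull adjacent nodes toward unit distance."""
    for _ in range(rounds):
        for u, v in edges:
            (ux, uy), (vx, vy) = pos[u], pos[v]
            dx, dy = vx - ux, vy - uy
            d = math.hypot(dx, dy) or 1e-9
            f = step * (d - 1.0) / d
            pos[u] = (ux + f * dx, uy + f * dy)
            pos[v] = (vx - f * dx, vy - f * dy)
    return pos

def multilevel_layout(nodes, edges):
    if len(nodes) <= 3:                       # base case: place on a circle
        return {n: (math.cos(2 * math.pi * i / len(nodes)),
                    math.sin(2 * math.pi * i / len(nodes)))
                for i, n in enumerate(nodes)}
    cn, ce, mapping = coarsen(edges, nodes)
    if len(cn) == len(nodes):                 # no progress; plain refinement
        return refine({n: (random.random(), random.random()) for n in nodes},
                      edges)
    coarse_pos = multilevel_layout(cn, ce)
    # Prolongation: children start at their coarse parent, slightly jittered.
    pos = {n: (coarse_pos[mapping[n]][0] + random.uniform(-0.1, 0.1),
               coarse_pos[mapping[n]][1] + random.uniform(-0.1, 0.1))
           for n in nodes}
    return refine(pos, edges)

nodes = list("abcdefgh")
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"),   # module 1
         ("e", "f"), ("f", "g"), ("g", "h"), ("h", "e"),   # module 2
         ("d", "e")]                                       # bridge
pos = multilevel_layout(nodes, edges)
print({n: (round(x, 2), round(y, 2)) for n, (x, y) in pos.items()})
```

    The point of the multilevel pass is visible even at this scale: the two modules are separated at the coarse level before the local refinement ever runs, which is what keeps dense modular networks from collapsing into a hairball.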

  14. Assessing cognitive processes with diffusion model analyses: a tutorial based on fast-dm-30

    PubMed Central

    Voss, Andreas; Voss, Jochen; Lerche, Veronika

    2015-01-01

    Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become more and more popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for the parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modeling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss and Voss, 2007) for diffusion model data analysis. The most important improvement of the fast-dm version is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in applicability for different data sets. PMID:25870575
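    The data-generating process that fast-dm fits can be simulated directly. The sketch below is a minimal Euler-scheme simulation of the basic diffusion model (not fast-dm's estimator): evidence starts at relative position z, drifts with rate v under unit diffusion noise, and a response plus response time is recorded when either boundary (0 or a) is hit. Parameter names follow common diffusion-model notation; the values are made up.

```python
import random

random.seed(42)

def simulate_trial(v=1.0, a=1.0, z=0.5, t0=0.3, s=1.0, dt=0.001):
    """One trial: returns (boundary, response time incl. non-decision t0)."""
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + s * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return ("upper" if x >= a else "lower", t0 + t)

trials = [simulate_trial() for _ in range(2000)]
upper = [rt for resp, rt in trials if resp == "upper"]
print("P(upper) = %.2f, mean upper RT = %.3f s"
      % (len(upper) / len(trials), sum(upper) / len(upper)))
```

    Parameter estimation then runs this logic in reverse: fast-dm searches for the (v, a, z, t0) that make the predicted response-time distributions match the observed ones under the chosen optimization criterion.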

  15. Polylactide-based polyurethane shape memory nanocomposites (Fe3O4/PLAUs) with fast magnetic responsiveness

    NASA Astrophysics Data System (ADS)

    Gu, Shu-Ying; Jin, Sheng-Peng; Gao, Xie-Feng; Mu, Jian

    2016-05-01

    Polylactide-based polyurethane shape memory nanocomposites (Fe3O4/PLAUs) with fast magnetic responsiveness are presented. For the purpose of fast response and homogeneous dispersion of magnetic nanoparticles, oleic acid was used to improve the dispersibility of Fe3O4 nanoparticles in a polymer matrix. A homogeneous distribution of Fe3O4 nanoparticles in the polymer matrix was obtained for nanocomposites with low Fe3O4 loading content. A small agglomeration was observed for nanocomposites with 6 wt% and 9 wt% loading content, leading to a small decline in the mechanical properties. PLAU and its nanocomposites have glass transition around 52 °C, which can be used as the triggering temperature. PLAU and its nanocomposites have shape fixity ratios above 99%, shape recovery ratios above 82% for the first cycle and shape recovery ratios above 91% for the second cycle. PLAU and its nanocomposites also exhibit a fast water bath or magnetic responsiveness. The magnetic recovery time decreases with an increase in the loading content of Fe3O4 nanoparticles due to an improvement in heating performance for increased weight percentage of fillers. The nanocomposites have fast responses in an alternating magnetic field and have potential application in biomedical areas such as intravascular stent.

  16. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, an RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with existing work, experimental results on public datasets show that this work achieves robust traffic sign recognition with comparable accuracy and faster processing speed, including both training speed and recognition speed.
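    The paper's RIBP feature is not reproduced here, but the core trick behind rotation-invariant binary patterns is easy to show with the classic local binary pattern: an 8-bit neighbourhood code is mapped to the minimum over its circular bit rotations, so rotating the patch leaves the feature value unchanged. The tiny image below is fabricated for illustration.

```python
def rotation_invariant(code, bits=8):
    """Map a binary-pattern code to the minimum over circular bit rotations."""
    best = code
    for _ in range(bits - 1):
        code = ((code >> 1) | ((code & 1) << (bits - 1))) & ((1 << bits) - 1)
        best = min(best, code)
    return best

def lbp(image, y, x):
    """8-neighbour LBP code: bit i set where neighbour i >= centre pixel."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = image[y][x]
    return sum((1 << i) for i, (dy, dx) in enumerate(offs)
               if image[y + dy][x + dx] >= c)

img = [[10, 20, 10],
       [30, 25, 40],
       [10, 50, 10]]
code = lbp(img, 1, 1)
print(code, rotation_invariant(code))
```

    Rotating the patch only rotates the bits of the raw code (168, 84, 42, ... here), but all of these collapse to the same canonical value, which is what makes detection robust to sign rotation.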

  17. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    NASA Astrophysics Data System (ADS)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the capacity that can be deployed in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware to maintain the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM on the market. Our RAM-based architecture is completely different from that of TCAM and dramatically reduces cost and power consumption, to 62% and 52% of TCAM's, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
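    FPS-RAM's internals are not described in this abstract, but the function a forwarding engine computes, longest-prefix match over a routing table, can be sketched in software with a binary trie held in ordinary RAM. The two routes and addresses below are invented for illustration.

```python
import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix, next_hop):
    """Store a next hop at the trie node addressed by the prefix bits."""
    net = ipaddress.ip_network(prefix)
    bits = format(int(net.network_address), "032b")[:net.prefixlen]
    node = root
    for b in bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def lookup(root, addr):
    """Walk the address bits, remembering the deepest next hop seen."""
    bits = format(int(ipaddress.ip_address(addr)), "032b")
    node, best = root, None
    for b in bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[int(b)]
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, "10.0.0.0/8", "A")
insert(root, "10.1.0.0/16", "B")
print(lookup(root, "10.1.2.3"), lookup(root, "10.9.9.9"))  # B A
```

    A TCAM answers this match in one parallel lookup across all entries; RAM-based designs like FPS-RAM instead trade a small, bounded number of memory reads for much lower cost and power per stored prefix.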

  18. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neutral Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217

  19. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    SciTech Connect

    Gong, Zhenhuan; Boyuka, David; Zou, X; Liu, Gary; Podhorszki, Norbert; Klasky, Scott A; Ma, Xiaosong; Samatova, Nagiza F

    2013-01-01

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
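    The trade-off PARLO navigates can be illustrated with a toy chunk-shape calculation (the numbers and cost model are invented, not PARLO's): for a 2-D variable stored in fixed-size chunks, a row-range query and a column-range query touch very different numbers of chunks, so when both patterns matter, the best chunk shape is a weighted compromise.

```python
import math

NX, NY = 1024, 1024          # array extent
CHUNK_ELEMS = 4096           # fixed chunk size in elements

def chunks_touched(cx, cy, qx, qy):
    """Chunks intersected by a qx-by-qy query aligned to the origin."""
    return math.ceil(qx / cx) * math.ceil(qy / cy)

patterns = [((1024, 4), 0.7),   # long rows, few columns: 70% of queries
            ((4, 1024), 0.3)]   # few rows, long columns: 30% of queries

best = None
for cx in (1, 4, 16, 64, 256, 1024):
    cy = CHUNK_ELEMS // cx
    if cy > NY:
        continue
    cost = sum(w * chunks_touched(cx, cy, qx, qy)
               for (qx, qy), w in patterns)
    print("chunk %4dx%-4d expected chunks touched: %8.1f" % (cx, cy, cost))
    if best is None or cost < best[0]:
        best = (cost, cx, cy)
print("best chunk shape: %dx%d" % best[1:])
```

    With these made-up weights the square 64x64 chunk wins, while either extreme shape is an order of magnitude worse for the pattern it was not tuned to; doing this weighting at run time, before data hits storage, is the essence of the approach.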

  20. Comparing taxi clearance input layouts for advancements in flight deck automation for surface operations

    NASA Astrophysics Data System (ADS)

    Cheng, Lara W. S.

    Airport moving maps (AMMs) have been shown to decrease navigation errors, increase taxiing speed, and reduce workload when they depict airport layout, current aircraft position, and the cleared taxi route. However, current technologies are limited in their ability to depict the cleared taxi route due to the unavailability of datacomm or other means of electronically transmitting clearances from ATC to the flight deck. This study examined methods by which pilots can input ATC-issued taxi clearances to support taxi route depictions on the AMM. Sixteen general aviation (GA) pilots used a touchscreen monitor to input taxi clearances using two input layouts, softkeys and QWERTY, each with and without feedforward (graying out invalid inputs). QWERTY yielded more taxi route input errors than the softkeys layout. The presence of feedforward did not produce fewer taxi route input errors than in the non-feedforward condition. The QWERTY layout did reduce taxi clearance input times relative to the softkeys layout, but when feedforward was present this effect was observed only for the longer, 6-segment taxi clearances. It was observed that with the softkeys layout, feedforward reduced input times compared to non-feedforward but only for the 4-segment clearances. Feedforward did not support faster taxi clearance input times for the QWERTY layout. Based on the results and analyses of the present study, it is concluded that for taxi clearance inputs, (1) QWERTY remain the standard for alphanumeric inputs, and (2) feedforward be investigated further, with a focus on participant preference and performance of black-gray contrast of keys.

  1. Accurate and fast fiber transfer delay measurement based on phase discrimination and frequency measurement

    NASA Astrophysics Data System (ADS)

    Dong, J. W.; Wang, B.; Gao, C.; Wang, L. J.

    2016-09-01

    An accurate and fast fiber transfer delay measurement method is demonstrated. As a key technique, a simple ambiguity resolving process based on phase discrimination and frequency measurement is used to overcome the contradiction between measurement accuracy and system complexity. The system achieves a high measurement accuracy of 0.2 ps with a 0.1 ps measurement resolution and a large dynamic range up to 50 km as well as no dead zone.
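    The ambiguity this method resolves is generic: a phase measured at frequency f determines the delay only modulo 1/f. A coarse delay estimate, accurate to well under one cycle, then fixes the integer number of cycles N. The sketch below illustrates that arithmetic with invented numbers, not the paper's actual scheme or hardware.

```python
import math

def resolve_delay(phase_rad, f_hz, coarse_delay_s):
    """tau = (N + phase/2pi) / f, with N chosen from the coarse estimate."""
    frac = (phase_rad / (2 * math.pi)) % 1.0
    n = round(coarse_delay_s * f_hz - frac)
    return (n + frac) / f_hz

true_delay = 123.456789e-6         # ~25 km of fiber one way (made up)
f = 1e8                            # 100 MHz measurement tone
phase = (2 * math.pi * f * true_delay) % (2 * math.pi)
coarse = 123.455e-6                # coarse stage: a few ns accuracy, well
                                   # inside the 10 ns cycle of the fine tone
tau = resolve_delay(phase, f, coarse)
print("recovered delay: %.6f us" % (tau * 1e6))
```

    The fine phase alone would allow any delay differing by a multiple of 10 ns; the coarse stage only needs to be good to half a cycle for the combination to reach picosecond-level accuracy, which is how the contradiction between accuracy and system complexity is sidestepped.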

  2. Fast visible light photoelectric switch based on ultralong single crystalline V₂O₅ nanobelt.

    PubMed

    Lu, Jianing; Hu, Ming; Tian, Ye; Guo, Chuanfei; Wang, Chuang; Guo, Shengming; Liu, Qian

    2012-03-26

    A photoelectric switch with fast response to visible light (<200 μs), suitable photosensitivity and excellent repeatability is proposed based on the ultralong single crystalline V₂O₅ nanobelt, which are synthesized by chemical vapor deposition and its photoconductive mechanism can well be explained by small polaron hopping theory. Our results reveal that the switch has a great potential in next generation photodetectors and light-wave communications.

  3. LAYOUT AND SIZING OF ESF ALCOVES AND REFUGE CHAMBERS

    SciTech Connect

    John Beesley and Romeo S. Jurani

    1995-08-25

    The purpose of this analysis is to establish size requirements and approximate locations of Exploratory Studies Facility (ESF) test and operations alcoves, including refuge chambers during construction of the Topopah Spring (TS) loop. Preliminary conceptual layouts for non-deferred test alcoves will be developed to examine construction feasibility based on current test plans and available equipment. The final location and configuration layout for alcoves will be developed when in-situ rock conditions can be visually determined. This will be after the TBM has excavated beyond the alcove location and the rock has been exposed. The analysis will examine the need for construction of walkways and electrical alcoves in the ramps and main drift. Niches that may be required to accommodate conveyor booster drives and alignments are not included in this analysis. The analysis will develop design criteria for refuge chambers to meet MSHA requirements and will examine the strategic location of refuge chambers based on their potential use in various ESF fire scenarios. This document supersedes DI:BABE00000-01717-0200-00003 Rev 01, ''TS North Ramp Alcove and Stubout Location Analysis'' in its entirety (Reference 5-6).

  4. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, a lot of observation facilities are needed to catalog space objects, especially in low earth orbit. Surveillance of Low earth orbit objects are mainly rely on ground-based radar, due to the ability limitation of exist radar facilities, a large number of ground-based radar need to build in the next few years in order to meet the current space surveillance demands. How to optimize the embattling of ground-based radar surveillance network is a problem to need to be solved. The traditional method for embattling optimization of ground-based radar surveillance network is mainly through to the detection simulation of all possible stations with cataloged data, and makes a comprehensive comparative analysis of various simulation results with the combinational method, and then selects an optimal result as station layout scheme. This method is time consuming for single simulation and high computational complexity for the combinational analysis, when the number of stations increases, the complexity of optimization problem will be increased exponentially, and cannot be solved with traditional method. There is no better way to solve this problem till now. In this paper, target detection procedure was simplified. Firstly, the space coverage of ground-based radar was simplified, a space coverage projection model of radar facilities in different orbit altitudes was built; then a simplified objects cross the radar coverage model was established according to the characteristics of space objects orbit motion; after two steps simplification, the computational complexity of the target detection was greatly simplified, and simulation results shown the correctness of the simplified results. In addition, the detection areas of ground-based radar network can be easily computed with the

  5. MetaSensing's FastGBSAR: ground based radar for deformation monitoring

    NASA Astrophysics Data System (ADS)

    Rödelsperger, Sabine; Meta, Adriano

    2014-10-01

    The continuous monitoring of ground deformation and structural movement has become an important task in engineering. MetaSensing introduces a novel sensor system, the Fast Ground Based Synthetic Aperture Radar (FastGBSAR), based on innovative technologies that have already been successfully applied to airborne SAR applications. The FastGBSAR allows the remote sensing of deformations of a slope or infrastructure from up to a distance of 4 km. The FastGBSAR can be set up in two different configurations: in Real Aperture Radar (RAR) mode it is capable of accurately measuring displacements along a linear range profile, ideal for monitoring vibrations of structures like bridges and towers (displacement accuracy up to 0.01 mm). Modal parameters can be determined within half an hour. Alternatively, in Synthetic Aperture Radar (SAR) configuration it produces two-dimensional displacement images with an acquisition time of less than 5 seconds, ideal for monitoring areal structures like dams, landslides and open pit mines (displacement accuracy up to 0.1 mm). The MetaSensing FastGBSAR is the first ground-based SAR instrument on the market able to produce two-dimensional deformation maps with this high acquisition rate. By that, deformation time series with a high temporal and spatial resolution can be generated, giving detailed information useful to determine the deformation mechanisms involved and eventually to predict an incoming failure. The system is fully portable and can be quickly installed on bedrock or a basement. The data acquisition and processing can be fully automated, leading to a low effort in instrument operation and maintenance. Due to the short acquisition time of FastGBSAR, the coherence between two acquisitions is very high and the phase unwrapping is simplified enormously. This yields a high density of resolution cells with good quality and high reliability of the acquired deformations. The deformation maps can directly be used as input into an Early
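    The displacement sensitivity quoted above comes from differential interferometry: the line-of-sight displacement between two acquisitions is proportional to the interferometric phase difference over the two-way path, d = -λ·Δφ/(4π). The wavelength below is a typical Ku-band value, an assumption for illustration rather than a FastGBSAR specification, and the sign convention (negative phase = motion toward the radar) is one common choice.

```python
import math

wavelength = 0.0174                   # metres, ~17.2 GHz Ku band (assumed)

def displacement(dphi_rad):
    """Line-of-sight displacement from interferometric phase (two-way path)."""
    return -wavelength * dphi_rad / (4 * math.pi)

dphi = -0.5                           # radians, motion toward the radar
print("LOS displacement: %.3f mm" % (displacement(dphi) * 1e3))
```

    A phase change of half a radian already corresponds to well under a millimetre of motion, which is why sub-0.1 mm accuracy is achievable once the short acquisition interval keeps the phase coherent and easy to unwrap.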

  6. Repository surface design site layout analysis

    SciTech Connect

    Montalvo, H.R.

    1998-02-27

    The purpose of this analysis is to establish the arrangement of the Yucca Mountain Repository surface facilities and features near the North Portal. The analysis updates and expands the North Portal area site layout concept presented in the ACD, including changes to reflect the resizing of the Waste Handling Building (WHB), Waste Treatment Building (WTB), Carrier Preparation Building (CPB), and site parking areas; the addition of the Carrier Washdown Buildings (CWBs); the elimination of the Cask Maintenance Facility (CMF); and the development of a concept for site grading and flood control. The analysis also establishes the layout of the surface features (e.g., roads and utilities) that connect all the repository surface areas (North Portal Operations Area, South Portal Development Operations Area, Emplacement Shaft Surface Operations Area, and Development Shaft Surface Operations Area) and locates an area for a potential lag storage facility. Details of South Portal and shaft layouts will be covered in separate design analyses. The objective of this analysis is to provide a suitable level of design for the Viability Assessment (VA). The analysis was revised to incorporate additional material developed since the issuance of Revision 01. This material includes safeguards and security input, utility system input (size and location of fire water tanks and pump houses, potable water and sanitary sewage rates, size of wastewater evaporation pond, size and location of the utility building, size of the bulk fuel storage tank, and size and location of other exterior process equipment), main electrical substation information, redundancy of water supply and storage for the fire support system, and additional information on the storm water retention pond.

  7. Pharmacy layout: What are consumers' perceptions?

    PubMed

    Emmett, Dennis; Paul, David P; Chandra, Ashish; Barrett, Hilton

    2006-01-01

    The physical layout of a retail pharmacy can play a significant role in the development of the customers' perceptions which can have a positive (or negative) impact on its sales potential. Compared to most general merchandise stores, pharmacies are more concerned about safety and security issues due to the nature of their products. This paper will discuss these aspects as well as the physical and professional environments of retail pharmacies that influence the perceptions of customers and how these vary whether chain, independent, or hospital pharmacies.

  8. Pharmacy layout: What are consumers' perceptions?

    PubMed

    Emmett, Dennis; Paul, David P; Chandra, Ashish; Barrett, Hilton

    2006-01-01

    The physical layout of a retail pharmacy can play a significant role in the development of the customers' perceptions which can have a positive (or negative) impact on its sales potential. Compared to most general merchandise stores, pharmacies are more concerned about safety and security issues due to the nature of their products. This paper will discuss these aspects as well as the physical and professional environments of retail pharmacies that influence the perceptions of customers and how these vary whether chain, independent, or hospital pharmacies. PMID:17062535

  9. Fast entropy-based CABAC rate estimation for mode decision in HEVC.

    PubMed

    Chen, Wei-Gang; Wang, Xun

    2016-01-01

    High efficiency video coding (HEVC) seeks the best code tree configuration, the best prediction unit division and the prediction mode, by evaluating the rate-distortion functional in a recursive way and using a "try all and select the best" strategy. Further, HEVC supports only context adaptive binary arithmetic coding (CABAC), which has the disadvantage of being highly sequential and having strong data dependencies, as its entropy coder. The development of a fast rate estimation algorithm for CABAC-based coding therefore has great practical significance for mode decision in HEVC. There are three elementary steps in the CABAC encoding process: binarization, context modeling, and binary arithmetic coding. Typical approaches to fast CABAC rate estimation simplify or eliminate the last two steps, but leave the binarization step unchanged. To maximize the reduction of computational complexity, we propose a fast entropy-based CABAC rate estimator in this paper. It eliminates not only the modeling and coding steps, but also the binarization step. Experimental results demonstrate that the proposed estimator reduces the computational complexity of mode decision in HEVC by 9-23% with negligible PSNR loss and BD-rate increment, and therefore exhibits applicability to practical HEVC encoder implementation.
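    The idea behind entropy-based rate estimation can be shown in miniature: instead of running the arithmetic coder, sum the information content -log2(p) of each bin under its context model's current probability. The context setup below is invented for illustration; real CABAC drives p through table-based probability states, and the paper's estimator additionally avoids binarization.

```python
import math

def estimated_bits(bins, p_lps=0.2):
    """bins: (bin_value, most_probable_symbol) pairs; p_lps: LPS probability.
    Returns the entropy estimate of the arithmetic-coded length in bits."""
    bits = 0.0
    for b, mps in bins:
        p = (1.0 - p_lps) if b == mps else p_lps
        bits += -math.log2(p)
    return bits

# 8 bins, 6 of them equal to their context's most probable symbol (MPS):
bins = [(1, 1)] * 6 + [(0, 1)] * 2
print("estimated rate: %.2f bits" % estimated_bits(bins))
```

    MPS bins cost a fraction of a bit each while the two LPS bins dominate the total, mirroring how the real coder spends bits; summing these terms is far cheaper than simulating the sequential arithmetic-coding state machine for every candidate mode.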

  10. Early-branching or fast-evolving eukaryotes? An answer based on slowly evolving positions.

    PubMed

    Philippe, H; Lopez, P; Brinkmann, H; Budin, K; Germot, A; Laurent, J; Moreira, D; Müller, M; Le Guyader, H

    2000-06-22

    The current paradigm of eukaryotic evolution is based primarily on comparative analysis of ribosomal RNA sequences. It shows several early-emerging lineages, mostly amitochondriate, which might be living relics of a progressive assembly of the eukaryotic cell. However, the analysis of slow-evolving positions, carried out with the newly developed slow-fast method, reveals that these lineages are, in terms of nucleotide substitution, fast-evolving ones, misplaced at the base of the tree by a long branch attraction artefact. Since the fast-evolving groups are not always the same, depending on which macromolecule is used as a marker, this explains most of the observed incongruent phylogenies. The current paradigm of eukaryotic evolution thus has to be seriously re-examined as the eukaryotic phylogeny is presently best summarized by a multifurcation. This is consistent with the Big Bang hypothesis that all extant eukaryotic lineages are the result of multiple cladogeneses within a relatively brief period, although insufficiency of data is also a possible explanation for the lack of resolution. For further resolution, rare evolutionary events such as shared insertions and/or deletions or gene fusions might be helpful.
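    The slow-fast logic can be caricatured in a few lines: score each alignment column by a crude rate proxy (here, the number of distinct character states) and keep only the slowest columns for phylogenetic analysis. The toy alignment is fabricated; the actual method estimates substitution rates within predefined monophyletic groups and works on far more positions.

```python
alignment = {
    "taxonA": "ACGTACGTAA",
    "taxonB": "ACGTACGTAC",
    "taxonC": "ACGAACGTGT",
    "taxonD": "ACTAACCTGG",
}
length = len(next(iter(alignment.values())))

def n_states(col):
    """Number of distinct characters observed in one alignment column."""
    return len({seq[col] for seq in alignment.values()})

# Keep columns with at most 2 observed states (the "slow" positions).
slow = [c for c in range(length) if n_states(c) <= 2]
print("slow columns:", slow)
```

    Discarding the fastest-evolving positions (column 9 here, with all four states) is what removes the saturated sites responsible for long-branch-attraction artefacts, at the cost of fewer informative characters.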

  11. Fast entropy-based CABAC rate estimation for mode decision in HEVC.

    PubMed

    Chen, Wei-Gang; Wang, Xun

    2016-01-01

    High efficiency video coding (HEVC) seeks the best code tree configuration, the best prediction unit division and the prediction mode, by evaluating the rate-distortion functional in a recursive way and using a "try all and select the best" strategy. Further, HEVC only supports context adaptive binary arithmetic coding (CABAC), which has the disadvantage of being highly sequential and having strong data dependencies, as the entropy coder. So, the development of a fast rate estimation algorithm for CABAC-based coding has a great practical significance for mode decision in HEVC. There are three elementary steps in CABAC encoding process: binarization, context modeling, and binary arithmetic coding. Typical approaches to fast CABAC rate estimation simplify or eliminate the last two steps, but leave the binarization step unchanged. To maximize the reduction of computational complexity, we propose a fast entropy-based CABAC rate estimator in this paper. It eliminates not only the modeling and the coding steps, but also the binarization step. Experimental results demonstrate that the proposed estimator is able to reduce the computational complexity of the mode decision in HEVC by 9-23 % with negligible PSNR loss and BD-rate increment, and therefore exhibits applicability to practical HEVC encoder implementation. PMID:27386240

  12. Early-branching or fast-evolving eukaryotes? An answer based on slowly evolving positions.

    PubMed Central

    Philippe, H; Lopez, P; Brinkmann, H; Budin, K; Germot, A; Laurent, J; Moreira, D; Müller, M; Le Guyader, H

    2000-01-01

    The current paradigm of eukaryotic evolution is based primarily on comparative analysis of ribosomal RNA sequences. It shows several early-emerging lineages, mostly amitochondriate, which might be living relics of a progressive assembly of the eukaryotic cell. However, the analysis of slow-evolving positions, carried out with the newly developed slow-fast method, reveals that these lineages are, in terms of nucleotide substitution, fast-evolving ones, misplaced at the base of the tree by a long branch attraction artefact. Since the fast-evolving groups are not always the same, depending on which macromolecule is used as a marker, this explains most of the observed incongruent phylogenies. The current paradigm of eukaryotic evolution thus has to be seriously re-examined as the eukaryotic phylogeny is presently best summarized by a multifurcation. This is consistent with the Big Bang hypothesis that all extant eukaryotic lineages are the result of multiple cladogeneses within a relatively brief period, although insufficiency of data is also a possible explanation for the lack of resolution. For further resolution, rare evolutionary events such as shared insertions and/or deletions or gene fusions might be helpful. PMID:10902687

  13. Optimization of signal-to-noise ratio for wireless light-emitting diode communication in modern lighting layouts

    NASA Astrophysics Data System (ADS)

    Azizan, Luqman A.; Ab-Rahman, Mohammad S.; Hassan, Mazen R.; Bakar, A. Ashrif A.; Nordin, Rosdiadee

    2014-04-01

    White light-emitting diodes (LEDs) are predicted to be widely used in domestic applications in the future, because they are becoming widespread in commercial lighting applications. The ability of LEDs to be modulated at high speeds offers the possibility of using them as sources for communication instead of illumination. The growing interest in using these devices for both illumination and communication requires attention to combine this technology with modern lighting layouts. A dual-function system is applied to three models of modern lighting layouts: the hybrid corner lighting layout (HCLL), the hybrid wall lighting layout (HWLL), and the hybrid edge lighting layout (HELL). Based on the analysis, the relationship between the space adversity and the signal-to-noise ratio (SNR) performance is demonstrated for each model. The key factor that affects the SNR performance of visible light communication is the reliance on the design parameter that is related to the number and position of LED lights. The model of HWLL is chosen as the best layout, since 61% of the office area is considered as an excellent communication area and the difference between the area classification, Δp, is 22%. Thus, this system is applicable to modern lighting layouts.
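    The SNR figures such layout studies compare are computed from a link-budget model; the sketch below uses the standard Lambertian LED channel model common in indoor-VLC work, with every numeric value an illustrative assumption rather than a parameter from this paper.

```python
import math

def received_power(pt, half_angle_deg, d, irr_deg, inc_deg, area):
    """Lambertian LOS link: Pr = Pt*(m+1)/(2*pi*d^2)*cos^m(irr)*cos(inc)*A."""
    m = -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))
    return (pt * (m + 1) / (2 * math.pi * d ** 2)
            * math.cos(math.radians(irr_deg)) ** m
            * math.cos(math.radians(inc_deg)) * area)

pt = 20e-3            # transmitted optical power per LED, W (assumed)
pr = received_power(pt, half_angle_deg=60, d=2.0, irr_deg=15, inc_deg=15,
                    area=1e-4)
resp = 0.54           # photodiode responsivity, A/W (assumed)
noise_var = 1e-15     # total noise variance, A^2 (shot + thermal, assumed)
snr_db = 10 * math.log10((resp * pr) ** 2 / noise_var)
print("SNR: %.1f dB" % snr_db)
```

    Evaluating this expression over a grid of receiver positions for each candidate LED arrangement is what yields the "excellent communication area" percentages used to rank the HCLL, HWLL and HELL layouts.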

  14. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ). However, the underlying optimization approach can be used for any heliostat type. During the optimization the performance is calculated using the ray-tracing tool SolCal. The costs of the heliostats are calculated by use of a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts. For each configuration a cost-performance ratio is calculated. Based on that, the best geometry and field layout can be selected in each optimization step. In order to find the best configuration, this step is repeated until no significant improvement in the results is observed.
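
    The select-best-by-cost-performance loop can be sketched as a simplified (1+λ)-style evolutionary scheme standing in for the full genetic algorithm; the `performance` and `cost` functions below are invented stand-ins for the SolCal ray-tracing result and the detailed cost function, and the parameter names are hypothetical:

```python
import random

def performance(params):
    # Hypothetical stand-in for the SolCal ray-tracing result (annual output).
    return (100.0 - (params["facet_size"] - 0.5) ** 2 * 40
            - (params["row_spacing"] - 2.0) ** 2 * 10)

def cost(params):
    # Hypothetical stand-in for the detailed cost function.
    return 50.0 + 20.0 * params["facet_size"] + 5.0 * params["row_spacing"]

def cost_performance(params):
    p = performance(params)
    return float("inf") if p <= 0 else cost(params) / p  # lower is better

def mutate(params, rng):
    child = dict(params)
    key = rng.choice(list(child))
    child[key] = max(0.1, child[key] + rng.gauss(0, 0.05))
    return child

def optimize(initial, generations=200, pop_size=20, seed=1):
    rng = random.Random(seed)
    best, best_score = initial, cost_performance(initial)
    for _ in range(generations):
        for cand in (mutate(best, rng) for _ in range(pop_size)):
            s = cost_performance(cand)
            if s < best_score:          # keep the best cost-performance ratio
                best, best_score = cand, s
    return best, best_score

best, score = optimize({"facet_size": 0.8, "row_spacing": 2.5})
```

    A production run would also mutate field-layout parameters and stop on the paper's no-significant-improvement criterion rather than a fixed generation count.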

  15. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    PubMed

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e., the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  16. A fast multispectral light synthesiser based on LEDs and a diffraction grating.

    PubMed

    Belušič, Gregor; Ilić, Marko; Meglič, Andrej; Pirih, Primož

    2016-01-01

    Optical experiments often require fast-switching light sources with adjustable bandwidths and intensities. We constructed a wavelength combiner based on a reflective planar diffraction grating and light emitting diodes with emission peaks from 350 to 630 nm that were positioned at the angles corresponding to the first diffraction order of the reversed beam. The combined output beam was launched into a fibre. The spacing between 22 equally wide spectral bands was about 15 nm. The time resolution of the pulse-width modulation drivers was 1 ms. The source was validated with a fast intracellular measurement of the spectral sensitivity of blowfly photoreceptors. In hyperspectral imaging of Xenopus skin circulation, the wavelength resolution was adequate to resolve haemoglobin absorption spectra. The device contains no moving parts, has low stray light and is intrinsically capable of multi-band output. Possible applications include visual physiology, biomedical optics, microscopy and spectroscopy. PMID:27558155
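
    The LED mounting angles follow from the grating equation d·sinθ = mλ at first order (m = 1). A quick sketch; the 600 lines/mm groove density is an assumed value, not taken from the paper:

```python
import math

GROOVES_PER_MM = 600          # assumed groove density
d = 1e-3 / GROOVES_PER_MM     # groove spacing in metres

def first_order_angle_deg(wavelength_nm):
    # Grating equation for the reversed beam at normal incidence:
    # d * sin(theta) = m * lambda, with m = 1.
    s = wavelength_nm * 1e-9 / d
    if abs(s) > 1:
        raise ValueError("no first-order solution at this wavelength")
    return math.degrees(math.asin(s))

# Angular positions for three of the LED peak wavelengths (nm).
angles = {wl: first_order_angle_deg(wl) for wl in (350, 490, 630)}
```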

  17. A fast multispectral light synthesiser based on LEDs and a diffraction grating

    NASA Astrophysics Data System (ADS)

    Belušič, Gregor; Ilić, Marko; Meglič, Andrej; Pirih, Primož

    2016-08-01

    Optical experiments often require fast-switching light sources with adjustable bandwidths and intensities. We constructed a wavelength combiner based on a reflective planar diffraction grating and light emitting diodes with emission peaks from 350 to 630 nm that were positioned at the angles corresponding to the first diffraction order of the reversed beam. The combined output beam was launched into a fibre. The spacing between 22 equally wide spectral bands was about 15 nm. The time resolution of the pulse-width modulation drivers was 1 ms. The source was validated with a fast intracellular measurement of the spectral sensitivity of blowfly photoreceptors. In hyperspectral imaging of Xenopus skin circulation, the wavelength resolution was adequate to resolve haemoglobin absorption spectra. The device contains no moving parts, has low stray light and is intrinsically capable of multi-band output. Possible applications include visual physiology, biomedical optics, microscopy and spectroscopy.

  18. A fast multispectral light synthesiser based on LEDs and a diffraction grating

    PubMed Central

    Belušič, Gregor; Ilić, Marko; Meglič, Andrej; Pirih, Primož

    2016-01-01

    Optical experiments often require fast-switching light sources with adjustable bandwidths and intensities. We constructed a wavelength combiner based on a reflective planar diffraction grating and light emitting diodes with emission peaks from 350 to 630 nm that were positioned at the angles corresponding to the first diffraction order of the reversed beam. The combined output beam was launched into a fibre. The spacing between 22 equally wide spectral bands was about 15 nm. The time resolution of the pulse-width modulation drivers was 1 ms. The source was validated with a fast intracellular measurement of the spectral sensitivity of blowfly photoreceptors. In hyperspectral imaging of Xenopus skin circulation, the wavelength resolution was adequate to resolve haemoglobin absorption spectra. The device contains no moving parts, has low stray light and is intrinsically capable of multi-band output. Possible applications include visual physiology, biomedical optics, microscopy and spectroscopy. PMID:27558155

  19. A Fast and Robust Ellipse-Detection Method Based on Sorted Merging

    PubMed Central

    Ren, Guanghui; Zhao, Yaqin; Jiang, Lihui

    2014-01-01

    A fast and robust ellipse-detection method based on sorted merging is proposed in this paper. This method first represents the edge bitmap approximately with a set of line segments and then gradually merges the line segments into elliptical arcs and ellipses. To achieve high accuracy, a sorted merging strategy is proposed: the merging degrees of line segments/elliptical arcs are estimated, and line segments/elliptical arcs are merged in descending order of the merging degrees, which significantly improves the merging accuracy. During the merging process, multiple properties of ellipses are utilized to filter line segment/elliptical arc pairs, making the method very efficient. In addition, an ellipse-fitting method is proposed that restricts the maximum ratio of the semimajor axis to the semiminor axis, further improving the merging accuracy. Experimental results indicate that the proposed method is robust to outliers, noise, and partial occlusion and is fast enough for real-time applications. PMID:24782661
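
    The sorted-merging strategy can be illustrated on a deliberately simplified 1-D analogue, where the "merging degree" of a pair of segments is just the inverse of the gap between them, so pairs are merged in descending degree (ascending gap) order; the 2-D method scores arc pairs with ellipse properties instead:

```python
def sorted_merge(segments, max_gap=1.0):
    # segments: list of (start, end) intervals on a line, a toy stand-in for
    # line segments / elliptical arcs.  Always merge the currently best pair
    # (smallest gap = highest merging degree) and re-score after each merge.
    segs = sorted(segments)
    while len(segs) > 1:
        gaps = [(segs[i + 1][0] - segs[i][1], i) for i in range(len(segs) - 1)]
        gap, i = min(gaps)
        if gap > max_gap:               # no remaining pair is mergeable
            break
        merged = (segs[i][0], max(segs[i][1], segs[i + 1][1]))
        segs[i:i + 2] = [merged]
    return segs

merged = sorted_merge([(0, 2), (2.4, 5), (9, 10), (5.2, 6)])
```

    Re-scoring after every merge is what distinguishes the sorted strategy from a single fixed-order pass.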

  20. Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Yin, Xindao; Shi, Luyao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis; Toumoulin, Christine

    2013-08-01

    In abdomen computed tomography (CT), repeated radiation exposures are often inevitable for cancer patients who receive surgery or radiotherapy guided by CT images. Low-dose scans should thus be considered in order to avoid the harm of accumulative x-ray radiation. This work is aimed at improving abdomen tumor CT images from low-dose scans by using a fast dictionary learning (DL) based processing. Stemming from sparse representation theory, the proposed patch-based DL approach allows effective suppression of both mottled noise and streak artifacts. The experiments carried out on clinical data show that the proposed method brings encouraging improvements in abdomen low-dose CT images with tumors.
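
    A minimal sketch of patch-based sparse denoising, substituting a fixed orthonormal 2-D DCT dictionary (the classical initialization in dictionary-learning methods) for the trained dictionary used in the paper; all image data and the threshold are invented:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal 1-D DCT-II basis (rows = frequencies, columns = positions).
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def denoise_patches(image, patch=8, thresh=0.1):
    # Non-overlapping patches; each patch is sparse-coded by hard-thresholding
    # its 2-D DCT coefficients and then reconstructed.
    D = dct_matrix(patch)
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros_like(out)
    for i in range(0, image.shape[0] - patch + 1, patch):
        for j in range(0, image.shape[1] - patch + 1, patch):
            block = image[i:i + patch, j:j + patch]
            coef = D @ block @ D.T           # analysis (forward 2-D DCT)
            coef[np.abs(coef) < thresh] = 0  # sparse approximation
            out[i:i + patch, j:j + patch] += D.T @ coef @ D
            weight[i:i + patch, j:j + patch] += 1
    return out / np.maximum(weight, 1)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
restored = denoise_patches(noisy, thresh=0.15)
```

    The actual method learns the dictionary from data and uses overlapping patches, which suppresses the streak artifacts a fixed DCT basis cannot.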

  1. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    NASA Astrophysics Data System (ADS)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
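
    The analytical route rests on two facts: an axis-aligned rectangle has a closed-form continuous FT (a product of sinc functions with a shift phase), and by linearity the FT of a whole layer is the sum of the per-shape transforms. A minimal numpy sketch with hypothetical rectangles:

```python
import numpy as np

def rect_ft(fx, fy, w, h, cx, cy):
    # Continuous 2-D FT of a w-by-h rectangle centred at (cx, cy):
    # F(fx, fy) = w*h * sinc(w*fx) * sinc(h*fy) * exp(-2j*pi*(fx*cx + fy*cy))
    # (np.sinc(x) = sin(pi*x)/(pi*x), matching this convention).
    return (w * h * np.sinc(w * fx) * np.sinc(h * fy)
            * np.exp(-2j * np.pi * (fx * cx + fy * cy)))

def layout_ft(rects, fx, fy):
    # Layer decomposed into axis-aligned rectangles; sum the analytical FTs.
    return sum(rect_ft(fx, fy, *r) for r in rects)

# Hypothetical two-rectangle "layout": (w, h, cx, cy) in micrometres.
rects = [(1.0, 2.0, 0.0, 0.0), (0.5, 0.5, 1.5, 0.0)]
F0 = layout_ft(rects, 0.0, 0.0)   # zero frequency equals the total area
```

    Because the expression is evaluated per frequency point, it parallelizes trivially across GPU threads and never requires a pixelated mask image.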

  2. Document reconstruction by layout analysis of snippets

    NASA Astrophysics Data System (ADS)

    Kleber, Florian; Diem, Markus; Sablatnig, Robert

    2010-02-01

    Document analysis is used to analyze entire forms (e.g. intelligent form analysis, table detection) or to describe the layout/structure of a document. Skew detection of scanned documents is also performed to support OCR algorithms that are sensitive to skew. In this paper, document analysis is applied to snippets of torn documents to calculate features for their reconstruction. Documents can be destroyed either intentionally, to make the printed content unavailable (e.g. tax fraud investigation, business crime), or by time-induced degeneration of ancient documents (e.g. bad storage conditions). Current reconstruction methods for manually torn documents deal with shape matching, inpainting, and texture synthesis techniques. In this paper, the possibility of using document analysis techniques on snippets to support the matching algorithm with additional features is shown. This comprises a rotational analysis, a color analysis, and a line detection. As future work it is planned to extend the feature set with the paper type (blank, checked, lined), the type of writing (handwritten vs. machine printed), and the text layout of a snippet (text size, line spacing). Preliminary results show that these pre-processing steps can be performed reliably on a real dataset consisting of 690 snippets.

  3. Framework for identifying recommended rules and DFM scoring model to improve manufacturability of sub-20nm layout design

    NASA Astrophysics Data System (ADS)

    Pathak, Piyush; Madhavan, Sriram; Malik, Shobhit; Wang, Lynn T.; Capodieci, Luigi

    2012-03-01

    This paper addresses the framework for building critical recommended rules and a methodology for devising scoring models using simulation or silicon data. Recommended rules need to be applied to critical layout configurations (edge- or polygon-based geometric relations), which can cause yield issues depending on layout context and process variability. Determining critical recommended rules is the first step of this framework. Based on process specifications and design rule calculations, recommended rules are characterized by evaluating the manufacturability response to improvements in a layout-dependent parameter. This study is applied to critical 20nm recommended rules. In order to enable the scoring of layouts, this paper also discusses a CAD framework involved in supporting use-models for improving the DFM-compliance of a physical design.

  4. Operator Station Design System - A computer aided design approach to work station layout

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.

    1979-01-01

    The Operator Station Design System is resident in NASA's Johnson Space Center Spacecraft Design Division Performance Laboratory. It includes stand-alone minicomputer hardware and Panel Layout Automated Interactive Design and Crew Station Assessment of Reach software. The data base consists of the Shuttle Transportation System Orbiter Crew Compartment (in part), the Orbiter payload bay and remote manipulator (in part), and various anthropometric populations. The system is utilized to provide panel layouts, assess reach and vision, determine interference and fit problems early in the design phase, study design applications as a function of anthropometric and mission requirements, and to accomplish conceptual design to support advanced study efforts.

  5. The effect of concentrator field layout on the performance of point-focus distributed receiver systems

    NASA Technical Reports Server (NTRS)

    Pons, R. L.; Dugan, A. F.

    1984-01-01

    The effect of concentrator field layout on the technical-economic performance of a point-focusing distributed receiver (PFDR) solar thermal power plant is presented. The plant design is based on the small community prototype system currently under development for JPL/DOE; parabolic dish concentrators are employed, and small heat engines are used to generate electricity at each dish. The effect of field size, array proportions, dish-to-dish spacing and packing fraction (concentrator-land area ratio) are presented for typical PFDR layouts. Economic analyses are carried out to determine optimum packing fraction as a function of site cost.
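
    The packing fraction is plain geometry: the dish aperture area divided by the land area allocated per dish. A quick sketch with assumed dimensions (not values from the study):

```python
import math

def packing_fraction(dish_diameter, spacing_ew, spacing_ns):
    # Concentrator-to-land area ratio for a rectangular field grid: each dish
    # of aperture area pi*D^2/4 occupies a land cell spacing_ew * spacing_ns.
    aperture = math.pi * dish_diameter ** 2 / 4
    return aperture / (spacing_ew * spacing_ns)

# Hypothetical 11-m dish on a 1.5D (east-west) by 2D (north-south) grid.
pf = packing_fraction(11.0, 16.5, 22.0)
```

    Tighter spacing raises the packing fraction and lowers land cost per kW, but increases dish-to-dish shading losses, which is the trade-off the economic analysis optimizes.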

  6. Parallel processing of layout data with selective data distribution

    NASA Astrophysics Data System (ADS)

    Pereira, Mark; Bhat, Nitin; Srinivas, Preethi

    2006-10-01

    With the increase in layout data (GDSII) size due to finer geometries and resolution enhancement techniques such as Optical Proximity Correction (OPC) and Phase Shift Mask (PSM), layout data is proving too voluminous for single-CPU machines to process. Post-layout tools have now moved towards distributed computing techniques to process this data more quickly. Typical distributed computing architectures involve distributing the layout data to various workstations, each of which then processes its part of the data in parallel. This approach works well provided the amount of data to be distributed is not too large. As the size of the layout data increases significantly, the time taken to transfer the layout data between the workstations is turning out to be a major bottleneck. This bottleneck is further highlighted because the time taken for the actual operations scales down almost linearly with the number of workstations employed, and because workstation clock speeds continue to improve. The focus of this paper is on a smart way of distributing the layout data so that the amount of redundant data transfer is significantly reduced. This is achieved by selective data distribution, wherein the layout data is fragmented and each workstation is provided with minimal and sufficient layout information for it to determine the actual fragments required for its processing.
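
    At its core, selective distribution means sending each workstation only the fragments relevant to its assigned region, rather than the whole layout; a minimal sketch with an invented fragment record format:

```python
def fragments_for_worker(fragments, region):
    # Selective distribution: ship a worker only the layout fragments whose
    # bounding boxes intersect its assigned processing region.
    # Boxes are (x0, y0, x1, y1) with x0 < x1 and y0 < y1.
    def intersects(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
    return [f for f in fragments if intersects(f["bbox"], region)]

# Hypothetical fragments of a layer and one worker's region.
frags = [
    {"id": 0, "bbox": (0, 0, 10, 10)},
    {"id": 1, "bbox": (20, 0, 30, 10)},
    {"id": 2, "bbox": (8, 8, 22, 12)},
]
selected = fragments_for_worker(frags, (0, 0, 15, 15))
```

    In practice the region is grown by the process interaction distance so that boundary effects are still computed correctly.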

  7. Automatic page composition with nested sub-layouts

    NASA Astrophysics Data System (ADS)

    Hunter, Andrew

    2013-03-01

    This paper provides an overview of a system for the automatic composition of publications. The system first composes nested hierarchies of contents, then applies layout engines at branch points in the hierarchies to explore layout options, and finally selects the best overall options for the finished publications. Although the system has been developed as a general platform for automated publishing, this paper describes its application to the composition and layout of a magazine-like publication for social content from Facebook. The composition process works by assembling design fragments that have been populated with text and images from the Facebook social network. The fragments constitute a design language for a publication. Each design fragment is a nested mutable sub-layout that has no specific size or shape until after it has been laid out. The layout process balances the space requirements of the fragment's internal contents with its external context in the publication. The mutability of sub-layouts requires that their layout options be kept open until all the other contents that share the same space have been considered. Coping with large numbers of options is one of the greatest challenges in layout automation. Most existing layout methods work by rapidly eliminating design options rather than by keeping options open. A further goal of this publishing system is to confirm that a custom publication can be generated quickly by the described methods. In general, the faster that publications can be created, the greater the opportunities for the technology.

  8. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition efforts have recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable, so that designers can intervene manually. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and design closure issues therefore continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.
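
    At its core, TPL decomposition is 3-coloring of the conflict graph (features closer than the same-mask spacing limit must get different masks), which is NP-hard in general. A small backtracking sketch, not the paper's incremental algorithm, shows both a decomposable case and a native conflict:

```python
def three_color(conflicts, n):
    # conflicts: set of feature-index pairs that cannot share a mask.
    # Returns a list of mask assignments 0/1/2, or None if the layout is
    # not triple-patterning decomposable (manual intervention needed).
    adj = [set() for _ in range(n)]
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)
    colors = [None] * n

    def solve(v):
        if v == n:
            return True
        for c in range(3):
            if all(colors[u] != c for u in adj[v]):
                colors[v] = c
                if solve(v + 1):
                    return True
        colors[v] = None
        return False

    return colors if solve(0) else None

# Four features where 0, 1, 2 are mutually conflicting: three masks suffice.
assignment = three_color({(0, 1), (1, 2), (0, 2), (2, 3)}, 4)
# Four mutually conflicting features (K4): not decomposable with three masks.
k4 = three_color({(a, b) for a in range(4) for b in range(a + 1, 4)}, 4)
```

    Exhaustive backtracking only works on tiny instances; the incremental framework's value is exactly that it avoids re-solving the full chip after each fix.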

  9. Fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics.

    PubMed

    Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J

    2015-05-15

    Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and a specific medium solution (i.e. Microtox(®)) or low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual-wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) while minimizing biomass interference. Dual-wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. In addition, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half-maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L^-1 for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.
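
    The dual-wavelength correction amounts to subtracting the biomass contribution, estimated from the 550 nm channel, from the 405 nm trace before fitting the reduction rate. All numbers below are invented for illustration, including the assumed 405/550 scattering ratio:

```python
import numpy as np

# Hypothetical 10-min kinetic read, one measurement per minute.
t = np.arange(11, dtype=float)
ferri_true = 0.90 - 0.04 * t              # ferricyanide absorbance decays on reduction
a550 = np.full_like(t, 0.30)              # biomass-only scattering at 550 nm
SCATTER_RATIO = 0.45                      # assumed 405 nm / 550 nm scattering ratio
a405 = ferri_true + SCATTER_RATIO * a550  # 405 nm sees ferricyanide + biomass

def ferricyanide_signal(a405, a550, k=SCATTER_RATIO):
    # Dual-wavelength correction: remove the biomass contribution
    # estimated at 550 nm from the 405 nm trace.
    return a405 - k * a550

# Reduction rate (AU/min) from a linear fit of the corrected kinetics.
rate = np.polyfit(t, ferricyanide_signal(a405, a550), 1)[0]
```

    A toxicant slows respiration, so the fitted rate shrinks relative to an unexposed control; the EC50 is read off that dose-response.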

  10. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator.

    PubMed

    Bol, G H; Hissoiny, S; Lagendijk, J J W; Raaymakers, B W

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of the magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions. PMID:22349450
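
    The inverse-optimization core is a non-negative least-squares fit of beamlet weights to the prescribed dose. A generic projected-gradient sketch (not the paper's specific optimizer), with a small random stand-in for the Monte Carlo dose-influence matrix:

```python
import numpy as np

def optimize_weights(D, d, iters=10000, step=None):
    # Projected gradient for min_w ||D w - d||^2 subject to w >= 0,
    # the standard NNLS core of inverse IMRT beamlet-weight optimization.
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L for the quadratic
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        w = np.maximum(0.0, w - step * (D.T @ (D @ w - d)))
    return w

rng = np.random.default_rng(0)
D = rng.random((50, 8))        # dose-influence matrix: voxels x beamlets
w_true = np.abs(rng.random(8)) # hidden "true" beamlet weights
d = D @ w_true                 # prescribed dose
w = optimize_weights(D, d)
```

    Clinical objectives add per-structure weights and dose-volume constraints on top of this least-squares core, and D comes from the GPU Monte Carlo beamlet dose engine.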

  11. Accelerated materials design of fast oxygen ionic conductors based on first principles calculations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Mo, Yifei

    Over the past decades, significant research efforts have been dedicated to seeking fast oxygen ion conductor materials, which have important technological applications in electrochemical devices such as solid oxide fuel cells, oxygen separation membranes, and sensors. Recently, Na0.5Bi0.5TiO3 (NBT) was reported as a new family of fast oxygen ionic conductors. We present a first-principles computational study that aims to understand the O diffusion mechanisms in the NBT material and to design this material with enhanced oxygen ionic conductivity. Using the NBT materials as an example, we demonstrate the computational capability to evaluate the phase stability, chemical stability, and ionic diffusion of ionic conductor materials. We reveal the effects of local atomistic configurations and dopants on oxygen diffusion and identify the intrinsic limiting factors in increasing the ionic conductivity of the NBT materials. Novel doping strategies were predicted and demonstrated by the first principles calculations. In particular, the K-doped NBT compound achieved good phase stability and an order-of-magnitude increase in oxygen ionic conductivity, up to 0.1 S cm^-1 at 900 K, compared to the experimental Mg-doped compositions. Our results provide new avenues for the future design of the NBT materials and demonstrate the accelerated design of new ionic conductor materials based on first principles techniques. This computational methodology and workflow can be applied to the materials design of any (e.g. Li+, Na+) fast ion-conducting materials.

  12. A CFD-based wind solver for a fast response transport and dispersion model

    SciTech Connect

    Gowardhan, Akshay A; Brown, Michael J; Pardyjak, Eric R; Senocak, Inanc

    2010-01-01

    In many cities, ambient air quality is deteriorating, leading to concerns about the health of city inhabitants. In urban areas with narrow streets surrounded by clusters of tall buildings, called street canyons, air pollution from traffic emissions and other sources is difficult to disperse and may accumulate, resulting in high pollutant concentrations. For various situations, including the evacuation of populated areas in the event of an accidental or deliberate release of chemical, biological and radiological agents, it is important that models be developed that produce urban flow fields quickly. For these reasons it has become important to predict the flow field in urban street canyons. Various computational techniques have been used to calculate these flow fields, but these techniques are often computationally intensive. Most fast response models currently in use are at a disadvantage in these cases as they are unable to correlate highly heterogeneous urban structures with the diagnostic parameterizations on which they are based. In this paper, a fast and reasonably accurate computational fluid dynamics (CFD) technique that solves the Navier-Stokes equations for complex urban areas has been developed, called QUIC-CFD (Q-CFD). This technique represents an intermediate balance between fast (on the order of minutes for a several-block problem) and reasonably accurate solutions. The paper details the solution procedure and validates this model for various simple and complex urban geometries.

  13. Key techniques and applications of adaptive growth method for stiffener layout design of plates and shells

    NASA Astrophysics Data System (ADS)

    Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun

    2013-11-01

    The application of the adaptive growth method is limited because several key techniques during the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied, so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, which are comprised of different shell elements and beam elements, are proposed. It is found that the main stiffener layouts resulting from different ground structures are almost the same, but the ground structure comprised of 8-node shell elements and both 3-node and 2-node beam elements results in the clearest stiffener layout, and has good adaptability and low computational cost. An automatic seed selection approach is proposed, based on the selection rules that seeds should be positioned where the structural strain energy is greatest for the minimum compliance problem, and should satisfy a dispersion requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for the stiffener layout design optimization of plates and shells. Typical design examples, including plate and shell structures designed for minimum compliance and maximum buckling stability, are illustrated. In addition, as a practical mechanical structural design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The design results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly deal with stiffener layout design problems for plates and shells with complex geometrical shapes and loading conditions to achieve various design objectives, thus providing a new solution method for engineering structural topology design optimization.
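
    The automatic seed selection rule (highest strain energy first, subject to a minimum spacing between seeds) can be sketched greedily; positions are 1-D and all values invented for illustration:

```python
def select_seeds(strain_energy, positions, n_seeds, min_dist):
    # Greedy automatic seed selection: visit nodes in descending strain
    # energy, skipping candidates closer than min_dist to a chosen seed
    # (the dispersion requirement).
    order = sorted(range(len(strain_energy)), key=lambda i: -strain_energy[i])
    seeds = []
    for i in order:
        if all(abs(positions[i] - positions[j]) >= min_dist for j in seeds):
            seeds.append(i)
        if len(seeds) == n_seeds:
            break
    return seeds

# Hypothetical nodal strain energies and 1-D node positions.
seeds = select_seeds([5, 9, 8, 1, 7], [0, 1, 2, 5, 6], 2, 2)
```

    Stiffeners then grow iteratively from the returned seed nodes, following the strain-energy field.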

  14. Fast plasmid based protein expression analysis in insect cells using an automated SplitGFP screen

    PubMed Central

    Bleckmann, Maren; Schmelz, Stefan; Schinkowski, Christian; Scrima, Andrea

    2016-01-01

    Recombinant protein expression often presents a bottleneck for the production of proteins for use in many areas of animal-cell biotechnology. Difficult-to-express proteins require the generation of numerous expression constructs, where popular prokaryotic screening systems often fail to identify expression of multi-domain or full-length protein constructs. Post-translationally modified mammalian proteins require an alternative host system such as insect cells using the Baculovirus Expression Vector System (BEVS). Unfortunately this is time-, labor-, and cost-intensive. It is clearly desirable to find an automated and miniaturized fast multi-sample screening method for protein expression in such systems. With this in mind, in this paper a high-throughput initial expression screening method is described using an automated Microcultivation system in conjunction with fast plasmid-based transient transfection in insect cells for the efficient generation of protein constructs. The applicability of the system is demonstrated for the difficult-to-express Nucleotide-binding Oligomerization Domain-containing protein 2 (NOD2). To enable detection of proper protein expression, the rather weak plasmid-based expression has been improved by a sensitive inline detection system. Here we present the functionality and application of the sensitive SplitGFP (split green fluorescent protein) detection system in insect cells. The successful expression of constructs is monitored by direct measurement of the fluorescence in the BioLector Microcultivation system. Additionally, we show that the results obtained with our plasmid-based SplitGFP protein expression screen correlate directly to the level of soluble protein produced in BEVS. In conclusion our automated SplitGFP screen outlines a sensitive, fast and reliable method reducing the time and costs required for identifying the optimal expression construct prior to large scale protein production in

  15. Fast model-based X-ray CT reconstruction using spatially nonhomogeneous ICD optimization.

    PubMed

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A; Sauer, Ken D; Hsieh, Jiang

    2011-01-01

    Recent applications of model-based iterative reconstruction (MBIR) algorithms to multislice helical CT reconstructions have shown that MBIR can greatly improve image quality by increasing resolution as well as reducing noise and some artifacts. However, high computational cost and long reconstruction times remain as a barrier to the use of MBIR in practical applications. Among the various iterative methods that have been studied for MBIR, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a fast model-based iterative reconstruction algorithm using spatially nonhomogeneous ICD (NH-ICD) optimization. The NH-ICD algorithm speeds up convergence by focusing computation where it is most needed. The NH-ICD algorithm has a mechanism that adaptively selects voxels for update. First, a voxel selection criterion VSC determines the voxels in greatest need of update. Then a voxel selection algorithm VSA selects the order of successive voxel updates based upon the need for repeated updates of some locations, while retaining characteristics for global convergence. In order to speed up each voxel update, we also propose a fast 1-D optimization algorithm that uses a quadratic substitute function to upper bound the local 1-D objective function, so that a closed form solution can be obtained rather than using a computationally expensive line search algorithm. We examine the performance of the proposed algorithm using several clinical data sets of various anatomies. The experimental results show that the proposed method accelerates the reconstructions by roughly a factor of three on average for typical 3-D multislice geometries.
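
    The quadratic-substitute trick can be shown on a 1-D convex potential with globally bounded curvature: the surrogate's minimizer is available in closed form, so no line search is needed. The potential below is a generic example, not the paper's CT objective:

```python
import math

def f(t):
    # A smooth convex potential with globally bounded curvature: f''(t) <= 1.
    return math.sqrt(1.0 + t * t) - 1.0

def f_prime(t):
    return t / math.sqrt(1.0 + t * t)

CURVATURE_BOUND = 1.0  # any c >= sup f'' gives a valid upper bound

def surrogate_step(x, c=CURVATURE_BOUND):
    # The quadratic substitute q(t) = f(x) + f'(x)(t - x) + (c/2)(t - x)^2
    # upper-bounds f, and its minimizer is the closed form x - f'(x)/c,
    # replacing an expensive 1-D line search.
    return x - f_prime(x) / c

x = 3.0
for _ in range(20):
    x = surrogate_step(x)   # monotonically decreases f, converging to 0
```

    Because each step minimizes an upper bound touching f at the current point, the objective can never increase, which preserves the global convergence guarantees of ICD.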

  16. Fast calculation of computer-generated hologram using run-length encoding based recurrence relation.

    PubMed

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2015-04-20

    Computer-Generated Holograms (CGHs) can be generated by superimposing zoneplates. A zoneplate is a grating that can concentrate incident light to a point. Since a zoneplate has circular symmetry, we previously reported an algorithm that rapidly generates a zoneplate by drawing concentric circles using computer graphics techniques. However, that algorithm required random memory access, which degraded its computational efficiency. In this study, we propose a fast CGH generation algorithm without random memory access, using a run-length encoding (RLE) based recurrence relation. As a result, we reduced the calculation time by 88% compared with that of the previous work.
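
    A sketch of why RLE suits zoneplates: in each row of a binary zoneplate the lit pixels form a handful of contiguous runs, so storing (start, length) pairs turns scattered per-pixel writes into sequential ones. The wavelength, focal length and grid size below are arbitrary illustrative values, and the run extraction here is a plain scan rather than the paper's recurrence relation.

```python
import numpy as np

# Illustrative parameters: wavelength, focal length, grid size, pixel pitch.
wl, f, N, pitch = 532e-9, 0.1, 256, 10e-6
c = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(c, c)
# Binary zoneplate: thresholded phase of a spherical wave (paraxial form).
zone = (np.cos(np.pi * (X**2 + Y**2) / (wl * f)) > 0).astype(np.uint8)

def row_runs(row):
    """Run-length encode one row: list of (start_index, run_length) pairs."""
    d = np.diff(np.concatenate(([0], row, [0])))
    starts = np.flatnonzero(d == 1)
    ends = np.flatnonzero(d == -1)
    return list(zip(starts.tolist(), (ends - starts).tolist()))

runs = [row_runs(r) for r in zone]
# Sequential writes from the runs reproduce the bitmap exactly.
rebuilt = np.zeros_like(zone)
for i, rr in enumerate(runs):
    for s, ln in rr:
        rebuilt[i, s:s + ln] = 1
print(np.array_equal(rebuilt, zone))  # -> True
```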

  17. Fast response hydrogen sensors based on anodic aluminum oxide with pore-widening treatment

    NASA Astrophysics Data System (ADS)

    Wu, Shuanghong; Zhou, Han; Hao, Mengmeng; Wei, Xiongbang; Li, Shibin; Yu, He; Wang, Xiangru; Chen, Zhi

    2016-09-01

    Fast response hydrogen sensors operating at room temperature based on nanoporous palladium (Pd) films supported by treated anodic aluminum oxide (AAO) template have been demonstrated. It was found that the nanoporous Pd film had a quicker and reversible response by a 30-min pore-widening treatment of the AAO template, due to its faster absorption and desorption of hydrogen. We obtained a sensor response time as short as 14 s at 1.4% hydrogen concentration with the 30-min pore-widening treatment of AAO template. The sensor exhibited very good performance at hydrogen concentrations from 0.1% to 2%.

  18. Fast calculation method for computer-generated cylindrical hologram based on wave propagation in spectral domain.

    PubMed

    Jackin, Boaz Jessie; Yatagai, Toyohiko

    2010-12-01

    A fast calculation method for computer generation of cylindrical holograms is proposed. The calculation method is based on wave propagation in the spectral domain in cylindrical coordinates, and is otherwise similar to the angular spectrum of plane waves in Cartesian coordinates. The calculation requires only two FFT operations and hence is much faster. The theoretical background of the calculation method, sampling conditions and simulation results are presented. The generated cylindrical hologram has been tested for reconstruction at different viewing angles and also on planar surfaces.
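
    The planar (Cartesian) angular-spectrum method the abstract refers to already needs just two FFT calls: transform, multiply by a propagation phase, inverse transform; the cylindrical variant follows the same pattern in cylindrical coordinates. Below is a minimal sketch of the planar case with illustrative grid parameters (not the paper's cylindrical formulation).

```python
import numpy as np

def angular_spectrum_propagate(field, wl, dx, z):
    """Propagate a 2-D complex field a distance z via the angular spectrum
    of plane waves: FFT, multiply by exp(i*kz*z), inverse FFT."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # kz is real for propagating components; evanescent terms are clamped.
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wl**2 - FX**2 - FY**2))
    spectrum = np.fft.fft2(field)                        # first FFT
    return np.fft.ifft2(spectrum * np.exp(1j * kz * z))  # second (inverse) FFT

N = 64
field = np.zeros((N, N), dtype=complex)
field[N // 2, N // 2] = 1.0  # point source
out = angular_spectrum_propagate(field, wl=633e-9, dx=10e-6, z=5e-3)
# The transfer function has unit magnitude, so total energy is conserved.
print(round(float(np.sum(np.abs(out)**2)), 6))  # -> 1.0
```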

  19. The fast beam condition monitor BCM1F backend electronics upgraded MicroTCA-based architecture

    NASA Astrophysics Data System (ADS)

    Zagozdzinska, Agnieszka A.; Bell, Alan; Dabrowski, Anne E.; Guthoff, Moritz; Hempel, Maria; Henschel, Hans; Karacheban, Olena; Lange, Wolfgang; Lohmann, Wolfgang; Lokhovitskiy, Arkady; Leonard, Jessica L.; Loos, Robert; Miraglia, Marco; Penno, Marek; Pozniak, Krzysztof T.; Przyborowski, Dominik; Stickland, David; Trapani, Pier Paolo; Romaniuk, Ryszard; Ryjov, Vladimir; Walsh, Roberval

    2014-11-01

    The Beam Radiation Instrumentation and Luminosity Project of the CMS experiment consists of several beam monitoring systems. One system, the upgraded Fast Beam Condition Monitor, is based on 24 single-crystal CVD diamonds with a double-pad sensor metallization and a custom-designed readout. Signals for real-time monitoring are transmitted to the counting room, where they are received and processed by new back-end electronics designed to extract information on LHC collisions, beam-induced background and activation products. The Slow Control Driver is designed for front-end electronics configuration and control. The system architecture and the upgrade status are presented.

  20. Automatic pattern localization across layout database and photolithography mask

    NASA Astrophysics Data System (ADS)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls must be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but it is also mandatory to create the complete metrology job quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the overall performance.

  1. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427

  2. A Fast and Precise Indoor Localization Algorithm Based on an Online Sequential Extreme Learning Machine †

    PubMed Central

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427

  3. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
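
    For orientation, here is a sketch of the line-integral (Siddon-style) system-matrix row that the abstract uses as its baseline: the intersection length of one ray with each pixel, computed from a sorted list of parametric grid crossings. The area-integral model replaces these lengths with beam-pixel intersection areas; the grid, endpoints and function name below are illustrative.

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny):
    """Intersection length of segment p0->p1 with each cell of a unit-pixel
    grid: one row of a line-integral system matrix (Siddon-style walk)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Parametric crossings with every vertical and horizontal grid line.
    alphas = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0) & (a < 1)].tolist())
    alphas = np.unique(alphas)
    lengths = {}
    seg_len = float(np.linalg.norm(d))
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d  # midpoint identifies the cell
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            lengths[(i, j)] = lengths.get((i, j), 0.0) + (a1 - a0) * seg_len
    return lengths

# Horizontal ray through the middle of a 4x4 grid: length 1 in each of 4 cells.
row = ray_pixel_lengths((0.0, 2.5), (4.0, 2.5), 4, 4)
total = sum(row.values())
print(len(row), round(total, 6))  # -> 4 4.0
```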

  4. In-field fast calibration of FOG-based MWD IMU for horizontal drilling

    NASA Astrophysics Data System (ADS)

    Li, Baoguo; Lu, Jiazhen; Xiao, Wenhua; Lin, Tie

    2015-03-01

    A fiber optic gyroscope (FOG)-based measuring-while-drilling (MWD) device for horizontal drilling is developed and an in-field fast calibration method for the MWD inertial measurement unit (IMU) is presented. The IMU consists of a FOG triad and a quartz accelerometer triad. An error model for the inertial sensors is established, and after analysis 12 errors are confirmed as the main error sources for horizontal drilling. A five-rotation-in-level-plane (5RILP) in-field fast calibration scheme for the main errors of the MWD IMU is illustrated. The scheme can be conveniently implemented on an almost level plane, such as a table plane, without any special external equipment. To solve for the calibrated errors, a systematic approach based on specific-force measurement observations is adopted. Observation equations for the IMU errors are derived according to the rotation sequence of the calibration scheme. Simulation results show that the proposed calibration method is effective even if there are 5° errors in level and 10° errors in heading. Calibration experiments using the developed FOG-based MWD equipment on a table plane confirm the validity of the proposed in-field calibration method. Inertial navigation tests show that the actual accuracy of the MWD IMU is improved by 60.6% after in-field calibration.

  5. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-15

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.

  6. Fast Fabrication of Flexible Functional Circuits Based on Liquid Metal Dual-Trans Printing.

    PubMed

    Wang, Qian; Yu, Yang; Yang, Jun; Liu, Jing

    2015-11-25

    A dual-trans method is presented that prints the first functional liquid-metal circuit layout on poly(vinyl chloride) film and then transfers it onto a poly(dimethylsiloxane) substrate through freeze phase-transition processing for the fabrication of a flexible electronic device. A programmable soft electronic band and a temperature-sensing module wirelessly communicate with a mobile phone, demonstrating the efficiency and capability of the method.

  7. Optimal Sensor Layouts in Underwater Locomotory Systems

    NASA Astrophysics Data System (ADS)

    Colvert, Brendan; Kanso, Eva

    2015-11-01

    Retrieving and understanding global flow characteristics from local sensory measurements is a challenging but extremely relevant problem in fields such as defense, robotics, and biomimetics. It is an inverse problem in that the goal is to translate local information into global flow properties. In this talk we present techniques for optimization of sensory layouts within the context of an idealized underwater locomotory system. Using techniques from fluid mechanics and control theory, we show that, under certain conditions, local measurements can inform the submerged body about its orientation relative to the ambient flow, and allow it to recognize local properties of shear flows. We conclude by commenting on the relevance of these findings to underwater navigation in engineered systems and live organisms.

  8. A high-speed readout scheme for fast optical correlation-based pattern recognition

    NASA Astrophysics Data System (ADS)

    McDonald, Gregor J.; Lewis, Meirion F.; Wilson, Rebecca

    2004-12-01

    We describe recent developments to a novel form of hybrid electronic/photonic correlator, which exploits component innovations in both electronics and photonics to provide fast, compact and rugged target recognition, applicable to a wide range of security applications. The system benefits from a low-power, low-volume optical processing core which has the potential to realise man-portable pattern recognition for a wide range of security-based imagery and target databases. In the seminal Vander Lugt correlator the input image is Fourier transformed optically and multiplied optically with the conjugate Fourier transform of a reference pattern; the required correlation function is completed by taking the inverse Fourier transform of the product optically. The correlator described here is similar in principle, but performs the initial Fourier transforms and multiplication electronically, with only the final, most computationally demanding output Fourier transform being performed optically. In this scheme the Fourier transforms of both the input scene and the reference pattern are reduced to a binary phase-only format, where the multiplication process simplifies to a Boolean XOR function. The output of this XOR gate is displayed on a state-of-the-art Fast Bit Plane Spatial Light Modulator (FBPSLM). A novel readout scheme has been developed which overcomes the previous system output bottleneck and for the first time allows correlation frame readout rates capable of matching the inherently fast nature of the SLM. Readout rates of up to ~1 MHz are now possible, exceeding current SLM capabilities and meeting potential medium-term SLM developments promised by SLMs based on novel materials and architectures.
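
    The XOR trick rests on a simple identity: once each Fourier coefficient is reduced to one bit (here, the sign of its real part, a simplification of binary phase-only encoding), pointwise multiplication of the ±1 spectra is exactly XOR on the bits, leaving only one inverse transform for the optical stage. The sketch below checks the identity numerically; the scene, shift and binarisation rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
ref = rng.random((N, N))
scene = np.roll(ref, shift=(5, 12), axis=(0, 1))  # shifted copy of the reference

# One bit per spectral sample: the sign of the real part of each coefficient.
def bits(img):
    return np.real(np.fft.fft2(img)) >= 0

b_scene, b_ref = bits(scene), bits(ref)

# On +/-1 encodings, pointwise multiplication is exactly XOR on the bits.
signs = np.where(b_scene, 1.0, -1.0) * np.where(b_ref, 1.0, -1.0)
xor = np.where(b_scene ^ b_ref, -1.0, 1.0)
print(np.array_equal(signs, xor))  # -> True

# The final stage (optical in the paper) is one inverse Fourier transform.
corr = np.abs(np.fft.ifft2(xor))
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # correlation plane; the peak indicates the candidate match location
```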

  9. Layout as Political Expression: Visual Literacy and the Peruvian Press.

    ERIC Educational Resources Information Center

    Barnhurst, Kevin G.

    Newspaper layout and design studies ignore politics, and most studies of newspaper politics ignore visual design. News layout is generally thought to be a set of neutral, efficient practices. This study suggests that the political position of Peruvian newspapers parallels their visual presentation of terrorism. The liberal "La Republica" covered…

  10. Layout Geometry in Encoding and Retrieval of Spatial Memory

    ERIC Educational Resources Information Center

    Mou, Weimin; Liu, Xianyun; McNamara, Timothy P.

    2009-01-01

    Two experiments investigated whether the spatial reference directions that are used to specify objects' locations in memory can be solely determined by layout geometry. Participants studied a layout of objects from a single viewpoint while their eye movements were recorded. Subsequently, participants used memory to make judgments of relative…

  11. CMOS VLSI Layout and Verification of a SIMD Computer

    NASA Technical Reports Server (NTRS)

    Zheng, Jianqing

    1996-01-01

    A CMOS VLSI layout and verification of a 3 x 3 processor parallel computer has been completed. The layout was done using the MAGIC tool and the verification using HSPICE. Suggestions for expanding the computer into a million processor network are presented. Many problems that might be encountered when implementing a massively parallel computer are discussed.

  12. Simulation Modeling of a Facility Layout in Operations Management Classes

    ERIC Educational Resources Information Center

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  13. Fast axial and lateral displacement estimation in myocardial elastography based on RF signals with predictions.

    PubMed

    Zhang, Yaonan; Sun, Tingting; Teng, Yueyang; Li, Hong; Kang, Yan

    2015-01-01

    Myocardial elastography (ME) is a strain imaging technique used to diagnose myocardial diseases. Axial and lateral displacement calculations are pre-conditions of strain image acquisition in ME. W.N. Lee et al. proposed a normalized cross-correlation (NCC) and recorrelation method to obtain both axial and lateral displacements in ME. However, this method is not noise-resistant and has a high computational cost. This paper proposes a predicted fast NCC algorithm based on W.N. Lee's method, with the additions of sum-table NCC and a displacement prediction algorithm, to obtain efficient and accurate axial and lateral displacements. Compared to experiments based on the NCC and recorrelation method, the results indicate that the proposed NCC method is much faster (predicted fast NCC method, 69.75 s for a 520×260 image; NCC and recorrelation method, 1092.25 s for a 520×260 image) and demonstrates better performance in eliminating decorrelation noise (SNR of the axial and lateral strain using the proposed method, 5.87 and 1.25, respectively; SNR of the axial and lateral strain using the NCC and recorrelation method, 1.48 and 1.09, respectively).
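
    The "sum-table" part of sum-table NCC is the summed-area-table trick: after one cumulative-sum pass, the sum over any sliding window costs four lookups instead of a full re-summation. A full NCC also needs tables of squared values and of the cross term; the sketch below shows just the core window-sum component, with an illustrative image and window size.

```python
import numpy as np

def window_sums(img, h, w):
    """Sum of every h-by-w window via a summed-area table: the 'sum-table'
    trick that removes redundant additions from sliding-window NCC."""
    S = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    # Inclusion-exclusion with four corner lookups per window.
    return S[h:, w:] - S[:-h, w:] - S[h:, :-w] + S[:-h, :-w]

rng = np.random.default_rng(1)
img = rng.random((64, 64))
h = w = 8
fast = window_sums(img, h, w)

# Brute-force check of one window.
i, j = 10, 20
brute = img[i:i + h, j:j + w].sum()
print(abs(fast[i, j] - brute) < 1e-9)  # -> True
```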

  14. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  15. Predictive-based cross line for fast motion estimation in MPEG-4 videos

    NASA Astrophysics Data System (ADS)

    Fang, Hui; Jiang, Jianmin

    2004-05-01

    Block-based motion estimation is widely used in the field of video compression due to its high processing speed and competitive compression efficiency. In the chain of compression operations, however, motion estimation remains the most time-consuming process. As a result, any improvement in fast motion estimation will make practical applications of MPEG techniques more efficient and more sustainable in terms of both processing speed and computing cost. To meet the requirements of real-time compression of videos and image sequences, such as video conferencing, remote video surveillance and video phones, we propose a new search algorithm and achieve fast motion estimation for MPEG compression standards based on existing algorithm developments. To evaluate the proposed algorithm, we adopted MPEG-4 and the prediction line search algorithm as the benchmarks in designing the experiments. Their performances are measured by (i) reconstructed video quality and (ii) processing time. The results reveal that the proposed algorithm provides a competitive alternative to the existing prediction line search algorithm. In comparison with MPEG-4, the proposed algorithm shows significant advantages in terms of processing speed and video quality.

  16. GPU-based ultra-fast dose calculation using a finite size pencil beam model

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.

    2009-10-01

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
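
    The data-parallel structure the abstract exploits is that every (voxel, beamlet) contribution is independent, so dose is a superposition of per-beamlet kernels. The 1-D sketch below uses a Gaussian stand-in kernel and arbitrary weights purely to illustrate that structure; it is not the FSPB model or the GPU implementation itself.

```python
import numpy as np

def fspb_dose(weights, beamlet_centers, grid, sigma=0.5):
    """Dose as a weighted superposition of finite-size pencil-beam kernels.

    Each (voxel, beamlet) pair is independent, which is why this computation
    maps naturally onto one GPU thread per pair (here: one matrix product).
    """
    diffs = grid[:, None] - beamlet_centers[None, :]
    kernels = np.exp(-0.5 * (diffs / sigma) ** 2)  # illustrative 1-D kernel
    return kernels @ weights

grid = np.linspace(0.0, 10.0, 101)
centers = np.array([3.0, 5.0, 7.0])
weights = np.array([1.0, 2.0, 1.0])
dose = fspb_dose(weights, centers, grid)
print(int(np.argmax(dose)))  # -> 50: maximum under the heaviest beamlet (x = 5)
```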

  17. Fast mode decision algorithm for scalable video coding based on luminance coded block pattern

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Jung; Yoo, Jeong-Ju; Hong, Jin-Woo; Suh, Jae-Won

    2013-01-01

    A fast mode decision algorithm is proposed to reduce the computational complexity of the adaptive inter-layer prediction method, a motion estimation algorithm for video compression in scalable video coding (SVC) encoder systems. SVC is standardized as an extension of H.264/AVC to provide multimedia services within variable transport environments and across various terminal systems. SVC supports adaptive inter-mode prediction, which includes not only the temporal prediction modes with varying block sizes but also inter-layer prediction modes based on correlation between the lower-layer information and the current layer. To achieve high coding efficiency, a rate-distortion optimization technique is employed to select the best coding mode and reference frame for each macroblock (MB). As a result, the performance gains of SVC come with increased computational complexity. To overcome this problem, we propose a fast mode decision based on the coded block pattern (CBP) of the 16×16 mode and the reference block with the best CBP. The experimental results in SVC with a combined scalability structure show that the proposed algorithm achieves up to an average 61.65% speed-up factor in encoding time with a negligible bit increment and minimal image quality loss. In addition, experimental results in spatial and quality scalability show that the computational complexity is reduced by about 55.32% and 52.69%, respectively.

  18. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  19. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction

    PubMed Central

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  20. Fast digital envelope detector based on generalized harmonic wavelet transform for BOTDR performance improvement

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Yang, Yuanhong; Yang, Mingwei

    2014-06-01

    We propose a fast digital envelope detector (DED) based on the generalized harmonic wavelet transform to improve the performance of coherent heterodyne Brillouin optical time domain reflectometry. The proposed DED can obtain undistorted envelopes due to the zero phase-shift ideal bandpass filter (BPF) characteristics of the generalized harmonic wavelet (GHW). Its envelope average ability benefits from the passband designing flexibility of the GHW, and its demodulation speed can be accelerated by using a fast algorithm that only analyses signals of interest within the passband of the GHW with reduced computational complexity. The feasibility and advantage of the proposed DED are verified by simulations and experiments. With an optimized bandwidth, Brillouin frequency shift accuracy improvements of 19.4% and 11.14%, as well as envelope demodulation speed increases of 39.1% and 24.9%, are experimentally attained by the proposed DED over Hilbert transform (HT) and Morlet wavelet transform (MWT) based DEDs, respectively. Spatial resolution by the proposed DED is undegraded, which is identical to the undegraded value by HT-DED with an allpass filter characteristic and better than the degraded value by MWT-DED with a Gaussian BPF characteristic.
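
    The zero-phase-shift ideal band-pass idea behind the GHW-based detector can be sketched directly in the frequency domain: keep only the positive-frequency band of interest and take the magnitude of the resulting analytic signal. The carrier, modulation and band edges below are illustrative values chosen to fall on exact FFT bins so the ideal filter is leakage-free; this is a sketch of the filtering idea, not the paper's fast GHW algorithm.

```python
import numpy as np

def ghw_envelope(x, fs, f_lo, f_hi):
    """Envelope via a one-sided ideal band-pass applied in the frequency
    domain: zero phase shift, so the demodulated envelope is undistorted."""
    freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
    spec = np.where((freqs >= f_lo) & (freqs <= f_hi), np.fft.fft(x), 0.0)
    return 2.0 * np.abs(np.fft.ifft(spec))  # analytic-signal magnitude

# AM test signal on exact FFT bins so the ideal filter is leakage-free.
fs, n = 8192.0, 4096
t = np.arange(n) / fs
amp = 1.0 + 0.5 * np.sin(2 * np.pi * 16.0 * t)   # slow envelope
x = amp * np.cos(2 * np.pi * 1024.0 * t)         # carrier
env = ghw_envelope(x, fs, f_lo=900.0, f_hi=1150.0)
err = float(np.max(np.abs(env - amp)))
print(err < 1e-9)  # -> True: the recovered envelope matches exactly
```

    Restricting the analysis to the coefficients inside the passband is also what enables the reduced-complexity fast algorithm mentioned in the abstract.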

  1. A ZnO nanowire-based photo-inverter with pulse-induced fast recovery

    NASA Astrophysics Data System (ADS)

    Ali Raza, Syed Raza; Lee, Young Tack; Hosseini Shokouh, Seyed Hossein; Ha, Ryong; Choi, Heon-Jin; Im, Seongil

    2013-10-01

    We demonstrate a fast response photo-inverter comprised of one transparent gated ZnO nanowire field-effect transistor (FET) and one opaque FET respectively as the driver and load. Under ultraviolet (UV) light the transfer curve of the transparent gate FET shifts to the negative side and so does the voltage transfer curve (VTC) of the inverter. After termination of UV exposure the recovery of photo-induced current takes a long time in general. This persistent photoconductivity (PPC) is due to hole trapping on the surface of ZnO NWs. Here, we used a positive voltage short pulse after UV exposure, for the first time resolving the PPC issue in nanowire-based photo-detectors by accumulating electrons at the ZnO/dielectric interface. We found that a pulse duration as small as 200 ns was sufficient to reach a full recovery to the dark state from the UV induced state, realizing a fast UV detector with a voltage output. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr03801g

  2. TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization

    SciTech Connect

    Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X

    2014-06-15

    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead optimization and degrade the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet the source particle comes from, and deposited doses are stored separately per beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, rough beamlet dose calculations are conducted with only a small number of particles per beamlet; plan optimization then follows, yielding an approximate fluence map. In the second step, more accurate beamlet doses are calculated, with the number of particles sampled for each beamlet proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimization.
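
    The second-round sampling rule described above (particle counts proportional to the first-round beamlet intensities) can be sketched as follows; `allocate_particles` is a hypothetical helper illustrating the idea, not code from the paper:

```python
import numpy as np

def allocate_particles(intensities, total):
    """Split a total particle budget across beamlets in proportion to
    the fluence intensities found in the first optimization round."""
    w = np.asarray(intensities, dtype=float)
    w = w / w.sum()
    # Keep at least one particle per beamlet so no beamlet dose is empty.
    return np.maximum(1, np.rint(w * total).astype(int))
```

    Note that zero-intensity beamlets still receive a token particle here; the abstract does not specify how such beamlets are handled in the actual implementation.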

  3. A fast and low-power microelectromechanical system-based non-volatile memory device

    PubMed Central

    Lee, Sang Wook; Park, Seung Joo; Campbell, Eleanor E. B.; Park, Yung Woo

    2011-01-01

    Several new generation memory devices have been developed to overcome the low performance of conventional silicon-based flash memory. In this study, we demonstrate a novel non-volatile memory design based on the electromechanical motion of a cantilever to provide fast charging and discharging of a floating-gate electrode. The operation is demonstrated by using an electromechanical metal cantilever to charge a floating gate that controls the charge transport through a carbon nanotube field-effect transistor. The set and reset currents are unchanged after more than 11 h of constant operation. Over 500 repeated programming and erasing cycles were demonstrated under atmospheric conditions at room temperature without degradation. Multinary bit programming can be achieved by varying the voltage on the cantilever. The operation speed of the device is faster than that of conventional flash memory, and the power consumption is lower than that of other memory devices. PMID:21364559

  4. Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram.

    PubMed

    Park, Jae-Hyeung; Kim, Seong-Bok; Yeom, Han-Ju; Kim, Hee-Jae; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Ko, Seok-Bum

    2015-12-28

    A fully analytic mesh-based computer generated hologram enables efficient and precise representation of a three-dimensional scene. The conventional method assigns uniform amplitude inside each mesh, resulting in reconstruction of the three-dimensional scene with flat shading. In this paper, we report an extension of the conventional method to achieve continuous shading, where the amplitude in each mesh varies continuously. The proposed method enables continuous shading while maintaining the fully analytic framework of the conventional method without any sacrifice in precision. The proposed method can also be extended to enable fast update of the shading for different illumination directions and ambient-diffuse reflection ratios based on the Phong reflection model. The feasibility of the proposed method is confirmed by numerical and optical reconstruction of the generated hologram.

  5. Fast vaccine design and development based on correlates of protection (COPs)

    PubMed Central

    van Els, Cécile; Mjaaland, Siri; Næss, Lisbeth; Sarkadi, Julia; Gonczol, Eva; Smith Korsholm, Karen; Hansen, Jon; de Jonge, Jørgen; Kersten, Gideon; Warner, Jennifer; Semper, Amanda; Kruiswijk, Corine; Oftung, Fredrik

    2014-01-01

    New and reemerging infectious diseases call for innovative and efficient control strategies, of which fast vaccine design and development represent an important element. In emergency situations, when time is limited, identification and use of correlates of protection (COPs) may play a key role as a strategic tool for accelerated vaccine design, testing, and licensure. We propose that general rules for COP-based vaccine design can be extracted from the existing knowledge of protective immune responses against a large spectrum of relevant viral and bacterial pathogens. Herein, we focus on the applicability of this approach by reviewing the established and upcoming COPs for influenza in the context of traditional and a wide array of new vaccine concepts. The lessons learnt from this field may be applied more generally to COP-based accelerated vaccine design for emerging infections. PMID:25424803

  6. Fast multiscale directional filter bank-based speckle mitigation in gallstone ultrasound images.

    PubMed

    Leavline, Epiphany Jebamalar; Sutha, Shunmugam; Singh, Danasingh Asir Antony Gnana

    2014-02-01

    Speckle noise is a multiplicative type of noise commonly seen in medical and remote sensing images. It gives a granular appearance that degrades the quality of the recorded images. These speckle noise components need to be mitigated before the image is used for further processing and analysis. This paper presents a novel approach for removing granular speckle noise in grayscale images. We used an efficient multiscale image representation scheme named the fast multiscale directional filter bank (FMDFB) along with simple thresholding methods such as VisuShrink for image processing. It is a perfect reconstruction framework that can be used for a wide range of image processing applications because of its directionality and reduced computational complexity. FMDFB-based speckle mitigation is appealing over other traditional multiscale approaches such as wavelets and Contourlets. Our experimental results show that the despeckling performance of the proposed method outperforms wavelet- and Contourlet-based despeckling methods.
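
    The VisuShrink step named above applies the universal threshold of Donoho and Johnstone to subband coefficients; a minimal sketch of that thresholding step (the FMDFB decomposition itself is not reproduced here) is:

```python
import numpy as np

def visushrink_soft(coeffs, sigma):
    """Soft-threshold subband coefficients with the universal
    threshold T = sigma * sqrt(2 ln N), where N is the number of
    coefficients and sigma the noise standard deviation."""
    c = np.asarray(coeffs, dtype=float)
    t = sigma * np.sqrt(2.0 * np.log(c.size))
    # Shrink magnitudes toward zero; anything below T is zeroed out.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

    For speckle, which is multiplicative, thresholding of this kind is typically applied after a log transform makes the noise approximately additive.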

  7. Fast wavelength calibration method for spectrometers based on waveguide comb optical filter

    SciTech Connect

    Yu, Zhengang; Huang, Meizhen; Zou, Ye; Wang, Yang; Sun, Zhenhua; Cao, Zhuangqi

    2015-04-15

    A novel fast wavelength calibration method for spectrometers based on a standard spectrometer and a double metal-cladding waveguide comb optical filter (WCOF) is proposed and demonstrated. By using the WCOF device, a wide-spectrum beam is comb-filtered, which is very suitable for spectrometer wavelength calibration. The influence of the waveguide filter's structural parameters and the beam incident angle on the wavelengths and bandwidths of the comb absorption peaks is also discussed. Verification experiments were carried out in the wavelength range of 200-1100 nm with satisfactory results. Compared with the traditional wavelength calibration method based on discrete sparse atomic emission or absorption lines, the new method has several advantages: abundant calibration data, high accuracy, short calibration time, suitability for production processes, and stability.

  8. Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases

    PubMed Central

    Elgendi, Mohamed

    2013-01-01

    The current state of the art in automatic QRS detection shows high robustness and almost negligible error rates. In return, these methods are usually based on machine-learning approaches that require substantial computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. A typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design. PMID:24066054
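
    The two-moving-average idea can be sketched as below; the window lengths and the offset factor are illustrative placeholders, not the calibrated parameters of the published knowledge base, and the band-pass prefiltering stage is omitted:

```python
import numpy as np

def moving_average(x, w):
    # Centered moving average, same length as the input.
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_qrs(ecg, fs):
    """Two-moving-average QRS detector (illustrative parameters)."""
    y = np.asarray(ecg, dtype=float) ** 2                  # emphasize large deflections
    ma_qrs = moving_average(y, max(1, int(0.097 * fs)))    # ~QRS-width window
    ma_beat = moving_average(y, max(1, int(0.611 * fs)))   # ~beat-width window
    blocks = ma_qrs > ma_beat + 0.08 * y.mean()            # blocks of interest
    peaks, in_block, start = [], False, 0
    for i, flag in enumerate(blocks):
        if flag and not in_block:
            in_block, start = True, i
        elif not flag and in_block:
            in_block = False
            if i - start >= int(0.08 * fs):                # reject too-narrow blocks
                peaks.append(start + int(np.argmax(y[start:i])))
    return peaks
```

    Each contiguous block where the QRS-scale average rises above the beat-scale average plus an offset yields one R-peak candidate, taken as the maximum of the squared signal inside that block.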

  9. Fast SAR image change detection using Bayesian approach based difference image and modified statistical region merging.

    PubMed

    Zhang, Han; Ni, Weiping; Yan, Weidong; Bian, Hui; Wu, Junzheng

    2014-01-01

    A novel fast SAR image change detection method is presented in this paper. Based on a Bayesian approach, the prior information that speckle follows the Nakagami distribution is incorporated into the difference image (DI) generation process. The new DI performs much better than the familiar log-ratio (LR) DI as well as the cumulant-based Kullback-Leibler divergence (CKLD) DI. The statistical region merging (SRM) approach is introduced to the change detection context for the first time. A new clustering procedure with the region variance as the statistical inference variable is presented, tailored to SAR image change detection, with only two classes in the final map: the unchanged and changed classes. The most prominent advantages of the proposed modified SRM (MSRM) method are its ability to cope with noise corruption and its quick implementation. Experimental results show that the proposed method is superior in both change detection accuracy and operational efficiency.
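
    As a point of reference, the familiar log-ratio DI that the paper compares against is straightforward to compute (a sketch of that baseline only; the paper's Bayesian Nakagami-based DI is not reproduced here):

```python
import numpy as np

def log_ratio_di(img1, img2, eps=1e-6):
    """Log-ratio difference image |log(I2 / I1)|; the log turns the
    multiplicative speckle model into an additive one, so unchanged
    areas cluster near zero."""
    i1 = np.asarray(img1, dtype=float) + eps   # eps guards against log(0)
    i2 = np.asarray(img2, dtype=float) + eps
    return np.abs(np.log(i2 / i1))
```
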

  10. Theory of ion transport with fast acid-base equilibrations in bioelectrochemical systems

    NASA Astrophysics Data System (ADS)

    Dykstra, J. E.; Biesheuvel, P. M.; Bruning, H.; Ter Heijne, A.

    2014-07-01

    Bioelectrochemical systems recover valuable components and energy in the form of hydrogen or electricity from aqueous organic streams. We derive a one-dimensional steady-state model for ion transport in a bioelectrochemical system, with the ions subject to diffusional and electrical forces. Since most of the ionic species can undergo acid-base reactions, ion transport is combined in our model with infinitely fast ion acid-base equilibrations. The model describes the current-induced ammonia evaporation and recovery at the cathode side of a bioelectrochemical system that runs on an organic stream containing ammonium ions. We identify that the rate of ammonia evaporation depends not only on the current but also on the flow rate of gas in the cathode chamber, the diffusion of ammonia from the cathode back into the anode chamber through the ion exchange membrane placed in between, and the membrane charge density.
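
    The diffusional and electrical forces mentioned above are conventionally combined in the Nernst-Planck flux expression; a standard form (not quoted from the paper) is:

```latex
J_i = -D_i \left( \frac{\mathrm{d}c_i}{\mathrm{d}x} + \frac{z_i F}{R T}\, c_i \frac{\mathrm{d}V}{\mathrm{d}x} \right)
```

    where $J_i$, $D_i$, $c_i$ and $z_i$ are the flux, diffusion coefficient, concentration and valence of ion $i$, and $V$ is the electric potential. In a model of this kind, fast acid-base equilibrations add algebraic constraints (e.g. NH4+ <=> NH3 + H+) coupling the species concentrations at every position.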

  11. Fast SAR Image Change Detection Using Bayesian Approach Based Difference Image and Modified Statistical Region Merging

    PubMed Central

    Ni, Weiping; Yan, Weidong; Bian, Hui; Wu, Junzheng

    2014-01-01

    A novel fast SAR image change detection method is presented in this paper. Based on a Bayesian approach, the prior information that speckle follows the Nakagami distribution is incorporated into the difference image (DI) generation process. The new DI performs much better than the familiar log-ratio (LR) DI as well as the cumulant-based Kullback-Leibler divergence (CKLD) DI. The statistical region merging (SRM) approach is introduced to the change detection context for the first time. A new clustering procedure with the region variance as the statistical inference variable is presented, tailored to SAR image change detection, with only two classes in the final map: the unchanged and changed classes. The most prominent advantages of the proposed modified SRM (MSRM) method are its ability to cope with noise corruption and its quick implementation. Experimental results show that the proposed method is superior in both change detection accuracy and operational efficiency. PMID:25258740

  12. Theory of ion transport with fast acid-base equilibrations in bioelectrochemical systems.

    PubMed

    Dykstra, J E; Biesheuvel, P M; Bruning, H; Ter Heijne, A

    2014-07-01

    Bioelectrochemical systems recover valuable components and energy in the form of hydrogen or electricity from aqueous organic streams. We derive a one-dimensional steady-state model for ion transport in a bioelectrochemical system, with the ions subject to diffusional and electrical forces. Since most of the ionic species can undergo acid-base reactions, ion transport is combined in our model with infinitely fast ion acid-base equilibrations. The model describes the current-induced ammonia evaporation and recovery at the cathode side of a bioelectrochemical system that runs on an organic stream containing ammonium ions. We identify that the rate of ammonia evaporation depends not only on the current but also on the flow rate of gas in the cathode chamber, the diffusion of ammonia from the cathode back into the anode chamber through the ion exchange membrane placed in between, and the membrane charge density. PMID:25122405

  13. Fast multiscale directional filter bank-based speckle mitigation in gallstone ultrasound images.

    PubMed

    Leavline, Epiphany Jebamalar; Sutha, Shunmugam; Singh, Danasingh Asir Antony Gnana

    2014-02-01

    Speckle noise is a multiplicative type of noise commonly seen in medical and remote sensing images. It gives a granular appearance that degrades the quality of the recorded images. These speckle noise components need to be mitigated before the image is used for further processing and analysis. This paper presents a novel approach for removing granular speckle noise in grayscale images. We used an efficient multiscale image representation scheme named the fast multiscale directional filter bank (FMDFB) along with simple thresholding methods such as VisuShrink for image processing. It is a perfect reconstruction framework that can be used for a wide range of image processing applications because of its directionality and reduced computational complexity. FMDFB-based speckle mitigation is appealing over other traditional multiscale approaches such as wavelets and Contourlets. Our experimental results show that the despeckling performance of the proposed method outperforms wavelet- and Contourlet-based despeckling methods. PMID:24562027

  14. Fast intensity-modulated arc therapy based on 2-step beam segmentation

    SciTech Connect

    Bratengeier, Klaus; Gainey, Mark; Sauer, Otto A.; Richter, Anne; Flentje, Michael

    2011-01-15

    Purpose: Single- or few-arc intensity-modulated arc therapy (IMAT) is intended to be a time-saving irradiation method, potentially replacing classical intensity-modulated radiotherapy (IMRT). The aim of this work was to evaluate the quality of different IMAT methods with the potential for fast delivery, which also have the possibility of adapting to the daily shape of the target volume. Methods: A planning study was performed. Novel double and triple IMAT techniques based on geometrical analysis of the target-organ at risk geometry (2-step IMAT) were evaluated. They were compared to step-and-shoot IMRT reference plans generated using direct machine parameter optimization (DMPO). Volumetric arc (VMAT) plans from commercial preclinical software (SMARTARC) were used as an additional benchmark to classify the quality of the novel techniques. Four cases with concave planning target volumes (PTV) with one dominating organ at risk (OAR), viz., the PTV/OAR combination of the ESTRO Quasimodo phantom, breast/lung, spine metastasis/spinal cord, and prostate/rectum, were used for the study. The composite objective value (COV) and other parameters representing the plan quality were studied. Results: The novel 2-step IMAT techniques with geometry-based segment definition were as good as or better than DMPO and were superior to the SMARTARC VMAT techniques. For the spine metastasis, the quality measured by the COV differed only by 3%, whereas the COV of the 2-step IMAT for the other three cases decreased by a factor of 1.4-2.4 with respect to the reference plans. Conclusions: Rotational techniques based on geometrical analysis of the optimization problem (2-step IMAT) provide similar or better plan quality than DMPO or the research version of the SMARTARC VMAT variants. The results justify pursuing the goal of fast IMAT adaptation based on 2-step IMAT techniques.

  15. Web tool for worst-case assessment of aberration effects in printing a layout

    NASA Astrophysics Data System (ADS)

    Gennari, Frank E.; Madahar, Sachan; Neureuther, Andrew R.

    2003-06-01

    A web-based tool is presented that analyzes the worst-case effect of lens aberrations on projection printed layouts. These effects are important to the designer since they can be half as large as those of OPC. They can easily be detected by scanning through the layout and matching the inverse Fourier transform of the aberration function to the local layout geometry at each location of interest. The software system for detecting and quantifying these effects is based on a client/server model, where the user interface runs on the client side as a Java applet and the server has access to the binaries and performs all of the heavy numerical processing. The online system provides direct access to this lithography tool, allowing the user to create custom aberration patterns with Zernike polynomials and input custom mask layouts in either CIF or GDS II formats. As a result of the simulation run, the user is provided with a JPEG image of the match results as well as a text file listing match statistics including coordinate locations of the best matches and the match factors.

  16. Evaluation of Carrying Capacity Land-Based Layout to Mitigate Flood Risk (Case Study in Tempuran Floodplain, Ponorogo Regency)

    NASA Astrophysics Data System (ADS)

    Lusiana, N.

    2013-12-01

    Floods have frequently hit Indonesia and have had greater negative impacts. In Java, both the area affected by flooding and the amount of damage caused by floods have increased. At least five factors affect flooding in Indonesia, including rainfall, reduced retention capacity of the watershed, erroneous design of river channel development, silting-up of the river, and erroneous regional layout. The level of disaster risk can be evaluated based on the extent of the threat and the susceptibility of a region. One method for risk assessment is Geographical Information System (GIS)-based mapping. The objectives of this research are: 1) evaluating current flood risk in susceptible areas, 2) applying supported land-based layout as an effort to mitigate flood risk, and 3) evaluating flood risk for the period to 2031 in the Tempuran floodplain of Ponorogo Regency. Results show that the area categorized as high risk covers 104.6 ha (1.2%), moderate risk covers 2512.9 ha (28.4%), low risk covers 3140.8 ha (35.5%), and the lowest risk covers 3096.1 ha (34.9%). Using the Regional Layout Design for the years 2011-2031, the high-risk area covers 67.9 ha (0.8%), moderate risk covers 3033 ha (34.3%), low risk covers 2770.8 ha (31.3%), and the lowest risk covers 2982.6 ha (34%). Based on supported land suitability, the high-risk area is only 2.9 ha (0.1%), moderate risk covers 426.1 ha (4.8%), low risk covers 4207.4 ha (47.5%), and the lowest risk covers 4218 ha (47.6%). Flood risk can be mitigated by applying supported land-based layout, as shown by the reduced high-risk area and the fact that >90% of the areas are categorized as low or lowest risk of disaster. Keywords: Carrying Capacity, Land Capacity, Flood Risk

  17. A fast continuous magnetic field measurement system based on digital signal processors

    SciTech Connect

    Velev, G.V.; Carcagno, R.; DiMarco, J.; Kotelnikov, S.; Lamm, M.; Makulski, A.; Maroussov, V.; Nehring, R.; Nogiec, J.; Orris, D.; Poukhov, O.; Prakoshyn, F.; Schlabach, P.; Tompkins, J.C.; /Fermilab

    2005-09-01

    In order to study dynamic effects in accelerator magnets, such as the decay of the magnetic field during the dwell at injection and the rapid so-called "snapback" during the first few seconds of the resumption of the energy ramp, a fast continuous harmonics measurement system was required. A new magnetic field measurement system, based on the use of digital signal processors (DSP) and analog-to-digital (A/D) converters, was developed and prototyped at Fermilab. This system uses Pentek 6102 16-bit A/D converters and the Pentek 4288 DSP board with the SHARC ADSP-2106 family digital signal processor. It was designed to acquire multiple channels of data with a wide dynamic range of input signals, which are typically generated by a rotating coil probe. Data acquisition is performed under an RTOS, whereas processing and visualization are performed on a host computer. Firmware code was developed for the DSP to perform fast continuous readout of the A/D FIFO memory and integration over specified intervals, synchronized to the probe's rotation in the magnetic field. C, C++ and Java code was written to control the data acquisition devices and to process a continuous stream of data. The paper summarizes the characteristics of the system and presents the results of initial tests and measurements.

  18. Fast single photon avalanche photodiode-based time-resolved diffuse optical tomography scanner

    PubMed Central

    Mu, Ying; Niedre, Mark

    2015-01-01

    Resolution in diffuse optical tomography (DOT) is a persistent problem and is primarily limited by the high degree of light scatter in biological tissue. We showed previously that the reduction in photon scatter between a source and detector pair at early time points following a laser pulse in time-resolved DOT is highly dependent on the temporal response of the instrument. To this end, we developed a new single-photon avalanche photodiode (SPAD) based time-resolved DOT scanner. This instrument uses an array of fast SPADs, a femtosecond titanium-sapphire laser, and single-photon counting electronics. In combination, the overall instrument temporal impulse response function width was 59 ps. In this paper, we report the design of this instrument and validate its operation in symmetrical and irregularly shaped optical phantoms of approximately small-animal size. We were able to accurately reconstruct the size and position of up to 4 absorbing inclusions, with increasing image quality at earlier time windows. We attribute these results primarily to the rapid response time of our instrument. These data illustrate the potential utility of fast SPAD detectors in time-resolved DOT. PMID:26417526

  19. Understanding and eliminating the fast creep problem in Fe-based superconductors

    NASA Astrophysics Data System (ADS)

    Civale, Leonardo; Eley, Serena; Maiorov, Boris; Miura, Masashi

    One surprising characteristic of Fe-based superconductors is that they exhibit flux creep rates (S) as large as, or larger than, those found in oxide high temperature superconductors (HTS). This very fast vortex dynamics appears to be inconsistent with the estimate of the influence of the thermal fluctuations as quantified by the Ginzburg number (Gi), which measures the ratio of the thermal energy to the condensation energy in an elemental superconducting volume. In particular, compounds of the AFe2As2 family ("122") have Gi ~ 10^-5 to 10^-4, so S could be expected to lie between that of low-Tc materials (where typically Gi ~ 10^-8) and HTS such as YBa2Cu3O7 (Gi ~ 10^-2), as indeed occurs in other superconductors with intermediate fluctuations, such as MgB2 (Gi ~ 10^-6 to 10^-4). We have found the solution to this puzzle: the fast creep rates in 122 compounds are due to non-optimized pinning landscapes. Initial evidence comes from our previous studies showing that the introduction of additional disorder by irradiation decreases creep significantly in 122 single crystals, although still remaining well above the ideal limit. We now have new evidence from 122 thin films demonstrating that S can be reduced to the lower limit set by Gi by appropriate engineering of the pinning landscape.

  20. Fast multichannel astronomical photometer based on silicon photo multipliers mounted at the Telescopio Nazionale Galileo

    NASA Astrophysics Data System (ADS)

    Ambrosino, Filippo; Meddi, Franco; Rossi, Corinne; Sclavi, Silvia; Nesci, Roberto; Bruni, Ivan; Ghedina, Adriano; Riverol, Luis; Di Fabrizio, Luca

    2014-07-01

    The realization of low-cost instruments with high technical performance is a goal that deserves effort in an epoch of fast technological development. Such instruments can be easily reproduced and therefore allow new research programs to be opened at several observatories. We realized a fast optical photometer based on SiPM (Silicon Photo Multiplier) technology, using commercially available modules. Using low-cost components, we developed a custom electronic chain to extract the signal produced by a commercial MPPC (Multi Pixel Photon Counter) module produced by Hamamatsu Photonics to obtain sub-millisecond sampling of the light curve of astronomical sources (typically pulsars). We built a compact mechanical interface to mount the MPPC at the focal plane of the TNG (Telescopio Nazionale Galileo), using the space available for the slits of the LRS (Low Resolution Spectrograph). In February 2014 we observed the Crab pulsar with the TNG with our prototype photometer, deriving its period and the shape of its light curve, in very good agreement with results obtained in the past with much more expensive instruments. After the successful run at the telescope, we describe here the lessons learned and the ideas that emerged for optimizing this instrument and making it more versatile.

  1. Fast Coalescent-Based Computation of Local Branch Support from Quartet Frequencies

    PubMed Central

    Sayyari, Erfan; Mirarab, Siavash

    2016-01-01

    Species tree reconstruction is complicated by effects of incomplete lineage sorting, commonly modeled by the multi-species coalescent model (MSC). While there has been substantial progress in developing methods that estimate a species tree given a collection of gene trees, less attention has been paid to fast and accurate methods of quantifying support. In this article, we propose a fast algorithm to compute quartet-based support for each branch of a given species tree with regard to a given set of gene trees. We then show how the quartet support can be used in the context of the MSC to compute (1) the local posterior probability (PP) that the branch is in the species tree and (2) the length of the branch in coalescent units. We evaluate the precision and recall of the local PP on a wide set of simulated and biological datasets, and show that it has very high precision and improved recall compared with multi-locus bootstrapping. The estimated branch lengths are highly accurate when gene tree estimation error is low, but are underestimated when gene tree estimation error increases. Computation of both the branch length and local PP is implemented as new features in ASTRAL. PMID:27189547
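
    The branch length in coalescent units follows from the MSC expectation that the dominant quartet topology appears with frequency f = 1 - (2/3)e^{-d}; inverting this gives the estimator sketched below (a sketch of the standard relation, not the ASTRAL implementation):

```python
import math

def coalescent_branch_length(f):
    """Branch length d (coalescent units) from the frequency f of the
    dominant quartet topology, using f = 1 - (2/3) exp(-d)."""
    if f <= 1.0 / 3.0:
        return 0.0        # at or below the random (polytomy) expectation
    if f >= 1.0:
        return math.inf   # all quartets agree: length unbounded
    return -math.log(1.5 * (1.0 - f))
```

    The same quartet frequencies also feed the local posterior probability computation, whose exact form is given in the paper.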

  2. Particle-based labeling: Fast point-feature labeling without obscuring other visual features.

    PubMed

    Luboschik, Martin; Schumann, Heidrun; Cords, Hilko

    2008-01-01

    In many information visualization techniques, labels are essential for communicating the visualized data. To preserve the expressiveness of the visual representation, a placed label should neither occlude other labels nor the visual representatives (e.g., icons, lines) that communicate crucial information. Optimal, non-overlapping labeling is an NP-hard problem. Thus, only a few approaches achieve fast non-overlapping labeling in highly interactive scenarios like information visualization. These approaches generally target the point-feature label placement (PFLP) problem, solving only label-label conflicts. This paper presents a new, fast, robust and flexible 2D labeling approach for the PFLP problem that additionally respects other visual elements and the visual extent of labeled features. The results (number of placed labels, processing time) of our particle-based method compare favorably to those of existing techniques. Although the esthetic quality of non-real-time approaches may not be achieved with our method, it complies with practical demands and thus supports the interactive exploration of information spaces. In contrast to known adjacent techniques, the flexibility of our technique enables labeling of dense point clouds through the use of non-occluding distant labels. Our approach is independent of the underlying visualization technique, which enables us to demonstrate the application of our labeling method within different information visualization scenarios.

  3. A thermodynamically based definition of fast versus slow heating in secondary explosives

    NASA Astrophysics Data System (ADS)

    Henson, Bryan; Smilowitz, Laura

    2013-06-01

    The thermal response of energetic materials is often categorized according to the rate of heating as either fast or slow, e.g. slow cook-off. Such categorizations have most often followed some operational rationale, without a material-based definition. We have spent several years demonstrating that for the energetic material octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) a single mechanism of thermal response reproduces times to ignition independent of the rate or means of heating over the entire range of thermal response. HMX is unique in that bulk melting is rarely observed in either thermal ignition or combustion. We have recently discovered a means of expressing this mechanism for HMX in a reduced form applicable to many secondary explosives. We will show that with this mechanism a natural definition of fast versus slow rates of heating emerges, related to the rate of melting, and we use this to illustrate why HMX does not exhibit melting, why a number of other secondary explosives do, and why the two separate categories are required.

  4. Fast single photon avalanche photodiode-based time-resolved diffuse optical tomography scanner.

    PubMed

    Mu, Ying; Niedre, Mark

    2015-09-01

    Resolution in diffuse optical tomography (DOT) is a persistent problem and is primarily limited by the high degree of light scatter in biological tissue. We showed previously that the reduction in photon scatter between a source and detector pair at early time points following a laser pulse in time-resolved DOT is highly dependent on the temporal response of the instrument. To this end, we developed a new single-photon avalanche photodiode (SPAD) based time-resolved DOT scanner. This instrument uses an array of fast SPADs, a femtosecond titanium-sapphire laser, and single-photon counting electronics. In combination, the overall instrument temporal impulse response function width was 59 ps. In this paper, we report the design of this instrument and validate its operation in symmetrical and irregularly shaped optical phantoms of approximately small-animal size. We were able to accurately reconstruct the size and position of up to 4 absorbing inclusions, with increasing image quality at earlier time windows. We attribute these results primarily to the rapid response time of our instrument. These data illustrate the potential utility of fast SPAD detectors in time-resolved DOT.

  5. Development of a fast radiation detector based on barium fluoride scintillation crystal

    SciTech Connect

    Han, Hetong; Zhang, Zichuan; Weng, Xiufeng; Liu, Junhong; Zhang, Kan; Li, Gang; Guan, Xingyin

    2013-07-15

    Barium fluoride (BaF{sub 2}) is an inorganic scintillation material used for the detection of X/gamma radiation due to its relatively high density, effective atomic number, radiation hardness, and high luminescence. BaF{sub 2} has a potential capacity to be used in gamma ray timing experiments due to its prompt decay emission components. It is known that the light output from BaF{sub 2} has three decay components: two prompt components at approximately 195 nm and 220 nm with decay constants around 600-800 ps, and a more intense, slow component at approximately 310 nm with a decay constant around 630 ns, which hinders fast timing experiments. We report here the development of a fast radiation detector based on a BaF{sub 2} scintillation crystal employing a special optical filter device, a multiple-reflection multi-path ultraviolet short-wavelength-pass light guide (MRMP short-pass filter) using a selective reflection technique, with which the intensity of the slow component is reduced to less than 1%. The methods used for this study provide a novel way to design radiation detectors utilizing scintillation crystals with several emission bands.

  6. Fast Coalescent-Based Computation of Local Branch Support from Quartet Frequencies.

    PubMed

    Sayyari, Erfan; Mirarab, Siavash

    2016-07-01

    Species tree reconstruction is complicated by effects of incomplete lineage sorting, commonly modeled by the multi-species coalescent model (MSC). While there has been substantial progress in developing methods that estimate a species tree given a collection of gene trees, less attention has been paid to fast and accurate methods of quantifying support. In this article, we propose a fast algorithm to compute quartet-based support for each branch of a given species tree with regard to a given set of gene trees. We then show how the quartet support can be used in the context of the MSC to compute (1) the local posterior probability (PP) that the branch is in the species tree and (2) the length of the branch in coalescent units. We evaluate the precision and recall of the local PP on a wide set of simulated and biological datasets, and show that it has very high precision and improved recall compared with multi-locus bootstrapping. The estimated branch lengths are highly accurate when gene tree estimation error is low, but are underestimated when gene tree estimation error increases. Computation of both the branch length and local PP is implemented as new features in ASTRAL. PMID:27189547
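    The relationship between quartet frequencies and coalescent branch lengths can be illustrated with a toy calculation (a simplified method-of-moments sketch, not ASTRAL's actual local-PP computation; the function name and counts below are hypothetical):

    ```python
    import math

    def quartet_branch_stats(n_main, n_alt1, n_alt2):
        """Toy estimate of a species-tree branch length in coalescent units
        from counts of the three quartet topologies around that branch.
        Under the MSC, the species-tree topology has expected frequency
        q = 1 - (2/3) * exp(-d) for a branch of length d."""
        m = n_main + n_alt1 + n_alt2
        q = n_main / m
        if q <= 1.0 / 3.0:
            d = 0.0  # at or below the random expectation: no signal
        else:
            d = -math.log(1.5 * (1.0 - q))  # invert q = 1 - (2/3)e^{-d}
        return q, d

    # 80 of 100 gene-tree quartets agree with the species tree:
    q, d = quartet_branch_stats(80, 12, 8)
    ```

    A high agreement frequency thus translates into a long branch in coalescent units; the paper's method additionally places a posterior distribution over the branch length to obtain the local PP.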

  7. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    Context. Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. Aims: We present cygrid, a library module for the general purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. Methods: The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. Results: We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n), and we analyze the performance gain achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Conclusions: Cygrid is a very fast and versatile gridding library that significantly outperforms other third-party Python modules, such as the linear and cubic spline interpolation provided by SciPy. https://github.com/bwinkel/cygrid
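    The core principle of convolution-based gridding can be sketched in a few lines (a naive version without cygrid's HEALPix lookup-table acceleration or parallelization; the function name and Gaussian kernel are illustrative assumptions):

    ```python
    import numpy as np

    def grid_data(lon, lat, values, grid_lon, grid_lat, kernel_sigma):
        """Naive convolution-based gridding: each output cell is a
        kernel-weighted average of the input samples. cygrid restricts the
        kernel evaluation to nearby samples via HEALPix lookup tables;
        here we simply evaluate it everywhere."""
        out = np.zeros((len(grid_lat), len(grid_lon)))
        wsum = np.zeros_like(out)
        for x, y, v in zip(lon, lat, values):
            # Gaussian kernel weight of this sample for every grid cell
            w = np.exp(-((grid_lon[None, :] - x) ** 2 +
                         (grid_lat[:, None] - y) ** 2) / (2 * kernel_sigma ** 2))
            out += w * v
            wsum += w
        return out / np.maximum(wsum, 1e-300)
    ```

    Because each cell is a weighted average, gridding a constant field returns that constant everywhere, which makes a convenient sanity check.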

  8. PCM-Based Durable Write Cache for Fast Disk I/O

    SciTech Connect

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick; Li, Dong; Vetter, Jeffrey S; Yu, Weikuan

    2012-01-01

    Flash-based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
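    The abstract does not specify HALO's internals; the following is only a generic sketch of the underlying idea of hash-bucketing random writes by disk region so that each flush issues sequential I/O (the class name, region size, and eviction policy are all assumptions, not the paper's algorithm):

    ```python
    from collections import defaultdict

    class RegionWriteCache:
        """Hypothetical hash-based write cache: random block writes are
        buffered in buckets keyed by disk region, then flushed in sorted
        order so the disk sees sequential writes within each region."""
        def __init__(self, region_size=1024, capacity=8):
            self.region_size = region_size
            self.capacity = capacity
            self.buckets = defaultdict(dict)  # region -> {block: data}
            self.flushed = []                 # simulated disk write log

        def write(self, block, data):
            self.buckets[block // self.region_size][block] = data
            if sum(len(b) for b in self.buckets.values()) >= self.capacity:
                self.flush()

        def flush(self):
            for region in sorted(self.buckets):
                for block in sorted(self.buckets[region]):  # sequential order
                    self.flushed.append((block, self.buckets[region][block]))
            self.buckets.clear()
    ```

    The point of the bucketing is visible in the flush log: blocks written in random order reach the disk in ascending order.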

  9. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data stream clustering. They can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Therefore, density-based clustering algorithms are a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.

  10. Fast hybrid CPU- and GPU-based CT reconstruction algorithm using air skipping technique.

    PubMed

    Lee, Byeonghun; Lee, Ho; Shin, Yeong Gil

    2010-01-01

    This paper presents a fast hybrid CPU- and GPU-based CT reconstruction algorithm that reduces the amount of back-projection operations using air skipping involving polygon clipping. The algorithm easily and rapidly selects air areas, which have significantly higher contrast in each projection image, by applying the K-means clustering method on the CPU, and then generates boundary tables for verifying the valid region using the segmented air areas. Based on these boundary tables for each projection image, a clipped polygon that indicates the active region during back-projection on the GPU is determined on each volume slice. This polygon clipping process makes it possible to back-project a smaller number of voxels, which leads to a faster GPU-based reconstruction method. This approach has been applied to a clinical data set and Shepp-Logan phantom data sets having various ratios of air region for quantitative and qualitative comparison and analysis of our and conventional GPU-based reconstruction methods. The algorithm has been proved to halve computational time without losing any diagnostic information, compared to conventional GPU-based approaches.
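    Reduced to one dimension, the air-skipping idea amounts to building a boundary table per projection and restricting back-projection to the active interval (a simplified sketch; the paper's method segments air with K-means and clips polygons per volume slice on the GPU, and the threshold here is an assumption):

    ```python
    import numpy as np

    def airskip_bounds(projection, air_threshold):
        """Boundary table for one projection row: the first and last
        detector indices whose value exceeds the air threshold.
        Back-projection can then be restricted to this interval."""
        active = np.nonzero(projection > air_threshold)[0]
        if active.size == 0:
            return None  # the whole projection is air: skip it entirely
        return int(active[0]), int(active[-1])
    ```

    A reconstruction loop would then iterate only over `range(lo, hi + 1)` per row instead of the full detector width, which is the source of the speedup.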

  11. A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream

    PubMed Central

    Ying Wah, Teh

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data stream clustering. They can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Therefore, density-based clustering algorithms are a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets. PMID:25110753

  12. Fast tracking based on local histogram of oriented gradient and dual detection

    NASA Astrophysics Data System (ADS)

    Shi, Huan; Kai; Cheng, Fei; Ding, Wenwen; Zhang, Baijian

    2016-05-01

    Visual tracking is important in computer vision. Although many visual tracking algorithms have been proposed, problems such as occlusion and frame rate still need to be solved. To address these problems, this paper proposes a novel method based on compressive tracking. First, we declare that an occlusion has happened if the classifier response to the image features falls below a fixed threshold. Second, we mark the occluded image and record the occlusion region. In the next frame, we test both the classifier and the marked image. This algorithm keeps the tracking fast, and its results in handling occlusion are much better than those of other algorithms, especially compressive tracking.

  13. Fast approach to infrared image restoration based on shrinkage functions calibration

    NASA Astrophysics Data System (ADS)

    Zhang, Chengshuo; Shi, Zelin; Xu, Baoshu; Feng, Bin

    2016-05-01

    High-quality image restoration in real time is a challenge for infrared imaging systems. We present a fast approach to infrared image restoration based on shrinkage function calibration. Rather than directly modeling the prior of sharp images to obtain the shrinkage functions, we calibrate them for restoration directly by using the acquirable sharp and blurred image pairs from the same infrared imaging system. The calibration method is employed to minimize the sum of squared errors between sharp images and images restored from the blurred images. Our restoration algorithm is noniterative and its shrinkage functions are stored in look-up tables, so a pipelined architecture can operate in real time. We demonstrate the effectiveness of our approach by testing its quantitative performance in simulation experiments and its qualitative performance on a developed wavefront coding infrared imaging system.

  14. A fast image retrieval method based on SVM and imbalanced samples in filtering multimedia message spam

    NASA Astrophysics Data System (ADS)

    Chen, Zhang; Peng, Zhenming; Peng, Lingbing; Liao, Dongyi; He, Xin

    2011-11-01

    With the rapid development of the Multimedia Messaging Service (MMS), filtering Multimedia Message (MM) spam effectively in real time has become an urgent task. Since most MMs contain images or videos, this paper presents an image-retrieval-based method for filtering MM spam. The detection method is a combination of skin-color detection, texture detection, and face detection, and the classifier for this imbalanced problem is a very fast multi-classifier combining a support vector machine (SVM) with a unilateral binary decision tree. Experiments on three test sets show that the proposed method is effective, with an interception rate of up to 60% and an average detection time of less than 1 second per image.

  15. Wavelet-based fast time-resolved magnetic sensing with electronic spins in diamond

    NASA Astrophysics Data System (ADS)

    Xu, Nanyang; Jiang, Fengjian; Tian, Yu; Ye, Jianfeng; Shi, Fazhan; Lv, Haijiang; Wang, Ya; Wrachtrup, Jörg; Du, Jiangfeng

    2016-04-01

    Time-resolved magnetic sensing is of great importance from fundamental studies to applications in the physical and biological sciences. Recently, the nitrogen-vacancy defect center in diamond has been developed as a promising sensor of magnetic fields under ambient conditions. However, methods to reconstruct time-resolved magnetic fields with high sensitivity are not yet fully developed. Here, we propose and demonstrate a sensing method based on spin echo and the Haar wavelet transform. Our method is exponentially faster than existing methods at reconstructing time-resolved magnetic fields, with comparable sensitivity. It is also easier to implement in experiments. Furthermore, the wavelet's unique features enable our method to extract information from the whole signal with only part of the measuring sequences. We then explore this feature for fast detection of simulated nerve impulses. These results will be useful for time-resolved magnetic sensing with quantum probes at the nanoscale.
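    The Haar basis underlying such a scheme can be sketched as follows (only the transform pair itself; in the actual experiment each coefficient is measured with a spin-echo sequence rather than computed from sampled data). The sparsity of piecewise-constant signals in this basis is what enables reconstruction from fewer measurements:

    ```python
    import numpy as np

    def haar_transform(x):
        """Orthonormal Haar wavelet transform of a length-2^k signal."""
        out = np.asarray(x, dtype=float).copy()
        n = len(out)
        while n > 1:
            half = n // 2
            even, odd = out[:n:2].copy(), out[1:n:2].copy()
            out[:half] = (even + odd) / np.sqrt(2)   # approximations
            out[half:n] = (even - odd) / np.sqrt(2)  # details
            n = half
        return out

    def haar_inverse(c):
        """Inverse of haar_transform."""
        c = np.asarray(c, dtype=float).copy()
        n = 2
        while n <= len(c):
            half = n // 2
            even = (c[:half] + c[half:n]) / np.sqrt(2)
            odd = (c[:half] - c[half:n]) / np.sqrt(2)
            c[:n:2], c[1:n:2] = even, odd
            n *= 2
        return c
    ```

    For a step-like signal (such as a nerve-impulse waveform), most Haar coefficients vanish, so only a few need to be measured.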

  16. Nanowire humidity optical sensor system based on fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Rota-Rodrigo, S.; Pérez-Herrera, R.; Lopez-Aldaba, A.; López Bautista, M. C.; Esteban, O.; López-Amo, M.

    2015-09-01

    In this paper, a new sensor system for relative humidity measurements based on its interaction with the evanescent field of a nanowire is presented. The interrogation of the sensing head is carried out by monitoring the fast Fourier transform phase variations of one of the nanowire interference frequencies. This method is independent of the signal amplitude and also avoids the need to track the wavelength evolution in the spectrum, which can be a handicap when there are multiple interference frequency components with different sensitivities. The sensor is operated over a wide humidity range (20%-70% relative humidity) with a maximum sensitivity of 0.14 rad/% relative humidity. Finally, because the system uses an optical interrogator as its only active element, it is cost-effective.
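    The amplitude-independent phase readout at a chosen interference frequency can be sketched generically (an illustrative FFT-bin phase extraction, not the instrument's actual interrogation software; names and parameters are assumptions):

    ```python
    import numpy as np

    def interference_phase(signal, sample_rate, target_freq):
        """Phase of one interference frequency component, read from the
        FFT bin nearest to target_freq. Scaling the signal amplitude
        leaves this phase unchanged, which is the property the
        interrogation scheme relies on."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        return np.angle(spectrum[np.argmin(np.abs(freqs - target_freq))])
    ```

    Tracking this phase over time then maps directly to the measurand (here, relative humidity) without any wavelength-peak tracking.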

  17. Two-dimensional electronic spectroscopy based on conventional optics and fast dual chopper data acquisition

    NASA Astrophysics Data System (ADS)

    Heisler, Ismael A.; Moca, Roberta; Camargo, Franco V. A.; Meech, Stephen R.

    2014-06-01

    We report an improved experimental scheme for two-dimensional electronic spectroscopy (2D-ES) based solely on conventional optical components and fast data acquisition. This is accomplished by working with two choppers synchronized to a 10 kHz repetition rate amplified laser system. We demonstrate how scattering and pump-probe contributions can be removed during 2D measurements and how the pump probe and local oscillator spectra can be generated and saved simultaneously with each population time measurement. As an example the 2D-ES spectra for cresyl violet were obtained. The resulting 2D spectra show a significant oscillating signal during population evolution time which can be assigned to an intramolecular vibrational mode.

  18. Analysis of Nickel Based Hardfacing Materials Manufactured by Laser Cladding for Sodium Fast Reactor

    NASA Astrophysics Data System (ADS)

    Aubry, P.; Blanc, C.; Demirci, I.; Dal, M.; Malot, T.; Maskrot, H.

    For improving the operational capacity, maintenance, and decommissioning of the future French Sodium Fast Reactor ASTRID, which is under study, it is necessary to find or develop a cobalt-free hardfacing alloy and the associated manufacturing process that will give satisfactory wear performance. This article presents recent results obtained on selected nickel-based hardfacing alloys manufactured by laser cladding, particularly the Tribaloy 700 alloy. A process parameter search is made, together with microstructural analysis of the resulting clads. Particular attention is paid to the solidification of the main precipitates (chromium carbides, boron carbides, Laves phases, ...) that mainly contribute to the wear properties of the material. Finally, the wear resistance of some samples is evaluated in simple wear conditions, evidencing promising results on the tribological behavior of Tribaloy 700.

  19. Multilevel fast multipole method based on a potential formulation for 3D electromagnetic scattering problems.

    PubMed

    Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome

    2013-06-01

    A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with the MLFMM is compared with unmodified BEM calculations.

  20. Fault Diagnosis of Rolling Bearing Based on Fast Nonlocal Means and Envelop Spectrum

    PubMed Central

    Lv, Yong; Zhu, Qinglin; Yuan, Rui

    2015-01-01

    The nonlocal means (NL-Means) method, widely used in the field of image processing in recent years, effectively overcomes the limitations of the neighborhood filter and eliminates the artifact and edge problems caused by traditional image denoising methods. Although NL-Means is very popular in the field of 2D image signal processing, it has not received enough attention in the field of 1D signal processing. This paper proposes a novel approach that diagnoses rolling bearing faults based on fast NL-Means and the envelope spectrum. The parameters for the rolling bearing signals are optimized in the proposed method, which is the key contribution of this paper. This approach is applied to the fault diagnosis of rolling bearings, and the results have shown its efficiency at detecting roller bearing failures. PMID:25585105
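    A plain (non-accelerated) 1D NL-Means filter can be sketched as below; this is the textbook formulation rather than the paper's fast variant or its optimized parameters, and the parameter values are illustrative:

    ```python
    import numpy as np

    def nl_means_1d(x, patch=5, search=20, h=0.1):
        """1D nonlocal means: each sample is replaced by a weighted
        average of samples in a search window whose surrounding patches
        look similar, with weights exp(-patch_distance / h^2)."""
        n = len(x)
        half = patch // 2
        padded = np.pad(x, half, mode='reflect')
        out = np.zeros(n)
        for i in range(n):
            pi = padded[i:i + patch]  # patch centered on sample i
            lo, hi = max(0, i - search), min(n, i + search + 1)
            weights = np.empty(hi - lo)
            for k, j in enumerate(range(lo, hi)):
                pj = padded[j:j + patch]
                weights[k] = np.exp(-np.mean((pi - pj) ** 2) / h ** 2)
            out[i] = np.sum(weights * x[lo:hi]) / np.sum(weights)
        return out
    ```

    Because weights depend on whole-patch similarity rather than mere proximity, step edges (analogous to impulsive fault signatures) are averaged far less than the surrounding noise.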

  1. Fast Restoration Based on Alternative Wavelength Paths in a Wide Area Optical IP Network

    NASA Astrophysics Data System (ADS)

    Matera, Francesco; Rea, Luca; Venezia, Matteo; Capanna, Lorenzo; Del Prete, Giuseppe

    In this article we describe an experimental investigation of IP network restoration based on wavelength recovery. We propose a procedure for metro and wide area gigabit Ethernet networks that allows us to route the wavelength, in case of link failure, to another existing link by exploiting wavelength division multiplexing in the fiber. This procedure is implemented by means of an optical switch managed by a loss-of-light signal generated by a router in case of link failure. The method has been tested in an IP network consisting of three core routers with optical gigabit Ethernet interfaces connected by 50-km-long single-mode fibers between Rome and Pomezia. Compared with other conventional restoration techniques, such as OSPF and MPLS, our method is very fast (20 ms) and is compatible with real-time TV services and low-cost chips.

  2. Fast prediction unit selection method for HEVC intra prediction based on salient regions

    NASA Astrophysics Data System (ADS)

    Feng, Lei; Dai, Ming; Zhao, Chun-lei; Xiong, Jing-ying

    2016-07-01

    In order to reduce the computational complexity of the high efficiency video coding (HEVC) standard, a new algorithm for HEVC intra prediction, namely a fast prediction unit (PU) size selection method based on salient regions, is proposed in this paper. We first build a saliency map for each largest coding unit (LCU) to reduce its texture complexity. Secondly, the optimal PU size is determined via a scheme that implements an information entropy comparison among sub-blocks of the saliency maps. Finally, we apply the partitioning result of the saliency map to the original LCUs, obtaining the optimal partitioning result. Our algorithm can determine the PU size before the angular prediction in intra coding, reducing the computational complexity of HEVC. The experimental results show that our algorithm achieves a 37.9% reduction in encoding time, while producing a negligible Bjontegaard delta bit rate (BDBR) loss of 0.62%.
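    The entropy quantity being compared among sub-blocks can be sketched as a histogram-based Shannon entropy (an illustrative helper only; how the paper thresholds the comparison and maps it to PU sizes is not specified in this abstract):

    ```python
    import numpy as np

    def block_entropy(block, bins=16):
        """Shannon entropy (bits) of a saliency-map block's intensity
        histogram, assuming saliency values normalized to [0, 1]. A
        uniform block has zero entropy; a textured block has higher
        entropy, suggesting a smaller PU size."""
        hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))
    ```

    Comparing this value for a block against its four sub-blocks is one way to decide whether further PU splitting is worthwhile.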

  3. Proton linac for hospital-based fast neutron therapy and radioisotope production

    SciTech Connect

    Lennox, A.J.; Hendrickson, F.R.; Swenson, D.A.; Winje, R.A.; Young, D.E.; Rush Univ., Chicago, IL; Science Applications International Corp., Princeton, NJ; Fermi National Accelerator Lab., Batavia, IL )

    1989-09-01

    Recent developments in linac technology have led to the design of a hospital-based proton linac for fast neutron therapy. The 180 microamp average current allows the beam to be diverted for radioisotope production during treatments while maintaining an acceptable dose rate. During dedicated operation, dose rates greater than 280 neutron rads per minute are achievable at depth, DMAX = 1.6 cm, with source-to-axis distance SAD = 190 cm. Maximum machine energy is 70 MeV, and several intermediate energies are available for optimizing production of isotopes for positron emission tomography and other medical applications. The linac can be used to produce a horizontal beam, or a gantry can be added to the downstream end of the linac for conventional patient positioning. The 70 MeV protons can also be used for proton therapy of ocular melanomas. 17 refs., 1 fig., 1 tab.

  4. Two-dimensional electronic spectroscopy based on conventional optics and fast dual chopper data acquisition

    SciTech Connect

    Heisler, Ismael A.; Moca, Roberta; Meech, Stephen R.; Camargo, Franco V. A.

    2014-06-15

    We report an improved experimental scheme for two-dimensional electronic spectroscopy (2D-ES) based solely on conventional optical components and fast data acquisition. This is accomplished by working with two choppers synchronized to a 10 kHz repetition rate amplified laser system. We demonstrate how scattering and pump-probe contributions can be removed during 2D measurements and how the pump probe and local oscillator spectra can be generated and saved simultaneously with each population time measurement. As an example the 2D-ES spectra for cresyl violet were obtained. The resulting 2D spectra show a significant oscillating signal during population evolution time which can be assigned to an intramolecular vibrational mode.

  5. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.
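    The "smaller eigenvalue" criterion can be illustrated directly: for a chain of edge points, the smaller eigenvalue of the 2x2 point covariance measures deviation from collinearity (a sketch of the measure only; the paper's top-down splitting strategy around it is not reproduced here):

    ```python
    import numpy as np

    def smaller_eigenvalue(points):
        """Smaller eigenvalue of the 2x2 covariance of an (n, 2) array of
        edge points: near zero for collinear chains, larger as the chain
        bends. A top-down scheme can recurse on a chain until this value
        falls below a tolerance, yielding straight line segments."""
        pts = np.asarray(points, dtype=float)
        cov = np.cov(pts.T)
        return np.linalg.eigvalsh(cov)[0]  # eigvalsh returns ascending order
    ```

    The larger eigenvalue, by contrast, grows with segment length, so the ratio of the two gives a scale-aware straightness score.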

  6. Fast randomized Hough transformation track initiation algorithm based on multi-scale clustering

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Chen, Qian; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    A fast randomized Hough transform track initiation algorithm based on multi-scale clustering is proposed to overcome problems in traditional infrared search and track (IRST) systems, which cannot provide movement information about the initial target or select the correlation threshold automatically with a two-dimensional track association algorithm based on bearing-only information. All targets are assumed to move in uniform rectilinear motion throughout the new algorithm. Concepts of random spatial sampling, a dynamic linking table in parameter space, and convergent mapping from image space to parameter space are developed on the basis of the fast randomized Hough transform. Because peak detection is built on a threshold method, peak values tend to cluster, and accuracy can only be ensured when the parameter space has an obvious peak; a multi-scale idea is therefore added to the algorithm. First, a primary association selects several candidate tracks using a low threshold. Then the candidate tracks are processed by multi-scale clustering, through which the accurate number and parameters of the tracks are determined automatically by transforming the scale parameters. The first three frames are processed by this algorithm to obtain the first three points of each track, and two slightly different gate radii are computed, whose mean value is used as the global correlation threshold. Moreover, a new model for curvilinear equation correction is applied to the track initiation algorithm to solve the problem of shape distortion when a three-dimensional space curve is mapped to a two-dimensional bearing-only space. Using sideways flying, launch, and landing as examples for modeling and simulation, the application of the proposed approach proves its effectiveness, accuracy, and adaptivity.

  7. A fast and scalable content transfer protocol (FSCTP) for VANET based architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, A. F.; Scala, F.; Sottile, C.; Tropea, M.; Raimondo, P.

    2016-05-01

    In modern Vehicular Ad-hoc Network (VANET) based systems, ever more applications require large amounts of data to be exchanged among vehicles and infrastructure entities. Due to mobility issues and unplanned events that may occur, it is important that contents be transferred as fast as possible while taking into account the consistency of the exchanged data and the reliability of the connections. To face these issues, in this work we propose a new data transfer protocol called the Fast and Scalable Content Transfer Protocol (FSCTP). This protocol performs data transfer over a bidirectional channel between content suppliers and receivers, exploiting several cooperative sessions. Each session is based on the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP) to start and manage the transfer. In urban areas the VANET scenario is often composed of several vehicles and infrastructure points. The main idea is to exploit ad-hoc connections between vehicles to reach content suppliers. Moreover, to obtain faster data transfer, more than one session is exploited to achieve a higher transfer rate. Of course, it is important to manage data transfer between suppliers to avoid redundancy and wasted resources. The main goal is to instantiate a cooperative multi-session layer efficiently managed in a VANET environment, exploiting the wide coverage area and avoiding the common issues known in this kind of scenario. High mobility and unstable connections between nodes are among the most common issues to address, so a cooperative design across the network, transport, and application layers is needed.

  8. Two linear time, low overhead algorithms for graph layout

    2008-01-10

    The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time of their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and a small memory footprint, making them useful for small to large graphs.

  9. The perception of surface layout during low level flight

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1991-01-01

    Although it is fairly well established that information about surface layout can be gained from motion cues, it is not so clear as to what information humans can use and what specific information they should be provided. Theoretical analyses tell us that the information is in the stimulus. It will take more experiments to verify that this information can be used by humans to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect the pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed.

  10. PRIMAL: Fast and Accurate Pedigree-based Imputation from Sequence Data in a Founder Population

    PubMed Central

    Livne, Oren E.; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L.

    2015-01-01

    Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effect of disease risk alleles in >1,000 individuals at minimal cost. PMID:25735005

  11. A fast region-based active contour model for boundary detection of echocardiographic images.

    PubMed

    Saini, Kalpana; Dewal, M L; Rohit, Manojkumar

    2012-04-01

    This paper presents boundary detection of the atrium and ventricle in echocardiographic images. In mitral regurgitation, the atrium and ventricle may become dilated; to examine this, doctors currently draw the boundary manually. The aim of this paper is to develop automatic boundary detection for segmenting echocardiographic images. An active contour method is selected for this purpose. Our algorithm is an enhancement of the Chan-Vese model ("Active Contours Without Edges") and is much faster. The method is based on the region information of the image: the region-based force provides a global segmentation with a variational flow robust to noise. The implementation is based on level set theory, so topological changes are easy to handle. The Newton-Raphson method is used, which makes fast boundary detection possible.
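    The region-based force at the heart of such models can be illustrated in a few lines. The sketch below (plain NumPy, illustrative only; the curvature term, reinitialization, and the paper's Newton-Raphson acceleration are omitted) computes the Chan-Vese-style force that moves the contour according to the two region means:

```python
import numpy as np

def region_force(image, phi):
    """Region-based force of a Chan-Vese-style model (sketch).

    `phi` is the level-set function (inside: phi > 0).  The force at
    each pixel compares it with the mean intensities c_in, c_out of
    the two regions; positive force grows the inside region.
    """
    img = image.astype(float)
    inside = phi > 0
    c_in = img[inside].mean() if inside.any() else 0.0
    c_out = img[~inside].mean() if (~inside).any() else 0.0
    # Pixels closer to c_in than c_out are pushed into the region.
    return (img - c_out) ** 2 - (img - c_in) ** 2
```

    Because the force depends only on the two region means, one pass over the image per iteration suffices, which is what makes region-based schemes fast compared to edge-based ones.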

  12. Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Huang, Weiling; Chen, Hanming

    2016-07-01

    Based on compressive sensing (CS) theory from the signal-processing field, we propose a new CS approach built on a fast projection onto convex sets (FPOCS) algorithm with a sparsity constraint in the seislet transform domain. The seislet transform appears to be the sparsest among state-of-the-art sparse transforms. FPOCS converges much faster than conventional POCS (about two thirds of the iterations can be saved) while maintaining the same recovery performance. FPOCS is faster and performs better than FISTA for relatively clean data, but becomes slower and performs worse than FISTA for noisier data, which provides a criterion for deciding which algorithm to use in practice according to the noise level of the seismic data. The seislet-based CS approach achieves clearly better data recovery than f-k transform based scenarios, in terms of signal-to-noise ratio (SNR), local similarity, and visual observation, because of the much sparser structure in the seislet transform domain. Both synthetic and field data examples demonstrate the superior performance of the proposed seislet-based FPOCS approach.
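    The POCS iteration itself is compact: alternate a sparsity projection in the transform domain with a data-consistency projection in the data domain. A minimal NumPy sketch follows, in which a 2-D FFT stands in for the seislet transform and the FISTA-style acceleration of FPOCS is omitted:

```python
import numpy as np

def fpocs(observed, mask, niter=50, thresh=0.1):
    """Sketch of POCS interpolation for subsampled data.

    `observed` holds the acquired data (zeros at missing traces);
    `mask` is 1 where data were acquired, 0 where missing.
    The 2-D FFT below is a stand-in for the seislet transform.
    """
    x = observed.copy()
    for _ in range(niter):
        # Sparsity projection: hard-threshold small transform coefficients.
        coeffs = np.fft.fft2(x)
        coeffs[np.abs(coeffs) < thresh * np.abs(coeffs).max()] = 0
        x = np.real(np.fft.ifft2(coeffs))
        # Data-consistency projection: reinsert the observed samples.
        x = observed + (1 - mask) * x
    return x
```

    The data-consistency step guarantees the acquired samples are honored exactly; the threshold schedule and transform choice govern how well the missing traces are recovered.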

  13. Navigating the Shift to Value-Based Reimbursement: How Fast Is Too Fast, and How Slow Is Too Slow?

    PubMed

    Greeter, Aimee

    2016-01-01

    Providers are struggling to understand how the macro-level changes occurring in the healthcare industry will affect them on a micro-level, especially as they pertain to the shift toward value-based reimbursement. This article presents a guide to physicians and practice administration, in both the private and hospital-employed practice setting, on how to effectively manage this shift from fee-for-volume to fee-for-value. It analyzes new reimbursement models, population health management trends, and second-generation alignment and compensation models to help the reader understand practical tactics and overarching strategies to prepare for the changing method of reimbursement in the health-care industry. The goal of this article is to provide clarity for decision-makers as they embrace the fee-for-value shift in a historically and predominantly fee-for-service environment. PMID:27443053

  14. Infrared image guidance for ground vehicle based on fast wavelet image focusing and tracking

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu

    2009-08-01

    We studied infrared image guidance for a ground vehicle based on fast wavelet image focusing and tracking. The system uses the image from an uncooled infrared imager mounted on a two-axis gimbal, together with a newly developed autofocusing algorithm based on the Daubechies wavelet transform. The algorithm processes the high-pass filter output of the transform to detect objects directly. This focusing method smoothly provides distance information about the outside world, while the gimbal angles give the direction of objects, together matching a spherical coordinate system. We installed the system on a hand-made electric ground-vehicle platform powered by a 24 VDC battery; the vehicle is equipped with rotary encoder units and inertial rate sensor units for accurate navigation. Image tracking also uses the new wavelet focusing within several image-processing steps. The platform is about 1 m long, 0.75 m wide, and 1 m high, and weighs 50 kg. We tested the infrared image guidance on this vehicle both indoors and outdoors, and the tests show good results for the new wavelet image focusing and tracking.
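    The focus measure behind such wavelet autofocus schemes is the energy of the detail (high-pass) coefficients, which peaks when the scene is in focus. In the sketch below a one-level Haar detail stands in for the paper's Daubechies transform (illustrative assumption, not the authors' algorithm):

```python
import numpy as np

def highpass_focus(image):
    """Focus measure from high-pass detail energy (sketch).

    One-level Haar detail coefficients stand in for a Daubechies
    decomposition; an in-focus image concentrates more energy in
    these high-pass coefficients.  Assumes even image dimensions.
    """
    img = image.astype(float)
    # Horizontal and vertical Haar detail coefficients (pairwise differences).
    dh = img[:, 1::2] - img[:, 0::2]
    dv = img[1::2, :] - img[0::2, :]
    return float(np.sum(dh ** 2) + np.sum(dv ** 2))
```

    An autofocus loop then sweeps the lens position and keeps the position that maximizes this measure, which also yields the distance estimate via the lens equation.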

  15. Diffuse correlation spectroscopy with a fast Fourier transform-based software autocorrelator

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bi, Renzhe; Ho, Jun Hui; Thong, Patricia S. P.; Soo, Khee-Chee; Lee, Kijoon

    2012-09-01

    Diffuse correlation spectroscopy (DCS) is an emerging noninvasive technique that probes deep tissue blood flow using the time-averaged intensity autocorrelation function of the fluctuating diffuse reflectance signal. We present a fast Fourier transform (FFT)-based software autocorrelator that uses the graphical programming language LabVIEW (National Instruments) to perform data acquisition, recording, and processing. Validation and evaluation experiments were conducted on an in-house flow phantom, the human forearm, and photodynamic therapy (PDT) of mouse tumors, at an acquisition rate of ~400 kHz. A software autocorrelator has certain general advantages, such as flexibility in preprocessing the raw photon-count data and low cost. In addition, our FFT-based software autocorrelator offers smoother starting and ending plateaus than a hardware correlator, which directly benefits the fitting results without much sacrifice in speed. We show that the blood flow index (BFI) obtained with the software autocorrelator exhibits better linear behavior in a phantom control experiment than that obtained with a hardware one. The results indicate that an FFT-based software autocorrelator can be an alternative to conventional hardware correlators in DCS systems, with considerable benefits.
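    The core of an FFT-based software autocorrelator is the Wiener-Khinchin route: correlate the photon-count trace by multiplying its spectrum with its own conjugate. A minimal single-tau NumPy sketch (the paper's LabVIEW pipeline and any multi-tau binning are not reproduced):

```python
import numpy as np

def g2_fft(counts):
    """Normalized intensity autocorrelation g2(tau) via FFT (sketch).

    `counts` is a 1-D array of photon counts per time bin.  Zero-padding
    to twice the length avoids circular wrap-around, so the result is
    the linear autocorrelation divided by the per-lag overlap count.
    """
    I = np.asarray(counts, dtype=float)
    n = len(I)
    F = np.fft.rfft(I, 2 * n)                    # zero-padded spectrum
    raw = np.fft.irfft(F * np.conj(F))[:n]       # sum_t I(t) I(t + tau)
    overlap = np.arange(n, 0, -1)                # samples contributing per lag
    return (raw / overlap) / (I.mean() ** 2)     # normalize so g2 -> 1
```

    For a fluctuating signal g2(0) = <I^2>/<I>^2 exceeds 1, and the decay of g2 toward 1 carries the flow information that DCS fits for the blood flow index.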

  16. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    Deterministic methods based on ray tracing are known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs; simulated radiographs can typically be obtained in a fraction of a second on a simple personal computer.
    Program summary:
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on ray tracing.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all: A. Program for test 3.1(1): object and detector in axis-aligned orientation; B. Program for test 3.1(2): object in arbitrary orientation; C. Program for test 3.2: simulation of X-ray video.
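    The ray-box intersection at the heart of such simulators is usually done with the slab method. A generic NumPy sketch (not the distributed program's actual routine):

```python
import numpy as np

def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab-method ray / axis-aligned-box intersection (sketch).

    Returns (t_near, t_far) parameter values along the ray, or None
    if the ray misses the box.  Zero direction components produce
    infinite slab limits, which the min/max handling absorbs.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (np.asarray(box_min, float) - o) / d
        t2 = (np.asarray(box_max, float) - o) / d
    t_near = np.max(np.minimum(t1, t2))   # latest entry across all slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit across all slabs
    if t_near > t_far or t_far < 0:
        return None
    return t_near, t_far
```

    The interval [t_near, t_far] then bounds the voxel traversal, so attenuation only has to be accumulated inside the volume.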

  17. Fast intersections on nested tetrahedrons (FINT): An algorithm for adaptive finite element based distributed parameter estimation.

    PubMed

    Lee, Jae Hoon; Joshi, Amit; Sevick-Muraca, Eva M

    2008-01-01

    A variety of biomedical imaging techniques, such as optical and fluorescence tomography, electrical impedance tomography, and ultrasound imaging, can be cast as inverse problems wherein image reconstruction involves estimating spatially distributed parameter(s) of the PDE system describing the physics of the imaging process. Finite element discretization of the imaged domain with tetrahedral elements is a popular way of solving the forward and inverse imaging problems on complicated geometries. A dual-mesh adaptive approach, in which one mesh is used for solving the forward imaging problem and another for iteratively estimating the unknown distributed parameter, can yield high-resolution image reconstruction at minimum computational effort if both meshes are allowed to adapt independently. To date, no efficient method has been reported to identify and resolve intersections between tetrahedrons in independently refined or coarsened dual meshes. Herein, we report a fast and robust algorithm to identify and resolve intersections of tetrahedrons within nested dual meshes generated by the 8-similar subtetrahedron subdivision scheme. The algorithm exploits finite element weight functions and produces a set of weight functions on each vertex of the disjoint tetrahedron pieces that completely cover the intersection region of two tetrahedrons. The procedure enables fully adaptive tetrahedral finite elements by supporting independent refinement and coarsening of each mesh while preserving fast identification and resolution of intersections. The computational efficiency of the algorithm is demonstrated by diffuse photon density wave solutions obtained from a single mesh and a dual mesh, and by reconstructing a fluorescent inclusion in a simulated phantom from boundary frequency-domain fluorescence measurements.
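    The finite element weight functions the algorithm exploits are the barycentric coordinates of a point in a tetrahedron: the four weights sum to one, and the point lies inside exactly when all of them are non-negative. A minimal sketch of that basic predicate (the clique-based intersection machinery itself is not reproduced):

```python
import numpy as np

def barycentric_weights(p, tet):
    """Finite-element (barycentric) weights of point p in tetrahedron `tet`.

    `tet` is a 4x3 array of vertex coordinates.  The weights sum to 1;
    p is inside the (non-degenerate) tetrahedron iff all four are >= 0.
    """
    v0, v1, v2, v3 = np.asarray(tet, float)
    # Solve [v1-v0 | v2-v0 | v3-v0] w = p - v0 for the last three weights.
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    w = np.linalg.solve(T, np.asarray(p, float) - v0)
    return np.concatenate([[1.0 - w.sum()], w])
```

    Intersection tests built on such weights stay consistent with the finite element interpolation itself, which is what lets the disjoint pieces exactly tile the overlap region.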

  18. Fast mass spectrometry-based enantiomeric excess determination of proteinogenic amino acids.

    PubMed

    Fleischer, Heidi; Thurow, Kerstin

    2013-03-01

    Rapid determination of the enantiomeric excess of proteinogenic amino acids is of great importance in various fields of chemical and biological research and industry. Owing to their different biological effects, enantiomers are interesting research subjects in drug development for the design of new and more efficient pharmaceuticals. Usually, the enantiomeric composition of amino acids is determined by conventional analytical methods such as liquid or gas chromatography or capillary electrophoresis. These techniques do not meet the requirements of high-throughput screening because of their relatively long analysis times. The method presented allows fast analysis of chiral amino acids without previous time-consuming chromatographic separation. The analytical measurements are based on parallel kinetic resolution with pseudoenantiomeric mass-tagged auxiliaries and were carried out by mass spectrometry with electrospray ionization. All 19 chiral proteinogenic amino acids were tested, and Pro, Ser, Trp, His, and Glu were selected as model substrates for verification measurements. The enantiomeric excesses of amino acids with non-polar and aliphatic side chains, as well as Trp and Phe (aromatic side chains), were determined with maximum deviations from the expected value of less than or equal to 10 ee%. Ser, Cys, His, Glu, and Asp were determined with deviations of at most 14 ee%, and the enantiomeric excess of Tyr was determined with 17 ee% deviation. The total screening process is fully automated, from sample pretreatment to data processing. The presented method enables fast measurement times of about 1.38 min per sample and is applicable to high-throughput screening.
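    For reference, the quantity being determined is simple arithmetic once the amounts of the two enantiomers (or signals proportional to them) are known; the paper's mass-tagged parallel kinetic resolution calibration is not shown here:

```python
def enantiomeric_excess(major, minor):
    """Enantiomeric excess in ee% from the amounts of the two enantiomers:
    ee% = 100 * (major - minor) / (major + minor)."""
    return 100.0 * (major - minor) / (major + minor)
```

    A 55:45 mixture, for example, has an excess of 10 ee%, which puts the reported 10-17 ee% deviations into perspective.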

  19. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. There is a growing consensus that increasing dimensionality impedes classifier performance, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of Big dimensional data well, exhibiting excellent generalization performance. The drawback of applying SVD to the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM. PMID:26907860
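    The construction described can be sketched briefly: draw random subsets, take leading right singular vectors of each as hidden-node weights, then solve the output weights by least squares. A NumPy sketch under assumed details (tanh activation, subset size of n/2), not the authors' code:

```python
import numpy as np

def fsvd_elm_train(X, Y, n_hidden=8, n_subsets=4, seed=0):
    """Sketch of FSVD-H-ELM training: hidden weights from subset SVDs."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    per = n_hidden // n_subsets
    W = []
    for _ in range(n_subsets):
        idx = rng.choice(n, size=n // 2, replace=False)     # random subset
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        W.append(Vt[:per])             # leading right singular vectors
    W = np.vstack(W)                   # hidden-node weight matrix
    H = np.tanh(X @ W.T)               # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # output weights
    return W, beta

def fsvd_elm_predict(X, W, beta):
    return np.tanh(X @ W.T) @ beta
```

    Each subset SVD costs far less than one SVD of the full data, which is the divide-and-conquer point of the method.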

  2. An algorithm for automated layout of process description maps drawn in SBGN

    PubMed Central

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Motivation: Evolving technology has increased the focus on genomics. The combination of today’s advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, none currently specializes in process description (PD) maps as defined by SBGN. Results: We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes, and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm yields significant improvements over a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26363029
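    The force-directed base scheme that CoSE extends can be sketched in a few lines: adjacent nodes attract like springs, all node pairs repel. A generic Fruchterman-Reingold-style sketch (the SBGN-specific forces, compound nodes, and complex tiling of the actual algorithm are not shown):

```python
import numpy as np

def spring_layout(n_nodes, edges, iters=200, k=1.0, seed=0):
    """Minimal force-directed (spring-embedder) layout sketch in 2-D.

    `k` is the ideal edge length: pairwise repulsion ~ k^2/d,
    edge attraction ~ d^2/k, balanced when d = k.
    """
    rng = np.random.default_rng(seed)
    pos = rng.random((n_nodes, 2))
    for _ in range(iters):
        disp = np.zeros_like(pos)
        # Repulsion between every pair of nodes.
        for i in range(n_nodes):
            d = pos[i] - pos                       # vectors from others to i
            dist = np.linalg.norm(d, axis=1)
            dist[i] = np.inf                       # skip self-interaction
            dist = np.maximum(dist, 1e-9)
            disp[i] += np.sum(d / dist[:, None] ** 2, axis=0) * k ** 2
        # Spring attraction along edges.
        for a, b in edges:
            d = pos[a] - pos[b]
            dist = np.linalg.norm(d) + 1e-9
            f = (dist ** 2 / k) * (d / dist)
            disp[a] -= f
            disp[b] += f
        pos += 0.01 * disp                         # small relaxation step
    return pos
```

    Schemes like CoSE add gravity, compound-node nesting, and, in the SBGN variant above, the rule-specific forces for process nodes and complex members on top of exactly this loop.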

  3. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    PubMed

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-01-01

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used. PMID:26528565
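    To make the encoding principle concrete, a heavily abridged fragment in the spirit of the Layout package is shown below: a glyph ties a position and size to an underlying SBML species. Element names follow the Layout specification, but namespace declarations and several required attributes are omitted, so this is an illustration rather than a valid SBML document:

```xml
<layout:listOfLayouts>
  <layout:layout layout:id="layout1">
    <layout:dimensions layout:width="400" layout:height="220"/>
    <!-- A glyph is a positioned representation of a model element,
         linked to the SBML species it depicts. -->
    <layout:listOfSpeciesGlyphs>
      <layout:speciesGlyph layout:id="glyph_A" layout:species="A">
        <layout:boundingBox>
          <layout:position layout:x="40" layout:y="100"/>
          <layout:dimensions layout:width="60" layout:height="30"/>
        </layout:boundingBox>
      </layout:speciesGlyph>
    </layout:listOfSpeciesGlyphs>
  </layout:layout>
</layout:listOfLayouts>
```

    The direct link from glyph to species is what distinguishes this from a generic drawing format such as SVG or GraphML.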

  4. 10. Floor Layout of Thermal Hydraulics Laboratory, from The Thermal ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Floor Layout of Thermal Hydraulics Laboratory, from The Thermal Hydraulics Laboratory at Hanford. General Electric Company, Hanford Atomic Products Operation, Richland, Washington, 1961. - D-Reactor Complex, Deaeration Plant-Refrigeration Buildings, Area 100-D, Richland, Benton County, WA

  5. 11. General layout of Erie tracks and roundhouse west of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. General layout of Erie tracks and roundhouse west of old Bergen Tunnel, looking west from Tonnel Avenue bridge, taken June 14, 1906 - Erie Railway, Bergen Hill Open Cut, Palisade Avenue to Tonnele Avenue, Jersey City, Hudson County, NJ

  6. 120. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF EXISTING PIER Sheet 2 of 11 (#3274) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  7. 121. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF EXISTING PIER Sheet 3 of 11 (#3275) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  8. 122. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF EXTENSION TO PIER Sheet 4 of 11 (#3276) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  9. Supporting the design of office layout meeting ergonomics requirements.

    PubMed

    Margaritis, Spyros; Marmaras, Nicolas

    2007-11-01

    This paper proposes a method and an information technology tool to support the ergonomic layout design of individual workstations in a given space (building). The proposed method shares ideas with previous generic methods for office layout; however, it goes a step further and focuses on the cognitive tasks that must be carried out by the designer or the design team, seeking to alleviate them. This is achieved in two ways: (i) by decomposing the layout design problem into six main stages, during which only a limited number of variables and requirements are considered, and (ii) by converting the ergonomics requirements into functional design guidelines. The information technology tool (ErgoOffice 0.1) automates certain phases of the layout design process and supports the design team both through its editing and graphical facilities and by providing adequate memory support.

  10. 32. INTERIOR LAYOUT PLAN OF CROSSCUT STEAM AND DIESEL PLANT, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. INTERIOR LAYOUT PLAN OF CROSSCUT STEAM AND DIESEL PLANT, TRACED FROM DRAWING BY C.C. MOORE AND CO., ENGINEERS. July 1947 - Crosscut Steam Plant, North side Salt River near Mill Avenue & Washington Street, Tempe, Maricopa County, AZ

  11. Ad Layout Students Become "Artists" with Viewer Device

    ERIC Educational Resources Information Center

    Engel, Jack

    1977-01-01

    Suggests that the use of a projection viewer employed by professional art studios to make revised enlargements or reductions of existing art can improve the appearance of layouts done by creative, but artistically unskilled, students. (KS)

  12. 30. CONSTRUCTION LAYOUT DETAILS OF OUTLET WORKS AND SPILLWAY. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    30. CONSTRUCTION LAYOUT - DETAILS OF OUTLET WORKS AND SPILLWAY. Sheet C-10, September, 1938. File no. SA 343/1. - Prado Dam, Outlet Works, Santa Ana River near junction of State Highways 71 & 91, Corona, Riverside County, CA

  13. 29. TRACK LAYOUT, INDEX TO DRAWINGS AND INDEX TO MATERIALS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. TRACK LAYOUT, INDEX TO DRAWINGS AND INDEX TO MATERIALS, REED & STEM ARCHITECTS, ST. PAUL, NEW YORK, 1909 (Burlington Northern Collection, Seattle, Washington) - Union Passenger Station Concourse, 1713 Pacific Avenue, Tacoma, Pierce County, WA

  14. Intelligent data layout mechanism for high-performance image retrieval

    NASA Astrophysics Data System (ADS)

    Leung, Kelvin T.; Tao, Wenchao; Yang, Limin; Kimme-Smith, Carolyn; Bassett, Lawrence W.; Valentino, Daniel J.

    1998-06-01

    Trends in medical imaging indicate that the storage requirements of digital medical datasets call for a more efficient, scalable storage architecture for large-scale RIS/PACS that supports high-speed retrieval by multiple concurrent clients. As storage and networking technologies mature, the cost of applying them in medical imaging has become economically viable. We propose to take advantage of such economies of scale to provide an effective networked workstation storage solution achieving (1) faster display and navigation response time, (2) higher server throughput, and (3) better data storage management. Full-field direct digital mammography presents a challenging problem for the design of digital workstation systems for screening and diagnosis. Owing to the spatial and contrast resolution required for mammography, the digital images are large (exceeding 5K x 6K x 14 bits, approximately 60 MB per image) and therefore difficult to display using commercially available technology. We are developing clinically useful methods of storing, displaying, and manipulating large digital images in a medical media server using commercial technology. In this paper we propose an Intelligent Grid-based Data Layout Mechanism to optimize the total response time of a reading by minimizing image access time (data I/O time) and the number of data access requests to the server (queueing effects) during image navigation. A Navigation Threads Model is developed to characterize the performance of the many navigation threads involved in the course of a reading session. In our grid-based data layout approach, a large 2D direct-digital mammogram is divided spatially into many small 2D grids and stored across an array of magnetic disks to provide parallel grid-based readout services to clients.
    Such a grid-based approach not only provides fine-granularity control, but also provides a means of collecting statistical information about
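    The mapping from image grids to disks can be sketched as a simple round-robin assignment. Tile size and striping policy below are illustrative assumptions, not the paper's exact mechanism:

```python
def grid_tiles(width, height, tile, n_disks):
    """Tile a large image into a 2-D grid and round-robin the tiles
    across an array of disks (sketch).

    Returns a dict mapping (row, col) grid coordinates to a disk index,
    so a viewport request can be served by several disks in parallel.
    """
    cols = (width + tile - 1) // tile      # ceiling division
    rows = (height + tile - 1) // tile
    return {(r, c): (r * cols + c) % n_disks
            for r in range(rows) for c in range(cols)}
```

    For a 5120 x 6144 mammogram with 512-pixel tiles on 8 disks, the 120 tiles spread evenly at 15 per disk, so a pan or zoom request fans out across the whole array instead of queueing on one spindle.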

  15. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color-space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules that split the image into three regions: shadow + vegetation, bare soil + roads, and buildings; (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters, followed by simple morphological operations to remove noise. Evaluation of the results shows that buildings detected in dense and suburban districts with diverse characteristics and color combinations achieve 88.6% pixel-based and 85.5% object-based overall precision, respectively.
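    The masking step's two-cluster K-means reduces, in one dimension, to a few lines. A generic sketch (the PFICA stages themselves and the LUV-based seeding are not reproduced):

```python
import numpy as np

def kmeans_mask(values, iters=20):
    """Two-cluster 1-D k-means masking (sketch).

    Returns a boolean mask selecting the higher-valued cluster,
    e.g. the building-response pixels of an unmixing output.
    """
    v = np.asarray(values, float).ravel()
    c = np.array([v.min(), v.max()])           # extreme-value centroids
    for _ in range(iters):
        assign = np.abs(v[:, None] - c).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):            # guard empty clusters
                c[k] = v[assign == k].mean()
    return (assign == c.argmax()).reshape(np.shape(values))
```

    Morphological opening/closing would then be applied to this mask to remove speckle, as in step (3) above.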

  16. [Super-resolution reconstruction of lung 4D-CT images based on fast sub-pixel motion estimation].

    PubMed

    Xiao, Shan; Wang, Tingting; Lü, Qingwen; Zhang, Yu

    2015-07-01

    Super-resolution image reconstruction techniques play an important role in improving the image resolution of lung 4D-CT. We present a super-resolution approach based on fast sub-pixel motion estimation to reconstruct lung 4D-CT images. A fast sub-pixel motion estimation method is used to estimate the deformation fields between "frames", and the iterative back projection (IBP) algorithm is then employed to reconstruct high-resolution images. Experimental results showed that, compared with a traditional interpolation method and a super-resolution reconstruction algorithm based on full-search motion estimation, the proposed method produces clearer images with significantly enhanced structural detail and reduced computation time.
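    The IBP loop at the core of such methods can be sketched for a single frame, using box-average downsampling as the assumed imaging model; the deformation-field warping between 4D-CT frames is omitted:

```python
import numpy as np

def ibp_superres(lr, scale=2, iters=20):
    """Iterative back-projection (IBP) sketch for one low-res frame.

    Each iteration simulates the low-res image from the current
    high-res estimate and back-projects the residual onto the
    high-res grid.  Assumes lr dimensions are exact multiples.
    """
    # Initial guess: nearest-neighbour upsampling.
    hr = np.kron(lr, np.ones((scale, scale)))
    for _ in range(iters):
        # Simulate low-res acquisition: block-average the estimate.
        sim = hr.reshape(lr.shape[0], scale,
                         lr.shape[1], scale).mean(axis=(1, 3))
        # Back-project the residual onto the high-res grid.
        hr += np.kron(lr - sim, np.ones((scale, scale)))
    return hr
```

    In the multi-frame setting each frame's residual is first warped by the estimated deformation field before back-projection, which is where the fast sub-pixel motion estimation enters.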

  17. 36. Photograph of a line drawing. 'PLAN LAYOUT OF PART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    36. Photograph of a line drawing. 'PLAN LAYOUT OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS E-1 TO E-10 INCL., WASHING, MANUFACTURING AREA PLANT 'B'.' From the U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  18. 27. Photograph of a line drawing. 'PLAN LAYOUT AND CROSS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. Photograph of a line drawing. 'PLAN LAYOUT AND CROSS SECTION OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS C-1, C-3, C-5, C-6, C-7, C-9 INCL., MIXING, MANUFACTURING AREA, PLANT 'B'.' From the U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  19. 48. Photocopy of Architectural Layout drawing, dated August 6, 1976 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    48. Photocopy of Architectural Layout drawing, dated August 6, 1976 by Raytheon Company. Original drawing property of United States Air Force, 21st Space Command. AL-2 - PAVE PAWS TECHNICAL FACILITY - OTIS AFB - EQUIPMENT LAYOUT - SECOND FLOOR AND PLATFORM 2A. DRAWING NO. AW35-46-06 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  20. 59. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    59. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-6 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 4-A, 5TH & 5-A. DRAWING NO. AL-6 - SHEET 7 OF 21. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  1. 57. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    57. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-3 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 1ST FLOOR AND 1ST FLOOR ROOF. DRAWING NO. AL-3 - SHEET 4 OF 21. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  2. 58. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    58. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-5 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 3RD, 3A, 4TH LEVELS. DRAWING NO. AL-5 - SHEET 6 OF 21 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  3. 44. Photograph of a line drawing. 'PLAN LAYOUT OF PART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    44. Photograph of a line drawing. 'PLAN LAYOUT OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS H-1 TO H-10 INCL., GRINDING, MANUFACTURING AREA, PLANT 'B'.' From U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  4. 31. Photograph of a line drawing. 'PLAN LAYOUT OF PART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. Photograph of a line drawing. 'PLAN LAYOUT OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS D-1 TO D-10 INCL., NITRATION, MANUFACTURING AREA, PLANT 'B'.' From U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  5. Global scene layout modulates contextual learning in change detection.

    PubMed

    Conci, Markus; Müller, Hermann J

    2014-01-01

    Change in the visual scene often goes unnoticed - a phenomenon referred to as "change blindness." This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of "global precedence" in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  6. Layout dependent effects analysis on 28nm process

    NASA Astrophysics Data System (ADS)

    Li, Helen; Zhang, Mealie; Wong, Waisum; Song, Huiyuan; Xu, Wei; Hurat, Philippe; Ding, Hua; Zhang, Yifan; Cote, Michel; Huang, Jason; Lai, Ya-ch

    2015-03-01

    Advanced process nodes introduce new variability effects due to increased density, new materials, new device structures, and so forth. This creates more and stronger layout dependent effects (LDE), especially below 28nm. Effects such as WPE (well proximity effect) and PSE (poly spacing effect) change the carrier mobility and threshold voltage, and therefore make device performance metrics such as Vth and Idsat extremely layout dependent. In traditional flows, the impact of these changes can only be simulated after the block has been fully laid out and the design is LVS- and DRC-clean. This is too late in the design cycle and increases the number of post-layout iterations. We collaborated to develop a method on an advanced process to embed several LDE sources into an LDE kit. We integrated this LDE kit into a custom analog design environment for LDE analysis at an early design stage. These features allow circuit and layout designers to detect the variations caused by LDE and to fix the weak points they cause. In this paper, we present this method and show how it accelerates design convergence of advanced-node custom analog designs by detecting LDE hotspots early on in partially or fully placed layouts, reporting the contribution of each LDE component to help identify the root cause of LDE variation, and even providing fixing guidelines on how to modify the layout to reduce the LDE impact.

  7. Revisiting the layout decomposition problem for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu; Yao, Hailong

    2008-10-01

    In double patterning lithography (DPL) layout decomposition for 45nm and below process nodes, two features must be assigned opposite colors (corresponding to different exposures) if their spacing is less than the minimum coloring spacing [5, 11, 14]. However, there exist pattern configurations for which pattern features separated by less than the minimum coloring spacing cannot be assigned different colors. In such cases, DPL requires that a layout feature be split into two parts. We address this problem using a layout decomposition algorithm that incorporates integer linear programming (ILP), phase conflict detection (PCD), and node-deletion bipartization (NDB) methods. We evaluate our approach on both real-world and artificially generated testcases in 45nm technology. Experimental results show that our proposed layout decomposition method effectively decomposes given layouts to satisfy the key goals of minimized line-ends and maximized overlap margin. There are no design rule violations in the final decomposed layout. While we have previously reported other facets of our research on DPL pattern decomposition [6], the present paper differs from that work in the following key respects: (1) instead of detecting conflict cycles and splitting nodes in conflict cycles to achieve graph bipartization [6], we split all nodes of the conflict graph at all feasible dividing points and then formulate a problem of bipartization by ILP, PCD [8] and NDB [9] methods; and (2) instead of reporting unresolvable conflict cycles, we report the number of deleted conflict edges to more accurately capture the needed design changes in the experimental results.
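The core coloring task can be sketched as a BFS two-coloring of the conflict graph; edges left monochromatic after coloring correspond to odd cycles that would force a feature split. `two_color` below is a simplified illustration with invented names, not the paper's ILP/PCD/NDB machinery:

```python
from collections import deque

def two_color(n, conflict_edges):
    """Greedy BFS two-coloring of a DPL conflict graph (sketch).

    Nodes are layout features; an edge joins two features closer than the
    minimum coloring spacing. Returns (colors, violated): violated edges
    have same-colored endpoints, i.e. they sit on odd conflict cycles.
    """
    color = [-1] * n
    for start in range(n):
        if color[start] != -1:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for a, b in conflict_edges:
                if u in (a, b):
                    v = b if u == a else a
                    if color[v] == -1:
                        color[v] = 1 - color[u]
                        q.append(v)
    violated = [(a, b) for a, b in conflict_edges if color[a] == color[b]]
    return color, violated

# Triangle of features: an odd cycle, so one edge must stay monochromatic.
colors, bad = two_color(3, [(0, 1), (1, 2), (2, 0)])
```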

  8. 70 Group Neutron Fast Reactor Cross Section Set Based on JENDL-2B.

    1984-02-06

    Version 00 These multigroup cross sections are used in fast reactor calculations. The benchmark calculations for the 23 fast critical assemblies used in the benchmark tests of JFS-2 were performed with one-dimensional diffusion theory by using the JFS-3-J2 set.

  9. FAST: FAST Analysis of Sequences Toolbox

    PubMed Central

    Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.

    2015-01-01

    FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145
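In the spirit of `fasgrep`, a header-regex filter over Multi-FastA records can be sketched in a few lines of Python. This is a toy analogue only; FAST itself is a Perl/BioPerl toolbox and this is not its implementation:

```python
import re
from io import StringIO

def fasgrep_like(handle, pattern):
    """Return (header, sequence) pairs whose header matches a regex."""
    rx = re.compile(pattern)
    header, seq, out = None, [], []
    for line in list(handle) + ['>']:          # sentinel flushes last record
        line = line.rstrip('\n')
        if line.startswith('>'):
            if header is not None and rx.search(header):
                out.append((header, ''.join(seq)))
            header, seq = line[1:], []
        else:
            seq.append(line)
    return out

fasta = StringIO(">tRNA-Ala sample\nGGGGCU\n>rRNA sample\nAUGCAU\n")
hits = fasgrep_like(fasta, r"tRNA")
```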

  11. A laser diode based system for calibration of fast time-of-flight detectors

    NASA Astrophysics Data System (ADS)

    Bertoni, R.; Bonesini, M.; de Bari, A.; Rossella, M.

    2016-05-01

    A system for the time calibration of multi-channel time-of-flight (TOF) detectors with photomultiplier (PMT) readout may be assembled from commercially available items such as a laser diode emitting in the visible range (~ 400 nm), multimode fiber patches, fused fiber splitters, and optical switches. As available laser diode sources unfortunately have limited peak power, the main experimental problem is the tight optical power budget of such a system. In addition, while the technology for fused fiber splitters is common in the telecom wavelength range (λ ~ 850, 1300-1500 nm), it is not easily available in the visible range. Extensive laboratory tests therefore had to be performed to qualify the optical components used, and a full-scale timing calibration prototype was built. The results show that with such a system a calibration resolution (σ) in the range 20-30 ps is within reach; fast multi-channel TOF detectors with timing resolutions in the range 50-100 ps may therefore easily be calibrated in time. The results on the tested optical components may also be of interest for the time calibration of other PMT-based light detection systems, such as those used to detect the vacuum ultraviolet scintillation light emitted by ionizing particles in large LAr TPCs.

  12. Fast k-space-based evaluation of imaging properties of ultrasound apertures

    NASA Astrophysics Data System (ADS)

    Zapf, M.; Dapp, R.; Hardt, M.; Henning, P. A.; Ruiter, N. V.

    2011-03-01

    At the Karlsruhe Institute of Technology (KIT), a three-dimensional ultrasound computer tomography (3D USCT) system for early breast cancer diagnosis is being developed. The method promises reproducible volume images of the female breast in 3D. Initial measurements and a simulation-based optimization method, which took several physical properties into account, led to a new aperture setup. Yet this simulation is computationally too demanding to systematically evaluate the different 'virtual' apertures that can be achieved by rotation and lifting of the system. In optics, a Fourier-based approach is available for simulating imaging systems as linear systems. This concept was evaluated for the two apertures used in our project and one hypothetical linear-array aperture, and compared to a reference simulation. An acceptable conformity between the new approach and the reference simulation could be shown. With this approach, a fast evaluation of optimal 'virtual' apertures for specific measurement objects and imaging constraints can be carried out within acceptable time.
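The Fourier linear-systems idea can be sketched for a 2-D aperture: in the Fraunhofer regime the far-field amplitude is (up to scaling) the Fourier transform of the aperture function. The code below is a generic illustration of that principle, not the KIT simulation:

```python
import numpy as np

def far_field_pattern(aperture):
    """Far-field (Fraunhofer) intensity of an aperture via FFT.

    Treats the imaging system as linear: the far-field amplitude is the
    2-D Fourier transform of the aperture function; intensity is |.|^2.
    """
    amp = np.fft.fftshift(np.fft.fft2(aperture))
    return np.abs(amp) ** 2

# A wider aperture concentrates more energy into a narrower main lobe.
narrow = np.zeros((64, 64)); narrow[30:34, 30:34] = 1.0
wide = np.zeros((64, 64)); wide[24:40, 24:40] = 1.0
p_narrow, p_wide = far_field_pattern(narrow), far_field_pattern(wide)
```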

  13. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets; the existing algorithms therefore cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Using a power-transformation-based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. PMID:24929345
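The kernel target alignment criterion itself is straightforward to compute; the grid search below is a naive stand-in for the paper's global SSGO optimization, with toy data assumed for the sketch:

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix for row-vector samples X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def alignment(K, y):
    """Kernel target alignment <K, y y^T>_F / (||K||_F * ||y y^T||_F)."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

# Toy selection: score a grid of Gaussian widths by target alignment.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([1, 1, -1, -1])
scores = {s: alignment(gaussian_kernel(X, s), y) for s in (0.01, 1.0, 100.0)}
best = max(scores, key=scores.get)
```

Too small a width makes the kernel nearly the identity and too large a width makes it nearly all-ones; the intermediate width aligns best with the class structure.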

  14. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities; by supplying electricity to important loads from their own generators, they can avoid voltage sags and other distribution-system disturbances. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches and storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing, and fast relaying algorithms are indispensable for realizing such systems. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT). The DWT provides the means to detect discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components caused by the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
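The discontinuity-detection idea can be sketched with a level-1 Haar DWT and a robust threshold; the waveform, the MAD-based threshold rule, and the names below are assumptions for illustration, not the author's exact wavelet-thresholding model:

```python
import numpy as np

def haar_details(x):
    """Level-1 Haar DWT detail coefficients (scaled pairwise differences)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def detect_discontinuity(x, k=5.0):
    """Flag detail coefficients that exceed a robust (MAD-based) threshold.

    A crude stand-in for wavelet thresholding: a sudden jump in the
    waveform (e.g. inrush onset) yields an outlying detail spike.
    """
    d = haar_details(x)
    mad = np.median(np.abs(d - np.median(d))) + 1e-12
    return np.where(np.abs(d) > k * mad)[0]

# Smooth sinusoid with a step between samples 64 and 65: the
# discontinuity shows up as a spike in detail-coefficient pair 32.
t = np.arange(128)
current = np.sin(2 * np.pi * t / 64.0)
current[65:] += 5.0
spikes = detect_discontinuity(current)
```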

  15. Cloning of expansin genes in ramie (Boehmeria nivea L.) based on universal fast walking.

    PubMed

    Chen, Jie; Dai, Lunjin; Wang, Bo; Liu, Lijun; Peng, Dingxiang

    2015-09-10

    Gene cloning is the first step in studying the expression profiles and functions of a particular gene, and numerous cloning methods have been developed. Expansins, thought to be involved in cell-wall modification events, had not been cloned in ramie (Boehmeria nivea L.), one of the most important bast fiber crops, on which little molecular research has been conducted, especially regarding its fiber development. Studying the expansin gene family may uncover its possible relationship with ramie fiber development and other growth events. As a result, five expansin genes were cloned at full length and their sequence information was investigated. A phylogenetic analysis was also conducted, which suggested that the cloned genes belong to the α-subfamily; these genes were expressed differently during the ramie fiber developmental process. In this study, we aimed to apply a strategy for cloning novel full-length genes from the genomic DNA of ramie based on degenerate primers, touchdown polymerase chain reaction, and universal fast walking protocols. Having cloned five full-length expansin genes, we believe this polymerase chain reaction-based cloning strategy could be applied to general gene studies in ramie and other crops.

  17. Field calibration of binocular stereo vision based on fast reconstruction of 3D control field

    NASA Astrophysics Data System (ADS)

    Zhang, Haijun; Liu, Changjie; Fu, Luhua; Guo, Yin

    2015-08-01

    Construction of high-speed railways in China has entered a period of rapid growth. Accurately and quickly obtaining the dynamic envelope curve of a high-speed vehicle is an important guarantee of safe operation. The measuring system is based on binocular stereo vision. Considering the difficulties of field calibration, such as environmental changes and time limits, we developed a field calibration method based on fast reconstruction of a three-dimensional control field. After rapid assembly of the pre-calibrated three-dimensional control field, whose coordinate accuracy is guaranteed by manufacturing accuracy and verified with V-STARS, the two cameras photograph it simultaneously. The field calibration parameters are then solved by a method combining a linear solution with nonlinear optimization. Experimental results showed that the measurement accuracy can reach +/- 0.5 mm and, more importantly, that the speed of calibration and the portability of the devices are improved considerably while accuracy is preserved.
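The "linear solution" stage of a stereo measurement can be sketched with midpoint triangulation of two camera rays, solved as a small least-squares system; the function and the example rays below are illustrative assumptions, not the paper's calibration math:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point between two camera rays (midpoint method).

    Solves the linear least-squares system for the ray parameters t, s in
    t*d1 - s*d2 = o2 - o1, then returns the midpoint of the two closest
    points; a nonlinear refinement would normally follow this linear step.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)                  # 3x2 system matrix
    t, s = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return (o1 + t * d1 + o2 + s * d2) / 2.0

# Two rays that intersect exactly at (0, 0, 5).
p = triangulate_midpoint(np.array([-1.0, 0, 0]), np.array([1.0, 0, 5]),
                         np.array([ 1.0, 0, 0]), np.array([-1.0, 0, 5]))
```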

  18. GPU-based ultra-fast direct aperture optimization for online adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Jia, Xun; Jiang, Steve B.

    2010-08-01

    Online adaptive radiation therapy (ART) has great promise to significantly reduce normal tissue toxicity and/or improve tumor control through real-time treatment adaptations based on the current patient anatomy. However, the major technical obstacle for clinical realization of online ART, namely the inability to achieve real-time efficiency in treatment re-planning, has yet to be solved. To overcome this challenge, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) direct aperture optimization (DAO) algorithm on the graphics processing unit (GPU) based on our previous work on the CPU. We formulate the DAO problem as a large-scale convex programming problem, and use an exact method called the column generation approach to deal with its extremely large dimensionality on the GPU. Five 9-field prostate and five 5-field head-and-neck IMRT clinical cases with 5 × 5 mm2 beamlet size and 2.5 × 2.5 × 2.5 mm3 voxel size were tested to evaluate our algorithm on the GPU. It takes only 0.7-3.8 s for our implementation to generate high-quality treatment plans on an NVIDIA Tesla C1060 GPU card. Our work has therefore solved a major problem in developing ultra-fast (re-)planning technologies for online ART.

  20. Fast community detection based on sector edge aggregation metric model in hyperbolic space

    NASA Astrophysics Data System (ADS)

    Wang, Zuxi; Li, Qingguang; Xiong, Wei; Jin, Fengdong; Wu, Yao

    2016-06-01

    By studying the edge aggregation characteristics of nodes in hyperbolic space, a Sector Edge Aggregation Metric (SEAM) model is proposed and theoretically proved in this paper. In the hyperbolic disk, the SEAM model determines the minimum angular range of a sector that possesses the maximal edge aggregation of nodes; the set of nodes within such a sector has dense internal links, which corresponds to the characteristic of community structure. Based on the SEAM model, we propose a fast community detection algorithm called the Greedy Optimization Modularity Algorithm (GOMA), which employs a greedy optimization strategy and hyperbolic coordinates. GOMA first divides the network into initial communities according to the quantitative sector edge aggregation results given by SEAM and the nodes' hyperbolic coordinates; then, following the greedy optimization strategy, it merges only angularly neighboring pairs of communities in the hyperbolic disk to optimize the network modularity function, and consequently obtains high-quality community detection. The strategies of initial community partition and merging in hyperbolic space greatly improve the speed of searching for the optimal modularity. Experimental results indicate that GOMA detects high-quality community structure in synthetic and real networks, and performs better when applied to large-scale, dense networks with strong clustering.
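The objective that such greedy merging optimizes is standard Newman modularity; a minimal sketch of the function itself (not GOMA) on a toy graph of two triangles:

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity Q = (1/2m) sum_ij (A_ij - k_i k_j / 2m) δ(c_i, c_j)."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                  # node degrees
    two_m = adj.sum()                    # 2m for an undirected graph
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i, j] - k[i] * k[j] / two_m
    return q / two_m

# Two triangles joined by one edge: the natural split scores higher
# than lumping everything into a single community (which gives Q = 0).
A = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1
q_split = modularity(A, [0, 0, 0, 1, 1, 1])
q_all = modularity(A, [0, 0, 0, 0, 0, 0])
```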

  1. Reference Beam Pattern Design for Frequency Invariant Beamforming Based on Fast Fourier Transform.

    PubMed

    Zhang, Wang; Su, Tao

    2016-09-22

    In the field of fast Fourier transform (FFT)-based frequency invariant beamforming (FIB), there is still an unsolved problem: the selection of the reference beam that makes the designed wideband pattern frequency invariant (FI) over a given frequency range. This problem is studied in this paper. The research shows that, for a given array, the selection of the reference beam pattern is determined by the number of sensors and the ratio of the highest to the lowest frequency of the signal (RHL). The length of the weight vector corresponding to a given reference beam pattern depends on the reference frequency. In addition, the upper bound on the weight length that ensures the FI property over the whole frequency band of interest is also given. Adding constraints to the reference beam does not affect the FI property of the designed wideband beam as long as the symmetry of the reference beam is preserved. Based on this conclusion, a scheme for reference beam design is proposed.
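The FFT evaluation of an array pattern that underlies FFT-based FIB can be sketched for a uniform linear array; `beam_pattern` below is an assumed helper for illustration, not the paper's reference-beam design procedure:

```python
import numpy as np

def beam_pattern(weights, n_fft=256):
    """Array-factor magnitude of a uniform linear array via zero-padded FFT.

    The FFT evaluates sum_n w_n * exp(-j*2*pi*k*n/N) on a dense grid of
    spatial frequencies, which is the discrete array factor sampled over
    angle; fftshift centers the broadside direction.
    """
    return np.abs(np.fft.fftshift(np.fft.fft(weights, n_fft)))

# Uniform weights give the classic Dirichlet (sinc-like) main lobe at broadside.
w = np.ones(8)
p = beam_pattern(w)
```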

  2. Estimates for Pu-239 loadings in burial ground culverts based on fast/slow neutron measurements

    SciTech Connect

    Winn, W.G.; Hochel, R.C.; Hofstetter, K.J.; Sigg, R.A.

    1989-08-15

    This report provides guideline estimates for Pu-239 mass loadings in selected burial ground culverts. The relatively high recorded Pu-239 contents of these culverts have been appraised as suspect with respect to criticality concerns, because they were assayed only with the solid waste monitor (SWM) via gamma-ray counting. After 1985, subsequent waste was also assayed with the neutron coincidence counter (NCC), and a comparison of the assay methods showed that the NCC generally yielded higher assays than the SWM. These higher NCC readings signaled a need to conduct non-destructive, non-intrusive nuclear interrogations of these culverts, and a technical team conducted scoping measurements to illustrate potential assay methods based on neutron and/or gamma counting. A fast/slow neutron method has been developed to estimate the Pu-239 in the culverts. In addition, the loading records include the SWM assays of all Pu-239 cuts of some of the culvert drums, and these data are useful in estimating the corresponding NCC drum assays from NCC vs. SWM data. Together, these methods yield predictions based on direct measurements and statistical inference.

  3. SDM: a fast distance-based approach for (super) tree building in phylogenomics.

    PubMed

    Criscuolo, Alexis; Berry, Vincent; Douzery, Emmanuel J P; Gascuel, Olivier

    2006-10-01

    Phylogenomic studies aim to build phylogenies from large sets of homologous genes. Such "genome-sized" data require fast methods, because of the typically large numbers of taxa examined. In this framework, distance-based methods are useful for exploratory studies and building a starting tree to be refined by a more powerful maximum likelihood (ML) approach. However, estimating evolutionary distances directly from concatenated genes gives poor topological signal as genes evolve at different rates. We propose a novel method, named super distance matrix (SDM), which follows the same line as average consensus supertree (ACS; Lapointe and Cucumel, 1997) and combines the evolutionary distances obtained from each gene into a single distance supermatrix to be analyzed using a standard distance-based algorithm. SDM deforms the source matrices, without modifying their topological message, to bring them as close as possible to each other; these deformed matrices are then averaged to obtain the distance supermatrix. We show that this problem is equivalent to the minimization of a least-squares criterion subject to linear constraints. This problem has a unique solution, which is obtained by solving a linear system. As this system is sparse, its practical resolution requires O(n^a k^a) time, where n is the number of taxa, k the number of matrices, and a < 2, which allows the distance supermatrix to be quickly obtained. Several uses of SDM are proposed, from fast exploratory studies to more accurate approaches requiring heavier computing time. Using simulations, we show that SDM is a relevant alternative to the standard matrix representation with parsimony (MRP) method, notably when the taxa sets of the different genes have low overlap. We also show that SDM can be used to build an excellent starting tree for an ML approach, which both reduces the computing time and increases the topological accuracy. We use SDM to analyze the data set of Gatesy et al. (2002, Syst. Biol. 51: 652
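    The deform-then-average idea can be illustrated with a toy sketch (hypothetical helper `sdm_toy`; the real SDM solves a constrained least-squares system, whereas this sketch only rescales each per-gene matrix to a common average rate before averaging overlapping entries):

```python
from itertools import combinations

def sdm_toy(matrices):
    """Toy SDM-style combination: rescale each per-gene distance matrix
    to a common average rate, then average the rescaled entries per
    taxon pair into a distance supermatrix.
    Each matrix is a dict mapping a sorted taxon pair to a distance."""
    scaled = []
    for m in matrices:
        mean = sum(m.values()) / len(m)      # crude per-gene rate estimate
        scaled.append({pair: d / mean for pair, d in m.items()})
    taxa = sorted({t for m in scaled for pair in m for t in pair})
    super_m = {}
    for pair in combinations(taxa, 2):
        vals = [m[pair] for m in scaled if pair in m]
        if vals:                             # pair present in >= 1 gene
            super_m[pair] = sum(vals) / len(vals)
    return super_m

# Two "genes" with the same tree-like distances but a 2x rate difference.
g1 = {("A", "B"): 2.0, ("A", "C"): 4.0, ("B", "C"): 4.0}
g2 = {("A", "B"): 4.0, ("A", "C"): 8.0, ("B", "C"): 8.0}
combined = sdm_toy([g1, g2])   # rate difference cancels out
```

    After rescaling, both genes contribute identical relative distances, so the supermatrix preserves the shared topological signal despite the rate difference.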

  4. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids.

    PubMed

    Boschitsch, Alexander H; Fenley, Marcia O

    2011-05-10

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent - analytical solutions are available for this case, thus allowing rigorous

  5. (abstract) A Low-Cost Mission to 2060 Chiron Based on the Pluto Fast Flyby

    NASA Technical Reports Server (NTRS)

    Stern, S. A.; Salvo, C. G.; Wallace, R. A.; Weinstein, S. S.; Weissman, P. R.

    1994-01-01

    The Pluto Fast Flyby-based mission to Chiron described in this paper is a low-cost, scientifically rewarding, focused mission in the outer solar system. The proposed mission will make a flyby of 2060 Chiron, an active 'comet' with over 10^4 times the mass of Halley and an eccentric, Saturn-crossing orbit which ranges from 8.5 to 19 AU. This mission concept achieves the flyby 4.2 years after launch on a direct trajectory from Earth, is independent of Jupiter launch windows, and fits within Discovery cost guidelines. This mission offers the scientific opportunity to examine a class of object left unsampled by the trail-blazing Mariners, Pioneers, Voyagers, and missions to Halley. Spacecraft reconnaissance of Chiron addresses unique objectives relating to cometary science, other small bodies, the structure of quasi-bound atmospheres on modest-sized bodies, and the origin of primitive bodies and the giant planets. Owing to Chiron's large size (180 km), the mission concept is based on the opportunity to use the planned Pluto Flyby spare spacecraft and a Proton Expendable Launch Vehicle (ELV) (the Pluto spacecraft is being designed to be compatible with a Proton launch). Backup

  6. R-LODs: fast LOD-based ray tracing of massive models

    SciTech Connect

    Yoon, Sung-Eui; Lauterbach, Christian; Manocha, Dinesh

    2006-08-25

    We present a novel LOD (level-of-detail) algorithm to accelerate ray tracing of massive models. Our approach computes drastic simplifications of the model, and the LODs are well integrated with the kd-tree data structure. We introduce a simple and efficient LOD metric to bound the error for primary and secondary rays. The LOD representation has small runtime overhead, and our algorithm can be combined with ray coherence techniques and cache-coherent layouts to improve the performance. In practice, the use of LODs can alleviate aliasing artifacts and improve memory coherence. We implement our algorithm on both 32-bit and 64-bit machines and are able to achieve up to a 2.20 times improvement in frame rate when rendering models consisting of tens or hundreds of millions of triangles, with little loss in image quality.

  7. Clinical Documents: Attribute-Values Entity Representation, Context, Page Layout And Communication

    PubMed Central

    Lovis, Christian; Lamb, Alexander; Baud, Robert; Rassinoux, Anne-Marie; Fabry, Paul; Geissbühler, Antoine

    2003-01-01

    This paper presents how acquisition, storage and communication of clinical documents are implemented at the University Hospitals of Geneva. Careful attention has been given to user interfaces, supporting complex layouts, spell checking, and template management with automatic prefilling, in order to facilitate acquisition. A dual architecture has been developed for storage, using an attribute-value entity unified database together with a consolidated, patient-centered, layout-respectful file-based storage, providing both representation power and speed of access. This architecture allows great flexibility to store a continuum of data types, from simple typed values up to complex clinical reports. Finally, communication is entirely based on HTTP-XML internally, and an HL7 CDA V2 interface is currently being studied for external communication. Some of the problems encountered, mostly concerning the typology of documents and the ontology of clinical attributes, are discussed. PMID:14728202

  8. The relationship between contrast, resolution and detectability in accelerator-based fast neutron radiography

    SciTech Connect

    Ambrosi, R. M.; Watterson, J. I. W.

    1999-06-10

    Fast neutron radiography as a method for non-destructive testing is a fast-growing field of research. At the Schonland Research Center for Nuclear Sciences we have been engaged in the formulation of a model for the physics of image formation in fast neutron radiography (FNR). This involves examining all the various factors that affect image formation in FNR by experimental and Monte Carlo methods. One of the major problems in the development of a model for fast neutron radiography is the determination of the factors that affect image contrast and resolution. Monte Carlo methods offer an ideal tool for determining the origin of many of these factors. In previous work the focus of these methods has been the determination of the scattered neutron field in both a scintillator and a fast neutron radiography facility. As an extension of this work, MCNP has been used to evaluate the role that neutron scattering in a specimen plays in image detectability. Image processing of fast neutron radiographs is a necessary method of enhancing the detectability of features in an image, and MCNP has been used to determine the part it can play in indirectly improving image resolution and aiding image processing. The role noise plays in fast neutron radiography and its impact on image reconstruction has also been evaluated. All these factors aid in the development of a model describing the relationship between contrast, resolution and detectability.

  9. Fast Simulation of X-ray Projections of Spline-based Surfaces using an Append Buffer

    PubMed Central

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-01-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps, from tessellation of the splines to projection onto the detector and drawing, are implemented in OpenCL. For increased performance, we introduce a special append buffer that stores the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate: projections of size 640×480 can be generated within 254 ms, and reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task: even in the absence of noise, they result in errors of up to 9 HU on average, although the projection images appear correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically. Source code is available at http://conrad.stanford.edu/ PMID:22975431
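    The final step of such a pipeline, resolving sorted ray/surface intersections into per-material path lengths and evaluating a Beer-Lambert absorption model, can be sketched for a single ray (hypothetical helper `ray_absorption`; the paper's OpenCL implementation is not reproduced):

```python
import math

def ray_absorption(intersections, mu, i0=1.0):
    """Given sorted (depth, material) entry/exit events along one ray and
    per-material attenuation coefficients mu, compute the transmitted
    intensity via the Beer-Lambert law I = I0 * exp(-sum(mu_i * l_i)).
    Assumes non-overlapping material segments (entry followed by exit)."""
    events = sorted(intersections)
    total = 0.0
    for (d_in, mat), (d_out, _) in zip(events[0::2], events[1::2]):
        total += mu[mat] * (d_out - d_in)   # path length in this material
    return i0 * math.exp(-total)

# A ray crossing 2 units of water, then 1 unit of bone.
hits = [(0.0, "water"), (2.0, "water"), (2.5, "bone"), (3.5, "bone")]
mu = {"water": 0.2, "bone": 0.5}
pixel = ray_absorption(hits, mu)   # exp(-(0.2*2 + 0.5*1)) = exp(-0.9)
```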

  10. Antipodally Invariant Metrics for Fast Regression-Based Super-Resolution.

    PubMed

    Perez-Pellitero, Eduardo; Salvador, Jordi; Ruiz-Hidalgo, Javier; Rosenhahn, Bodo

    2016-06-01

    Dictionary-based super-resolution (SR) algorithms usually select dictionary atoms based on distance or similarity metrics. Although the optimal selection of nearest neighbors is of central importance for such methods, the impact of using proper metrics for SR has been overlooked in the literature, mainly due to the vast usage of the Euclidean distance. In this paper, we present a very fast regression-based algorithm which builds on densely populated anchored neighborhoods and sublinear search structures. We perform a study of the nature of the features commonly used for SR, observing that those features usually lie on the unit hypersphere, where every point has a diametrically opposite one, i.e., its antipode, with the same modulus and angle but the opposite direction. Although we validate the benefits of using antipodally invariant metrics, most of the binary splits use the Euclidean distance, which does not handle antipodes optimally. In order to benefit from both worlds, we propose a simple yet effective antipodally invariant transform that can be easily included in the Euclidean distance calculation. We modify the original spherical hashing algorithm with this metric in our antipodally invariant spherical hashing scheme, obtaining the same performance as a pure antipodally invariant metric. We round up our contributions with a novel feature transform that obtains a better coarse approximation of the input image thanks to iterative back-projection. Our method, which we name antipodally invariant SR, improves quality (peak signal-to-noise ratio) and is faster than any other state-of-the-art method.
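    The core notion of an antipodally invariant metric, a distance that treats a feature and its antipode as the same atom, can be written down directly (illustrative definition under that assumption, not the paper's exact hashing transform):

```python
import math

def euclid(x, y):
    """Plain Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def antipodal_dist(x, y):
    """Antipodally invariant distance: measure against both y and its
    antipode -y and keep the smaller value, so diametrically opposite
    points on the hypersphere are treated as identical."""
    neg_y = [-b for b in y]
    return min(euclid(x, y), euclid(x, neg_y))

x = [1.0, 0.0]
d_same = antipodal_dist(x, [-1.0, 0.0])   # antipode counts as identical
d_orth = antipodal_dist(x, [0.0, 1.0])    # orthogonal feature stays far
```

    With the plain Euclidean metric the antipode `[-1, 0]` would be the farthest point from `x`; under the antipodally invariant metric its distance is zero.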

  11. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
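    The KS-based change-point search that BSTKS accelerates can be illustrated with the slow brute-force baseline: split the series at every index and keep the split with the largest two-sample KS statistic (hypothetical helpers `ks_statistic` and `brute_force_cp`; the BST/Haar-wavelet speedup itself is not reproduced here):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for x in set(a) | set(b):
        fa = bisect.bisect_right(sa, x) / len(sa)
        fb = bisect.bisect_right(sb, x) / len(sb)
        d = max(d, abs(fa - fb))
    return d

def brute_force_cp(series, margin=2):
    """Exhaustive change-point scan: O(n) candidate splits, each paying
    the full KS cost -- the time-consuming behavior the abstract notes."""
    best_i, best_d = None, -1.0
    for i in range(margin, len(series) - margin):
        d = ks_statistic(series[:i], series[i:])
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

series = [0.0] * 20 + [5.0] * 20     # abrupt mean shift at index 20
cp, d = brute_force_cp(series)       # cp = 20, d = 1.0
```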

  12. Motion-based, high-yielding, and fast separation of different charged organics in water.

    PubMed

    Xuan, Mingjun; Lin, Xiankun; Shao, Jingxin; Dai, Luru; He, Qiang

    2015-01-12

    We report a self-propelled Janus silica micromotor as a motion-based analytical method for achieving fast target separation of polyelectrolyte microcapsules, enriching different charged organics with low molecular weights in water. The self-propelled Janus silica micromotor catalytically decomposes a hydrogen peroxide fuel and moves along the direction of the catalyst face at a speed of 126.3 μm s^-1. Biotin-functionalized Janus micromotors can specifically capture and rapidly transport streptavidin-modified polyelectrolyte multilayer capsules, which can effectively enrich and separate different charged organics in water. The interior of the polyelectrolyte multilayer microcapsules is filled with a strongly charged polyelectrolyte; a favorable Donnan equilibrium between the inner solution within the capsules and the bulk solution thus entraps oppositely charged organics in water. The integration of these self-propelled Janus silica micromotors and polyelectrolyte multilayer capsules into a lab-on-chip device that enables the separation and analysis of charged organics could be attractive for a diverse range of applications.

  13. Fast model-based restoration of noisy and undersampled spectral CT data

    NASA Astrophysics Data System (ADS)

    Rigie, David; La Riviere, Patrick J.

    2014-03-01

    In this work we propose a fast, model-based restoration scheme for noisy or undersampled spectral CT data and demonstrate its potential utility with two simulation studies. First, we show how one can denoise photon counting CT images, post-reconstruction, by using a spectrally averaged image formed from all detected photons as a high-SNR prior. Next, we consider a slow slew-rate kV switching scheme, where sparse sinograms are obtained at peak voltages of 80 and 140 kVp. We show how the missing views can be restored by using a spectrally averaged, composite sinogram containing all of the views as a fully sampled prior. We have chosen these examples to demonstrate the versatility of the proposed approach and because they have been discussed in the literature before [3, 6], but we hope to convey that it may be applicable to a fairly general class of spectral CT systems. Comparisons to several sparsity-exploiting, iterative reconstructions are provided for reference.
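    The high-SNR-prior idea admits a minimal quadratic sketch: per pixel, minimize (x - y)^2 + lam * (x - p)^2, where y is the noisy bin image and p the spectrally averaged prior, giving the closed form x = (y + lam * p) / (1 + lam). This toy stand-in (hypothetical helper `restore_with_prior`) is not the paper's model, but shows how a prior pulls noisy values toward the high-SNR image:

```python
def restore_with_prior(noisy, prior, lam=4.0):
    """Closed-form solution of the per-pixel quadratic
    (x - y)^2 + lam * (x - p)^2, i.e. x = (y + lam * p) / (1 + lam).
    Larger lam trusts the spectrally averaged prior more."""
    return [(y + lam * p) / (1.0 + lam) for y, p in zip(noisy, prior)]

noisy = [1.2, 0.7, 1.4]   # one noisy photon-count bin image (flattened)
prior = [1.0, 1.0, 1.0]   # spectrally averaged, high-SNR image
restored = restore_with_prior(noisy, prior)   # pulled toward the prior
```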

  14. Support vector machine based classification of fast Fourier transform spectroscopy of proteins

    NASA Astrophysics Data System (ADS)

    Lazarevic, Aleksandar; Pokrajac, Dragoljub; Marcano, Aristides; Melikechi, Noureddine

    2009-02-01

    Fast Fourier transform spectroscopy has proved to be a powerful method for the study of the secondary structure of proteins, since peak positions and their relative amplitudes are affected by the number of hydrogen bridges that sustain this secondary structure. However, to the best of our knowledge, the method has not yet been used for the identification of proteins within a complex matrix like a blood sample. The principal reason is the apparent similarity of protein infrared spectra, with actual differences usually masked by the solvent contribution and other interactions. In this paper, we propose a novel machine learning based method that uses protein spectra for classification and identification of such proteins within a given sample. The proposed method uses principal component analysis (PCA) to identify the most important linear combinations of the original spectral components and then employs a support vector machine (SVM) classification model applied to the identified combinations to categorize proteins into one of the given groups. Our experiments were performed on a set of four different proteins, namely: Bovine Serum Albumin, Leptin, Insulin-like Growth Factor 2 and Osteopontin. The proposed combination of principal component analysis and support vector machines exhibits excellent classification accuracy when identifying proteins using their infrared spectra.

  15. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-03-09

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of contour pixel, based on its local pattern. Then, it traces the next contour using the previous pixel's type. Therefore, it can classify the type of contour pixels as a straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms.
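    For orientation, a classic pixel-following scheme in the same family is Moore-neighbor contour tracing with backtracking, sketched below (the paper's pixel-type classification and compression steps are not reproduced; `trace_contour` is a hypothetical helper using a simple stop-at-start criterion, whereas Jacob's stopping criterion is more robust for thin shapes):

```python
# Clockwise Moore-neighborhood offsets (dr, dc): W, NW, N, NE, E, SE, S, SW.
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_contour(grid):
    """Trace the outer contour of the first object found in a binary
    image, given as a list of strings ('#' = object, '.' = background).
    Returns the clockwise list of contour pixel coordinates (row, col)."""
    rows, cols = len(grid), len(grid[0])
    fg = lambda r, c: 0 <= r < rows and 0 <= c < cols and grid[r][c] == "#"
    # Raster scan finds the start pixel; its west neighbor is background.
    start = next((r, c) for r in range(rows) for c in range(cols) if fg(r, c))
    contour, cur, back = [start], start, (start[0], start[1] - 1)
    while True:
        i = OFFSETS.index((back[0] - cur[0], back[1] - cur[1]))
        for j in range(1, 9):            # scan clockwise from the backtrack
            dr, dc = OFFSETS[(i + j) % 8]
            cand = (cur[0] + dr, cur[1] + dc)
            if fg(*cand):
                cur = cand               # next contour pixel found
                break
            back = cand                  # remember last background pixel
        else:
            return contour               # isolated single pixel
        if cur == start:
            return contour               # closed the loop
        contour.append(cur)

square = ["....", ".##.", ".##.", "...."]
boundary = trace_contour(square)   # [(1, 1), (1, 2), (2, 2), (2, 1)]
```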

  16. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    PubMed Central

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364

  17. A fast SPAD-based small animal imager for early-photon diffuse optical tomography.

    PubMed

    Mu, Ying; Niedre, Mark

    2014-01-01

    Photon scatter is the dominant light transport process in biological tissue and is well understood to degrade imaging performance in near-infrared diffuse optical tomography. Measurement of photons arriving at early times following a short laser pulse is considered an effective way to mitigate this limitation, i.e., by systematically selecting photons that have experienced fewer scattering events. Previously, we tested the performance of a single-photon avalanche diode (SPAD) in the measurement of early transmitted photons through diffusive media and showed that it outperformed photomultiplier tube (PMT) systems in similar configurations, principally due to its faster temporal response. In this paper, we extend this work and develop a fast SPAD-based time-resolved diffuse optical tomography system. As a first validation of the instrument, we scanned an optical phantom with multiple absorbing inclusions and measured full time-resolved data at 3240 scan points per axial slice. We performed image reconstruction with very early-arriving photon data and showed significant improvements compared to time-integrated data. Extension of this work to mice in vivo and measurement of time-resolved fluorescence data is the subject of ongoing research.

  18. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    NASA Astrophysics Data System (ADS)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user-friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
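    The underlying computation is simple to sketch: the metadynamics bias is a sum of deposited Gaussian hills, and the free energy estimate is its negative (up to an additive constant). A schematic 1-D version under that assumption (real HILLS files carry more columns and often multiple collective variables):

```python
import math

def bias_potential(x, hills):
    """Sum of deposited Gaussian hills at position x.
    Each hill is a (center, sigma, height) tuple."""
    return sum(h * math.exp(-((x - c) ** 2) / (2.0 * s * s))
               for c, s, h in hills)

def free_energy(x, hills):
    """Standard metadynamics estimate: the negative of the accumulated
    bias, up to an additive constant."""
    return -bias_potential(x, hills)

hills = [(0.0, 0.5, 1.0), (0.1, 0.5, 1.0)]   # two hills deposited near x = 0
# The bias is largest where hills were deposited, so the free-energy
# estimate has its minimum there.
well = free_energy(0.05, hills)
far = free_energy(2.0, hills)
```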

  19. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals.

  20. Fast and selective extraction of sulfonamides from honey based on magnetic molecularly imprinted polymer.

    PubMed

    Chen, Ligang; Zhang, Xiaopan; Sun, Lei; Xu, Yang; Zeng, Qinglei; Wang, Hui; Xu, Haoyan; Yu, Aimin; Zhang, Hanqi; Ding, Lan

    2009-11-11

    A fast and selective method was developed for the determination of sulfonamides (SAs) in honey based on a magnetic molecularly imprinted polymer. The extraction was carried out by blending and stirring the sample, extraction solvent and polymers. When the extraction was complete, the polymers, along with the captured analytes, were easily separated from the sample matrix by an external magnet. The analytes eluted from the polymers were determined by liquid chromatography-tandem mass spectrometry. Under the optimal conditions, the detection limits of the SAs are in the range of 1.5-4.3 ng g^-1. Intra- and interday relative standard deviations ranging from 3.7% to 7.9% and from 4.3% to 9.9%, respectively, were obtained. The proposed method was successfully applied to determine SAs including sulfadiazine, sulfamerazine, sulfamethoxydiazine, sulfamonomethoxine, sulfadimethoxine, sulfamethoxazole and sulfaquinoxaline in different honey samples. Recoveries of the SAs in these samples from 67.1% to 93.6% were obtained. PMID:19817457

  1. Development of a personnel fast-neutron dosimeter based on CR-39 detectors

    NASA Astrophysics Data System (ADS)

    Mutiullah; Durrani, S. A.

    1987-07-01

    An energy- and direction-independent fast neutron dosimeter based on electrochemically etched (ECE) CR-39 detectors is presented. We describe, first, our theoretical and experimental work to achieve a nearly flat detector response (in terms of energy) over the range 0.1 to 19 MeV for normally incident neutrons. Here, we have used CR-39 detectors with an optimized front radiator stack consisting of polymers with different hydrogenous contents. Such a detector assembly is, however, found to have a response which is strongly dependent upon the neutron angle of incidence. The paper then proceeds to describe a method developed by us to overcome this problem by attaching a detector assembly to each of the three adjacent sides of a Perspex support cube (of side ~2.5 cm). By aggregating (or averaging) the response of all three detectors it is found that these cubical assemblies yield a response that is virtually independent of the orientation of the cube with respect to the neutron incidence direction.

  2. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-01-01

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of contour pixel, based on its local pattern. Then, it traces the next contour using the previous pixel's type. Therefore, it can classify the type of contour pixels as a straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632

  3. Fast terahertz optoelectronic amplitude modulator based on plasmonic metamaterial antenna arrays and graphene

    NASA Astrophysics Data System (ADS)

    Jessop, David S.; Sol, Christian W. O.; Xiao, Long; Kindness, Stephen J.; Braeuninger-Weimer, Philipp; Lin, Hungyen; Griffiths, Jonathan P.; Ren, Yuan; Kamboj, Varun S.; Hofmann, Stephan; Zeitler, J. Axel; Beere, Harvey E.; Ritchie, David A.; Degl'Innocenti, Riccardo

    2016-02-01

    The growing interest in terahertz (THz) technologies in recent years has seen a wide range of demonstrated applications, spanning from security screening, non-destructive testing and gas sensing to biomedical imaging and communication. Communication with THz radiation offers the advantage of much higher bandwidths than currently available, in an unallocated spectrum. For this to be realized, optoelectronic components capable of manipulating THz radiation at high speeds and high signal-to-noise ratios must be developed. In this work we demonstrate a room-temperature, frequency-dependent optoelectronic amplitude modulator working at around 2 THz, which incorporates graphene as the tuning medium. The architecture of the modulator is an array of plasmonic dipole antennas surrounded by graphene. By electrostatically doping the graphene via a back-gate electrode, the reflection characteristics of the modulator are modified. The modulator is characterized electrically, to determine the graphene conductivity, and optically, by THz time-domain spectroscopy and with a single-mode 2 THz quantum cascade laser, to determine the optical modulation depth and cut-off frequency. A maximum optical modulation depth of ~30% is estimated and is found to be most (least) sensitive when the electrical modulation is centered at the point of maximum (minimum) differential resistivity of the graphene. A 3 dB cut-off frequency > 5 MHz, limited only by the area of graphene on the device, is reported. The results agree well with theoretical calculations and numerical simulations, and demonstrate the first steps towards ultra-fast, graphene-based THz optoelectronic devices.

  4. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors

    PubMed Central

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-01-01

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these operations are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern. Then, it traces the next contour pixel using the previous pixel’s type. Therefore, it can classify the type of contour pixels as straight line, inner corner, outer corner, and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the locally minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm with that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance than the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632
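
    The pixel-following idea can be illustrated with a generic Moore-neighborhood tracer (a minimal sketch, not the authors' type-classifying algorithm; the clockwise scan order and stopping rule are simplified assumptions):

```python
import numpy as np

def trace_contour(img):
    """Trace the outer contour of the first object in a binary image by
    pixel following over the 8-connected (Moore) neighborhood.  This is
    a generic tracer, not the paper's pixel-type-classifying variant."""
    # Clockwise neighbour offsets starting from "west": W NW N NE E SE S SW
    nbrs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    ys, xs = np.nonzero(img)
    start = (int(ys[0]), int(xs[0]))       # first foreground pixel, raster order
    contour, cur, search_from = [start], start, 0
    while True:
        for i in range(8):                 # scan neighbours clockwise
            d = (search_from + i) % 8
            ny, nx = cur[0] + nbrs[d][0], cur[1] + nbrs[d][1]
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and img[ny, nx]:
                search_from = (d + 6) % 8  # restart scan 90 degrees back from the move
                cur = (ny, nx)
                break
        else:
            return contour                 # isolated pixel: no foreground neighbour
        if cur == start:                   # closed the loop
            return contour
        contour.append(cur)
```

    On a 2x2 square of foreground pixels this walk visits the four pixels clockwise and stops when it returns to the start.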

  5. A novel small area fast block matching algorithm based on high-accuracy gyro in digital image stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhao, Yuejin; Yu, Fei; Zhu, Weiwen; Lang, Guanqing; Dong, Liquan

    2010-11-01

    This paper presents a novel fast block matching algorithm based on a high-accuracy gyro for stabilizing shaky images. The algorithm first acquires a motion vector from the gyro. It then determines the initial search position and classifies the image motion into three modes (small, medium, and large) using the gyro motion vector. Finally, a fast block matching algorithm is designed by improving four types of search templates (square, diamond, hexagon, octagon). Experimental results show that the algorithm is about 50% faster than common methods (such as NTSS, FSS, and DS) while maintaining the same accuracy.
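
    A gyro-seeded search of this kind can be sketched as a plain SAD block match whose search radius depends on the gyro magnitude (the mode thresholds and the square search window here are illustrative assumptions; the paper uses improved diamond/hexagon/octagon templates):

```python
import numpy as np

def block_match(ref, cur, tl, bs, gyro_mv):
    """SAD block matching seeded by a gyro motion vector.  tl is the
    block's top-left corner in `cur`, bs the block size.  The search
    radius shrinks when the gyro reports small motion (thresholds assumed)."""
    mag = np.hypot(*gyro_mv)
    radius = 2 if mag < 4 else (5 if mag < 12 else 9)    # small / medium / large mode
    y0, x0 = tl[0] + gyro_mv[0], tl[1] + gyro_mv[1]      # gyro-predicted start
    block = cur[tl[0]:tl[0] + bs, tl[1]:tl[1] + bs].astype(np.int64)
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue                                 # window leaves the frame
            sad = np.abs(ref[y:y + bs, x:x + bs].astype(np.int64) - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (y - tl[0], x - tl[1])
    return best   # estimated motion vector (dy, dx)
```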

  6. Fast-response humidity-sensing films based on methylene blue aggregates formed on nanoporous semiconductor films

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryota; Katoh, Ryuzi

    2016-05-01

    We prepared fast-response colorimetric humidity-sensing (vapochromic) films based on methylene blue adsorption onto nanoporous semiconductor (TiO2, Al2O3) films. Color changes caused by changes of humidity could be easily identified visually. A characteristic feature of the vapochromic films was their fast response to changes of humidity. We found that the response began to occur within 10 ms. The response was rapid because all the methylene blue molecules attached to the nanoporous semiconductor surface were directly exposed to the environment. We also deduced that the color changes were caused by structural changes of the methylene blue aggregates on the surface.

  7. Fast 2-D soft X-ray imaging device based on micro pattern gas detector

    NASA Astrophysics Data System (ADS)

    Pacella, D.; Bellazzini, R.; Brez, A.; Pizzicaroli, G.

    2003-09-01

    An innovative fast system for X-ray imaging has been developed at ENEA Frascati (Italy) to be used as a diagnostic of magnetically confined plasmas for thermonuclear fusion. It is based on a pinhole camera coupled to a Micro Pattern Gas Detector (MPGD) having a Gas Electron Multiplier (GEM) as amplifying stage. This detector (2.5 cm × 2.5 cm active area) is equipped with a 2-D read-out printed circuit board with 144 pixels (12 × 12), with an electronic channel for each pixel (charge conversion, shaping, discrimination and counting). Working in photon counting mode, in proportional regime, it is able to get X-ray images of the plasma in a selectable X-ray energy range, at very high photon fluxes (10^6 ph s^-1 mm^-2 over the whole detector) and high framing rates (up to 100 kHz). It has very high dynamic range, high (statistical) signal-to-noise ratio and large flexibility in the optical configurations (magnification and views on the plasma). The system was tested successfully on the Frascati Tokamak Upgrade (FTU), having central electron temperature of a few keV and density of 10^20 m^-3, during the summer of 2001, with a one-dimensional perpendicular view of the plasma. In collaboration with ENEA, the Johns Hopkins University (JHU) and the Princeton Plasma Physics Laboratory (PPPL), this system has been set up and calibrated in the X-ray energy range 2-8 keV and installed, with a two-dimensional tangential view, on the spherical tokamak NSTX at Princeton. Time-resolved X-ray images of the NSTX plasma core have been obtained. Fast acquisitions, performed at framing rates up to 50 kHz, allow the study of the plasma evolution and its magneto-hydrodynamic instabilities, while with slower sampling (a few kHz) the curvature of the magnetic surfaces can be measured. All these results reveal the good imaging properties of this device at high time resolution, despite the low number of pixels, and the effectiveness of the fine controlled energy discrimination.

  8. Nomenclature-based data retrieval without prior annotation: facilitating biomedical data integration with fast doublet matching.

    PubMed

    Berman, Jules J

    2005-01-01

    Assigning nomenclature codes to biomedical data is an arduous, expensive and error-prone task. Data records are coded to provide a common representation of contained concepts, allowing facile retrieval of records via a standard terminology. In the medical field, cancer registrars, nurses, pathologists, and private clinicians all understand the importance of annotating medical records with vocabularies that codify the names of diseases, procedures, billing categories, etc. Molecular biologists need codified medical records so that they can discover or validate relationships between experimental data and clinical data. This paper introduces a new approach to retrieving data records without prior coding. The approach achieves the same result as a search over pre-coded records. It retrieves all records that contain any terms that are synonymous with a user's query term. A recently described fast algorithm (the doublet method) permits quick iterative searches over every synonym for any term from any nomenclature occurring in a dataset of any size. As a demonstration, a 105+ Megabyte corpus of Pubmed abstracts was searched for medical terms. Query terms were matched against either of two vocabularies and expanded as an array of equivalent search items. A single search term may have over one hundred nomenclature synonyms, all of which were searched against the full database. Iterative searches of a list of concept-equivalent terms involve many more operations than a single search over pre-annotated concept codes. Nonetheless, the doublet method achieved fast query response times (0.05 seconds using SNOMED and 5 seconds using the Developmental Lineage Classification of Neoplasms, on a computer with a 2.89 GHz processor). Pre-annotated datasets lose their value when the chosen vocabulary is replaced by a different vocabulary or by a different version of the same vocabulary. The doublet method can employ any version of any vocabulary with no pre-annotation. In many
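
    The doublet idea can be sketched as follows: index every consecutive word pair of the corpus, then a multi-word term matches wherever its k-th doublet occurs k positions after the first (a minimal sketch of the approach, not Berman's implementation):

```python
def doublets(term):
    """Break a multi-word term into its consecutive word pairs."""
    words = term.lower().split()
    return list(zip(words, words[1:]))

def build_index(text):
    """Map every word-doublet in the corpus to the positions where it occurs."""
    words = text.lower().split()
    index = {}
    for pos, pair in enumerate(zip(words, words[1:])):
        index.setdefault(pair, []).append(pos)
    return index

def find_term(index, term):
    """A term matches at position p iff its k-th doublet occurs at p + k."""
    ds = doublets(term)
    if not ds:
        return []
    hits = set(index.get(ds[0], []))
    for k, d in enumerate(ds[1:], start=1):
        hits &= {p - k for p in index.get(d, [])}
    return sorted(hits)
```

    Because synonym expansion just reuses the same index, searching a hundred synonyms is a hundred cheap intersections rather than a hundred passes over the corpus.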

  9. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at the large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient-module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not yet been widely applied to large-scale data. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with the real data, and

  10. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy

    PubMed Central

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-01-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination. PMID:27231617

  11. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy.

    PubMed

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-05-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination. PMID:27231617

  13. Fast valve based on double-layer eddy-current repulsion for disruption mitigation in Experimental Advanced Superconducting Tokamak.

    PubMed

    Zhuang, H D; Zhang, X D

    2015-05-01

    A fast valve based on the double-layer eddy-current repulsion mechanism has been developed on Experimental Advanced Superconducting Tokamak (EAST). In addition to a double-layer eddy-current coil, a preload system was added to improve the security of the valve, whereby the valve opens more quickly and the open-valve time becomes shorter, making it much safer than before. In this contribution, testing platforms, open-valve characteristics, and throughput of the fast valve are discussed. Tests revealed that by choosing appropriate parameters the valve opened within 0.15 ms, and open-valve times were no longer than 2 ms. By adjusting working parameter values, the maximum number of particles injected during this open-valve time was estimated at 7 × 10^22. The fast valve will become a useful tool to further explore disruption mitigation experiments on EAST in 2015.

  14. Triple patterning lithography layout decomposition using end-cutting

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Roy, Subhendu; Gao, Jhih-Rong; Pan, David Z.

    2015-01-01

    Triple patterning lithography (TPL) is one of the most promising techniques in the 14-nm logic node and beyond. Conventional LELELE type TPL technology suffers from native conflict and overlapping problems. Recently, as an alternative process, TPL with end-cutting (LELE-EC) was proposed to overcome the limitations of LELELE manufacturing. In the LELE-EC process, the first two masks are LELE type double patterning, while the third mask is used to generate the end-cuts. Although the layout decomposition problem for LELELE has been well studied in the literature, only a few attempts have been made to address the LELE-EC layout decomposition problem. We propose a comprehensive study for LELE-EC layout decomposition. Layout graph and end-cut graph are constructed to extract all the geometrical relationships of both input layout and end-cut candidates. Based on these graphs, integer linear programming is formulated to minimize the conflict and the stitch numbers. The experimental results demonstrate the effectiveness of the proposed algorithms.
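
    The decomposition can be viewed as constrained 3-coloring of the layout graph. As a toy stand-in for the paper's integer linear program, an exhaustive search over mask assignments that weights conflicts above stitches (the weight and the brute-force search are illustrative assumptions; real decomposers use an ILP solver):

```python
from itertools import product

def decompose(n, conflicts, stitches):
    """Assign each of n layout features to one of three masks, minimizing
    conflict pairs that share a mask, then stitch pairs that are split.
    Tiny exhaustive stand-in for the paper's ILP formulation."""
    best, best_cost = None, float("inf")
    for colors in product(range(3), repeat=n):
        conflict_cost = sum(colors[a] == colors[b] for a, b in conflicts)
        stitch_cost = sum(colors[a] != colors[b] for a, b in stitches)
        cost = 100 * conflict_cost + stitch_cost   # conflicts weighted heavier
        if cost < best_cost:
            best_cost, best = cost, colors
    return best, best_cost
```

    A triangle of mutually conflicting features is exactly the case where double patterning fails but three masks succeed with zero cost.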

  15. A fast video clip retrieval algorithm based on VA-file

    NASA Astrophysics Data System (ADS)

    Liu, Fangjie; Dong, DaoGuo; Miao, Xiaoping; Xue, XiangYang

    2003-12-01

    Video clip retrieval is a significant research topic in content-based multimedia retrieval. Generally, video clip retrieval proceeds as follows: (1) segment a video clip into shots; (2) extract a key frame from each shot as its representative; (3) denote every key frame as a feature vector, so that a video clip can be denoted as a sequence of feature vectors; (4) retrieve matching clips by computing the similarity between the feature vector sequence of a query clip and that of any clip in the database. To carry out fast video clip retrieval, an index structure is indispensable. According to our literature survey, S2-tree [17] is the only index structure that has been applied to support video clip retrieval; it combines the characteristics of both the X-tree and the Suffix-tree and converts vector-sequence retrieval into string matching. But the S2-tree structure is not applicable if the feature vectors' dimension exceeds 20, because the X-tree itself cannot sustain similarity queries effectively beyond 20 dimensions. Furthermore, it cannot support flexible similarity definitions between two vector sequences. The VA-file represents vectors approximately by compressing the original data, and it maintains the original order when representing vectors in a sequence, which is a very valuable merit for vector sequence matching. In this paper, a new video clip similarity model as well as a video clip retrieval algorithm based on the VA-file are proposed. The experiments show that our algorithm dramatically shortens retrieval time compared to sequential scanning without an index structure.
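
    The filtering idea behind the VA-file can be sketched for a single nearest-neighbor query: scan the compact cell approximations first, and touch a full vector only when its cell's lower-bound distance could still win (the quantization parameters are illustrative, and the paper's sequence-matching layer is omitted):

```python
import numpy as np

def va_file_query(vectors, q, bits=2):
    """Nearest-neighbor search, VA-file style: quantize each dimension
    into 2**bits cells, compute a per-vector lower-bound distance from
    the query to its cell, and scan exact vectors in lower-bound order."""
    lo, hi = vectors.min(0), vectors.max(0)
    width = (hi - lo) / (2 ** bits) + 1e-12
    cells = np.clip(((vectors - lo) / width).astype(int), 0, 2 ** bits - 1)
    cell_lo = lo + cells * width                   # cell bounding boxes
    cell_hi = cell_lo + width
    # lower bound: distance from q to the nearest face of each cell
    gap = np.maximum(0.0, np.maximum(cell_lo - q, q - cell_hi))
    lb = np.sqrt((gap ** 2).sum(1))
    best_d, best_i = np.inf, -1
    for i in np.argsort(lb):                       # candidates in lb order
        if lb[i] > best_d:
            break                                  # no later cell can win
        d = np.linalg.norm(vectors[i] - q)
        if d < best_d:
            best_d, best_i = d, i
    return best_i
```

    The early break is the point of the structure: once the lower bound exceeds the best exact distance, the remaining full vectors never need to be read.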

  16. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component, which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details. PMID:25110395
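
    The illumination estimate described above can be sketched with NumPy alone (a simplification: the paper's cross-bilateral filtering step is omitted here, and the closing window size is an assumed parameter):

```python
import numpy as np

def _morph(a, k, op):
    """Windowed max (dilation) or min (erosion) over a (2k+1)x(2k+1) window."""
    p = np.pad(a, k, mode="edge")
    views = [p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
             for dy in range(2 * k + 1) for dx in range(2 * k + 1)]
    return op(np.stack(views), axis=0)

def enhance(img, k=3):
    """Sketch of the MIC pipeline: illumination = grey closing of the
    Max Intensity Channel (the paper additionally smooths it with a
    cross-bilateral filter); reflectance = image / illumination."""
    mic = img.max(axis=2)                                # max over R, G, B per pixel
    closed = _morph(_morph(mic, k, np.max), k, np.min)   # grey closing
    illum = np.maximum(closed, 1e-3)                     # avoid division by zero
    return np.clip(img / illum[..., None], 0.0, 1.0)
```

    A uniformly lit gray image has illumination equal to its own intensity, so the recovered reflectance is flat white, as the model predicts.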

  17. A combinatorial chemistry method for fast screening of perovskite-based NO oxidation catalyst.

    PubMed

    Yoon, Dal Young; Lim, Eunho; Kim, Young Jin; Cho, Byong K; Nam, In-Sik; Choung, Jin Woo; Yoo, Seungbeom

    2014-11-10

    A fast parallel screening method based on combinatorial chemistry (combichem) has been developed and applied in screening tests of perovskite-based oxide (PBO) catalysts for NO oxidation, to identify a promising PBO formulation for the oxidation of NO to NO2. This new method involves three consecutive steps: oxidation of NO to NO2 over a PBO catalyst, adsorption of NOx onto the PBO and K2O/Al2O3, and colorimetric assay of the NOx adsorbed thereon. The combichem experimental data have been used for determining the NO oxidation activity of PBO catalysts as well as three critical parameters: the adsorption efficiencies of K2O/Al2O3 for NO2 (α) and NO (β), and the time-averaged fraction of NO in the NOx feed stream (ξ). The results demonstrated that the amounts of NO2 produced over PBO catalysts by the combichem method under transient conditions correlate well with those from a conventional packed-bed reactor under steady-state conditions. Among the PBO formulations examined, La0.5Ag0.5MnO3 has been identified as the best chemical formulation for oxidation of NO to NO2 by the present combichem method, and this was also confirmed by the conventional packed-bed reactor tests. The superior efficiency of the combichem method for high-throughput catalyst screening validated in this study makes it particularly suitable for saving the time and resources required in developing a new PBO catalyst formulation whose chemical composition may have an enormous number of possible variations.

  18. A fast color image enhancement algorithm based on Max Intensity Channel

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component, which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details.

  19. Determinants of Fast Food Consumption among Iranian High School Students Based on Planned Behavior Theory

    PubMed Central

    Sharifirad, Gholamreza; Yarmohammadi, Parastoo; Azadbakht, Leila; Morowatisharifabad, Mohammad Ali; Hassanzadeh, Akbar

    2013-01-01

    Objective. This study was conducted to identify some factors (beliefs and norms) which are related to fast food consumption among high school students in Isfahan, Iran. We used the framework of the theory of planned behavior (TPB) to predict this behavior. Subjects & Methods. Cross-sectional data were available from high school students (n = 521) who were recruited by cluster randomized sampling. All of the students completed a questionnaire assessing variables of the standard TPB model, including attitude, subjective norms, and perceived behavior control (PBC), and the additional variables past behavior and actual behavior control (ABC). Results. The TPB variables explained 25.7% of the variance in intentions, with positive attitude as the strongest (β = 0.31, P < 0.001) and subjective norms as the weakest (β = 0.29, P < 0.001) determinant. Concurrently, intentions accounted for 6% of the variance in fast food consumption. Past behavior and ABC accounted for an additional 20.4% of the variance in fast food consumption. Conclusion. Overall, the present study suggests that the TPB model is useful in predicting beliefs and norms related to fast food consumption among adolescents. Subjective norms in the standard TPB model and past behavior in the extended TPB model (with the additional variables past behavior and actual behavior control) were the most powerful predictors of fast food consumption. Therefore, the TPB model may be a useful framework for planning intervention programs to reduce fast food consumption by students. PMID:23936635

  20. Terahertz-optical-asymmetric-demultiplexer (TOAD)-based arithmetic units for ultra-fast optical information processing

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.

    2010-04-01

    In this paper, designs of ultra-fast all-optical terahertz optical asymmetric demultiplexer (TOAD)-based arithmetic devices are reported. Using TOAD switches, adder/subtracter units are demonstrated. The high speed is achieved through the use of nonlinear optical materials and the nonbinary modified signed-digit (MSD) number representation. The proposed all-optical circuits are compared in terms of the number of TOAD switches, optical amplifiers, and wavelength converters.

  1. Layout optimization of DRAM cells using rigorous simulation model for NTD

    NASA Astrophysics Data System (ADS)

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and the edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography, such as source mask optimization (SMO) to find the optimal off-axis illumination and optical proximity correction (OPC) combined with model-based SRAF placement, is applied to print patterns on target. For 20 nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch and the highest susceptibility to defocus. The above challenges make layout correction a complex optimization task, demanding a layout optimizer that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features.
The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD at bottom characterization by

  2. Structator: fast index-based search for RNA sequence-structure patterns

    PubMed Central

    2011-01-01

    Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires matching sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time-efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows the exploitation of base pairing information for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular for patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several
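
    The inside-out matching strategy can be illustrated for a fixed stem-loop pattern: locate the loop first, then extend outwards checking Watson-Crick complementarity (a toy linear scan; the paper's affix-array index is what makes this sublinear in practice):

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def match_stem_loop(seq, loop, stem_len):
    """Find stem-loops: a fixed loop string flanked by stem_len complementary
    base pairs, matched inside out (loop first, then the pairing bases
    extended outwards in lockstep)."""
    hits = []
    start = seq.find(loop)
    while start != -1:
        i, j = start - 1, start + len(loop)   # positions just outside the loop
        ok = True
        for _ in range(stem_len):
            if i < 0 or j >= len(seq) or COMPLEMENT.get(seq[i]) != seq[j]:
                ok = False                    # pairing constraint violated: prune
                break
            i, j = i - 1, j + 1
        if ok:
            hits.append((i + 1, j))           # half-open matched region
        start = seq.find(loop, start + 1)
    return hits
```

    Each failed pairing check discards the candidate immediately, which is the same search-space pruning the bidirectional index performs wholesale.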

  3. DESIGN AND LAYOUT CONCEPTS FOR COMPACT, FACTORY-PRODUCED, TRANSPORTABLE, GENERATION IV REACTOR SYSTEMS

    SciTech Connect

    Mynatt Fred R.; Townsend, L.W.; Williamson, Martin; Williams, Wesley; Miller, Laurence W.; Khan, M. Khurram; McConn, Joe; Kadak, Andrew C.; Berte, Marc V.; Sawhney, Rapinder; Fife, Jacob; Sedler, Todd L.; Conway, Larry E.; Felde, Dave K.

    2003-11-12

    The purpose of this research project is to develop compact (100 to 400 MWe) Generation IV nuclear power plant design and layout concepts that maximize the benefits of factory-based fabrication and optimal packaging, transportation and siting. The reactor concepts selected were compact designs under development in the 2000 to 2001 period. This interdisciplinary project comprised three university-led nuclear engineering teams identified by reactor coolant type (water, gas, and liquid metal) and a fourth industrial engineering team. The reactors included a modular pebble-bed helium-cooled concept being developed at MIT, the IRIS water-cooled concept being developed by a team led by Westinghouse Electric Company, and a lead-bismuth-cooled concept developed by UT. In addition to the design and layout concepts, this report includes a section on heat exchanger manufacturing simulations and a section on construction and cost impacts of the proposed modular designs.

  4. Audio video based fast fixed-point independent vector analysis for multisource separation in a room environment

    NASA Astrophysics Data System (ADS)

    Liang, Yanfeng; Naqvi, Syed Mohsen; Chambers, Jonathon A.

    2012-12-01

    Fast fixed-point independent vector analysis (FastIVA) is an improved independent vector analysis (IVA) method, which can achieve faster and better separation performance than the original IVA. As an example IVA method, it is designed to solve the permutation problem in frequency-domain independent component analysis by retaining the higher-order statistical dependency between frequencies during learning. However, the performance of all IVA methods is limited due to the dimensionality of the parameter space commonly encountered in practical frequency-domain source separation problems and the spherical symmetry assumed in the source model. In this article, a particular permutation problem encountered in using the FastIVA algorithm is highlighted, namely the block permutation problem. Therefore a new audio-video based fast fixed-point independent vector analysis algorithm is proposed, which uses video information to provide a smart initialization for the optimization problem. The method can not only avoid the ill convergence resulting from the block permutation problem but also improve the separation performance even in noisy and highly reverberant environments. Different multisource datasets, including the real audio-video corpus AV16.3, are used to verify the proposed method. For the evaluation of the separation performance on real room recordings, a new pitch-based evaluation criterion is also proposed.

  5. A New Ticket-Based Authentication Mechanism for Fast Handover in Mesh Network

    PubMed Central

    Lai, Yan-Ming; Cheng, Pu-Jen; Lee, Cheng-Chi; Ku, Chia-Yi

    2016-01-01

    Due to the ever-growing worldwide popularity of mobile devices of various kinds, the demands on large-scale wireless network infrastructure development and enhancement have been rapidly swelling in recent years. A mobile device holder can get online at a wireless network access point, which covers a limited area. When the client leaves the access point, there will be a temporary disconnection until he/she enters the coverage of another access point. Even when the coverages of two neighboring access points overlap, there is still work to do to make the wireless connection continue smoothly. The action of one wireless network access point passing a client to another access point is referred to as the handover. During handover, for security concerns, the client and the new access point should perform mutual authentication before any Internet access service is practically gained/provided. If the handover protocol is inefficient, Internet service will be interrupted in some cases. In 2013, Li et al. proposed a fast handover authentication mechanism for wireless mesh network (WMN) based on tickets. Unfortunately, Li et al.’s work came with some weaknesses. For one thing, some sensitive information such as the time and date of expiration is sent in plaintext, which increases security risks. For another, Li et al.’s protocol includes the use of high-quality tamper-proof devices (TPDs), and this unreasonably high equipment requirement limits its applicability. In this paper, we shall propose a new efficient handover authentication mechanism. The new mechanism offers a higher level of security on a more scalable ground with the client’s privacy better preserved. The results of our performance analysis suggest that our new mechanism is superior to some similar mechanisms in terms of authentication delay. PMID:27171160
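
    The core idea of ticket-based handover, and the fix for the plaintext-expiry weakness noted above, can be sketched as follows. This is an illustrative sketch only, not Li et al.'s or the authors' protocol; the field names and the shared key are hypothetical, and the MAC simply binds the expiry time so tampering is detectable.

```python
import hashlib
import hmac
import json
import time

def issue_ticket(ts_key: bytes, client_id: str, lifetime_s: int = 3600) -> dict:
    """Trusted server issues a ticket binding the client to an expiry time.
    The expiry is covered by the MAC, so tampering is detectable."""
    body = {"client": client_id, "expires": int(time.time()) + lifetime_s}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(ts_key, payload, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_ticket(ts_key: bytes, ticket: dict) -> bool:
    """Access point (sharing ts_key) checks ticket integrity and freshness."""
    payload = json.dumps(ticket["body"], sort_keys=True).encode()
    tag = hmac.new(ts_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, ticket["tag"]) and \
        ticket["body"]["expires"] > int(time.time())

key = b"shared-secret-between-server-and-APs"   # hypothetical shared key
t = issue_ticket(key, "client-42")
assert verify_ticket(key, t)
t["body"]["expires"] += 10_000     # client tampers with the expiry field
assert not verify_ticket(key, t)   # the MAC check now fails
```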

  7. Family-Joining: A Fast Distance-Based Method for Constructing Generally Labeled Trees

    PubMed Central

    Kalaghatgi, Prabhav; Pfeifer, Nico; Lengauer, Thomas

    2016-01-01

    The widely used model for evolutionary relationships is a bifurcating tree with all taxa/observations placed at the leaves. This is not appropriate if the taxa have been densely sampled across evolutionary time and may be in a direct ancestral relationship, or if there is not enough information to fully resolve all the branching points in the evolutionary tree. In this article, we present a fast distance-based agglomeration method called family-joining (FJ) for constructing so-called generally labeled trees in which taxa may be placed at internal vertices and the tree may contain polytomies. FJ constructs such trees on the basis of pairwise distances and a distance threshold. We tested three methods for threshold selection, FJ-AIC, FJ-BIC, and FJ-CV, which minimize Akaike information criterion, Bayesian information criterion, and cross-validation error, respectively. When compared with related methods on simulated data, FJ-BIC was among the best at reconstructing the correct tree across a wide range of simulation scenarios. FJ-BIC was applied to HIV sequences sampled from individuals involved in a known transmission chain. The FJ-BIC tree was found to be compatible with almost all transmission events. On average, internal branches in the FJ-BIC tree have higher bootstrap support than branches in the leaf-labeled bifurcating tree constructed using RAxML. 36% and 25% of the internal branches in the FJ-BIC tree and RAxML tree, respectively, have bootstrap support greater than 70%. To the best of our knowledge the method presented here is the first attempt at modeling evolutionary relationships using generally labeled trees. PMID:27436007
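
    The threshold-based placement at the heart of FJ can be illustrated with a small sketch (not the authors' implementation): the branch length from a taxon to a join point is estimated with the standard three-point formula, and a taxon whose estimated branch length falls below the threshold is placed at the internal vertex itself rather than on its own leaf edge.

```python
def branch_length_to_join(d_ij: float, d_ik: float, d_jk: float) -> float:
    """Length of the edge from taxon i to the vertex joining i and j,
    estimated from pairwise distances via the three-point formula."""
    return (d_ij + d_ik - d_jk) / 2.0

def placement(branch_length: float, threshold: float) -> str:
    """Below-threshold branches collapse: the taxon sits at the internal vertex."""
    return "internal" if branch_length < threshold else "leaf"

# taxon i lies (almost) on the path between j and k: its edge length is ~0,
# so it is placed at the internal vertex (an ancestral placement)
b_i = branch_length_to_join(d_ij=0.10, d_ik=0.20, d_jk=0.30)
assert placement(b_i, threshold=0.01) == "internal"

# a well-separated taxon keeps its own leaf edge
b_c = branch_length_to_join(d_ij=0.10, d_ik=0.20, d_jk=0.12)  # 0.09
assert placement(b_c, threshold=0.01) == "leaf"
```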

  8. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation addresses a feedback system and enables safe ablation towards anatomical structures that usually would have high risk of damage. This study is based on combined working volumes of optical coherence tomography (OCT) and Er:YAG cutting laser. High level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems a cross-like incision is ablated with the Er:YAG laser and segmented with OCT in three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters including discrete and specific ablation rates as ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT controlled laser bone ablation with minimal process time.
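
    The iterative control cycle described above, applying corrective passes with progressively smaller ablation rates until the depth error is below the finest rate, can be sketched in one dimension. The function and the specific rates are hypothetical; the real system plans 2-D corrective laser paths from OCT measurements rather than tracking a single scalar depth.

```python
def ablate_to_depth(target_um, rates_um, tol_um):
    """Iteratively apply corrective passes, coarse rates first, until the
    remaining depth error is smaller than the finest ablation rate."""
    depth, passes = 0, 0
    for rate in sorted(rates_um, reverse=True):
        while target_um - depth >= rate:
            depth += rate            # one corrective laser path at this rate
            passes += 1
    if target_um - depth > tol_um:
        raise ValueError("tolerance finer than the smallest ablation rate")
    return depth, passes

# hypothetical parameter sets: per-pass ablation rates of 250, 50 and 10 um
depth, passes = ablate_to_depth(target_um=970, rates_um=[250, 50, 10], tol_um=10)
assert depth == 970                  # target depth reached within tolerance
assert passes == 9                   # 3 coarse + 4 medium + 2 fine passes
```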

  10. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    SciTech Connect

    Acharya, Naresh; Baone, Chaitanya; Veda, Santosh; Dai, Jing; Chaudhuri, Nilanjan; Leonardi, Bruno; Sanches-Gasca, Juan; Diao, Ruisheng; Wu, Di; Huang, Zhenyu; Zhang, Yu; Jin, Shuangshuang; Zheng, Bin; Chen, Yousu

    2014-12-31

    Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, support proactive real-time control, and improve grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time would become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance of single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, the expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed

  11. A pnCCD-based, fast direct single electron imaging camera for TEM and STEM

    NASA Astrophysics Data System (ADS)

    Ryll, H.; Simson, M.; Hartmann, R.; Holl, P.; Huth, M.; Ihle, S.; Kondo, Y.; Kotula, P.; Liebel, A.; Müller-Caspary, K.; Rosenauer, A.; Sagawa, R.; Schmidt, J.; Soltau, H.; Strüder, L.

    2016-04-01

    We report on a new camera that is based on a pnCCD sensor for applications in scanning transmission electron microscopy. Emerging new microscopy techniques demand improved detectors with regards to readout rate, sensitivity and radiation hardness, especially in scanning mode. The pnCCD is a 2D imaging sensor that meets these requirements. Its intrinsic radiation hardness permits direct detection of electrons. The pnCCD is read out at a rate of 1,150 frames per second with an image area of 264 x 264 pixels. In binning or windowing modes, the readout rate is increased almost linearly, for example to 4,000 frames per second at 4× binning (264 x 66 pixels). Single electrons with energies from 300 keV down to 5 keV can be distinguished due to the high sensitivity of the detector. Three applications in scanning transmission electron microscopy are highlighted to demonstrate that the pnCCD satisfies experimental requirements, especially fast recording of 2D images. In the first application, 65536 2D diffraction patterns were recorded in 70 s. STEM images corresponding to intensities of various diffraction peaks were reconstructed. For the second application, the microscope was operated in a Lorentz-like mode. Magnetic domains were imaged in an area of 256 x 256 sample points in less than 37 seconds for a total of 65536 images each with 264 x 132 pixels. Due to information provided by the two-dimensional images, not only the amplitude but also the direction of the magnetic field could be determined. In the third application, millisecond images of a semiconductor nanostructure were recorded to determine the lattice strain in the sample. A speed-up in measurement time by a factor of 200 could be achieved compared to a previously used camera system.
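
    As a back-of-envelope check of the "almost linear" claim (assuming readout time is dominated by the number of row transfers, which is an assumption, not a statement from the paper), binning rows by a factor k raises the frame rate by roughly the same factor:

```python
def binned_frame_rate(full_frame_fps, row_binning):
    """Idealized frame rate under k-fold row binning, assuming readout time
    is dominated by the number of row transfers."""
    return full_frame_fps * row_binning

# the abstract reports 1,150 fps full frame and ~4,000 fps at 4x binning;
# the ideal estimate is 4,600 fps, so fixed overheads make it only "almost" linear
assert binned_frame_rate(1150, 4) == 4600
```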

  12. Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.

    2003-01-01

    The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analysis methods neglected the important physics of steady loading for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves unsteady linearized Euler equations for calculating the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and

  13. A fast key generation method based on dynamic biometrics to secure wireless body sensor networks for p-health.

    PubMed

    Zhang, G H; Poon, Carmen C Y; Zhang, Y T

    2010-01-01

    Body sensor networks (BSNs) have emerged as a new technology for healthcare applications, but the security of communication in BSNs remains a formidable challenge yet to be resolved. The paper discusses the typical attacks faced by BSNs and proposes a fast biometric based approach to generate keys for ensuring confidentiality and authentication in BSN communications. The approach was tested on 900 segments of electrocardiogram. Each segment was 4 seconds long and used to generate a 128-bit key. The results of the study found that the entropy of 96% of the keys was above 0.95 and 99% of the Hamming distances calculated from any two keys were above 50 bits. Based on the randomness and distinctiveness of these keys, it is concluded that the fast biometric based approach has great potential to be used to secure communication in BSNs for health applications. PMID:21096428
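
    The two reported evaluation metrics, the bit entropy of each key and the pairwise Hamming distance between keys, can be computed as in this sketch. It is illustrative only; the keys below are toy bit strings, not ECG-derived keys.

```python
import math

def bit_entropy(key_bits: str) -> float:
    """Shannon entropy (bits per bit) of the 0/1 distribution in a key."""
    p1 = key_bits.count("1") / len(key_bits)
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))

def hamming(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length keys differ."""
    return sum(x != y for x, y in zip(a, b))

k1, k2 = "1011" * 32, "0100" * 32     # toy 128-bit keys
assert len(k1) == 128
assert hamming(k1, k2) == 128         # the two keys differ in every position
assert bit_entropy("10" * 64) == 1.0  # balanced bits give maximal entropy
```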

  15. Sub 10 ns fast switching and resistance control in lateral GeTe-based phase-change memory

    NASA Astrophysics Data System (ADS)

    Yin, You; Zhang, Yulong; Takehana, Yousuke; Kobayashi, Ryota; Zhang, Hui; Hosaka, Sumio

    2016-06-01

    In this study, we investigated the fast switching and resistance control in a lateral GeTe-based phase-change memory (PCM). The resistivity of GeTe as a function of annealing temperature showed that it changed by more than 6 orders of magnitude in a very narrow temperature range. X-ray diffraction patterns of GeTe films indicated that GeTe had only one crystal structure, that is, face-centered cubic. It was demonstrated that the lateral device with a top conducting layer had a good performance. The operation characteristics of the GeTe-based lateral PCM device showed that it could be operated even when sub-10-ns voltage pulses were applied, making it much faster than a Ge2Sb2Te5-based device. The device resistance was successfully controlled by applying a staircase-like pulse, which enables the device to be used for fast multilevel storage.

  16. The effect of design modifications to the typographical layout of the New York State elementary science learning standards on user preference and process time

    NASA Astrophysics Data System (ADS)

    Arnold, Jeffery E.

    The purpose of this study was to determine the effect of four different design layouts of the New York State elementary science learning standards on user processing time and preference. Three newly developed layouts contained the same information as the standards core curriculum. In this study, the layout of the core guide is referred to as Book. The layouts of the new documents are referred to as Chart, Map, and Tabloid based on the format used to convey content hierarchy information. Most notably, all the new layouts feature larger page sizes, color, page tabs, and an icon based navigation system (IBNS). A convenience sample of 48 New York State educators representing three educator types (16 pre-service teachers, 16 in-service teachers, and 16 administrators) participated in the study. After completing timed tasks accurately, participants scored each layout based on preference. Educator type and layout were the independent variables, and process time and user preference were the dependent variables. A two-factor experimental design with Educator Type as the between variable and with repeated measures on Layout, the within variable, showed a significant difference in process time for Educator Type and Layout. The main effect for Educator Type (F(2, 45) = 8.03, p < .001) was significant with an observed power of .94, and an effect size of .26. The pair-wise comparisons for process time showed that pre-service teachers (p = .02) and administrators (p = .009) completed the assigned tasks more quickly when compared to in-service teachers. The main effect for Layout (F(3, 135) = 4.47, p = .01) was also significant with an observed power of .80, and an effect size of .09. Pair-wise comparisons showed that the newly developed Chart (p = .019) and Map (p = .032) layouts reduced overall process time when compared to the existing state learning standards (Book). The Layout x Educator Type interaction was not significant.
The same two-factor experimental design on preference

  17. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; Son, Seung Woo

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  18. Shape determination and placement algorithms for hierarchical integrated circuit layout

    NASA Astrophysics Data System (ADS)

    Slutz, E. A.

    Algorithms for the automatic layout of integrated circuits are presented. The algorithms use a hierarchical decomposition of the circuit structure. Since this reduces the complexity of the design, it aids the designer as well as making the automated approach to layout possible. The layout method consists of two phases: a top-down phase during which the shapes of the components at each level are determined, followed by a bottom-up phase where a final placement and routing for each level is computed. The data structure used to model the chip surface is central to the algorithms. This data structure is presented along with the alternative structures. Four basic operations of adding components, deleting components, sizing, and building the structure for a given placement are described. A file format for capturing integrated circuit design information is also described.

  19. Comprehensive physics-based compact model for fast p-i-n diode using MATLAB and Simulink

    NASA Astrophysics Data System (ADS)

    Xue, Peng; Fu, Guicui; Zhang, Dong

    2016-07-01

    In this study, a physics-based model for the fast p-i-n diode is proposed. The model is based on the 1-D Fourier-based solution of the ambipolar diffusion equation (ADE) implemented in MATLAB and Simulink. The physical characteristics of fast diode design concepts such as local lifetime control (LLC), emitter control (EMCON) and deep field stop are taken into account. Based on these fast diode design concepts, the ADE is solved for all injection levels instead of high-level injection only, as usually done. The variation of high-level lifetime due to local lifetime control is also included in the solution. With the deep field stop layer taken into consideration, the depletion behavior in the N-base during reverse recovery is redescribed. Some physical effects such as avalanche generation and carrier recombination in the depletion region are also taken into account. To be self-contained, a parameter extraction method is proposed to extract all the parameters of the model. In the end, the static and reverse recovery experiments for a commercial EMCON diode and a LLC diode are used to validate the proposed model. The simulation results are compared with experiment results and good agreement is obtained.

  20. Study of the EAST Fast Control Power Supply Based on Carrier Phase-Shift PWM

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Tang, Ke; Gao, Ge; Fu, Peng; Huang, Haihong; Dong, Lin

    2013-09-01

    EAST (experimental advanced superconducting tokamak) fast control power supply is a high-capacity single-phase AC/DC/AC inverter power supply, which traces the displacement signal of plasma, and excites coils in a vacuum vessel to produce a magnetic field that realizes plasma stabilization. To meet the requirements of a large current and fast response, the multiple structure of the carrier phase-shift three-level inverter is presented, which operates multiple inverters in parallel, raises the equivalent switching frequency of the inverters and improves the performance of output waves. In this work the design scheme is analyzed, and the output harmonic characteristic of parallel inverters is studied. The simulation and experimental results confirm that the scheme and control strategy are valid. The power supply system can supply a large current, and performs well in terms of harmonic content as well as fast response.
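
    The benefit of carrier phase-shifting can be stated as simple arithmetic: with N parallel inverters whose triangular carriers are offset by 1/N of the carrier period, the first uncancelled switching harmonic in the summed output appears at N times the per-inverter carrier frequency. A minimal sketch, with hypothetical frequencies:

```python
def equivalent_switching_frequency(f_carrier_hz, n_inverters):
    """First uncancelled switching harmonic of N interleaved inverters whose
    carriers are phase-shifted by 1/N of the carrier period."""
    return f_carrier_hz * n_inverters

# hypothetical numbers: four parallel inverters switching at 2 kHz each
# behave, at the summed output, like a single 8 kHz switching stage
assert equivalent_switching_frequency(2_000, 4) == 8_000
```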

  1. BEAM DELIVERY LAYOUT FOR THE NEXT LINEAR COLLIDER

    SciTech Connect

    Seryi, A

    2004-07-13

    This paper presents the latest design and layout of the NLC Beam Delivery System (BDS) for the first and second interaction region (IR). This includes the beam switchyard, skew correction and emittance diagnostics section, the collimation system integrated with the final focus, the primary and post linac tune-up beam dumps, and the arcs of the second interaction region beamline. The layout and optics are optimized to deliver design luminosity in the entire energy range from 90 GeV to 1.3 TeV CM, with the first IR BDS also having the capability of being extended to multi-TeV.

  2. A novel methodology for triple/multiple-patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Ghaida, Rani S.; Agarwal, Kanak B.; Liebmann, Lars W.; Nassif, Sani R.; Gupta, Puneet

    2012-03-01

    Double patterning (DP) in a litho-etch-litho-etch (LELE) process is an attractive technique to scale the K1 factor below 0.25. For dense bidirectional layers such as the first metal layer (M1), however, density scaling with LELE suffers from poor tip-to-tip (T2T) and tip-to-side (T2S) spacing. As a result, triple-patterning (TP) in a LELELE process has emerged as a strong alternative. Because of the use of a third exposure/etch, LELELE can achieve good T2T and T2S scaling as well as improved pitch scaling over LELE in case further scaling is needed. TP layout decomposition, a.k.a. TP coloring, is much more challenging than DP layout decomposition. One of the biggest complexities of TP decomposition is that a stitch can be between different two-mask combinations (i.e. first/second, first/third, second/third) and, consequently, stitches are color-dependent and candidate stitch locations can be determined only during/after coloring. In this paper, we offer a novel methodology for TP layout decomposition. Rather than simplifying the TP stitching problem by using DP candidate stitches only (as in previous works), the methodology leverages TP stitching capability by considering additional candidate stitch locations to give coloring higher flexibility to resolve decomposition conflicts. To deal with TP coloring complexity, the methodology employs multiple DP coloring steps, which leverages existing infrastructure developed for DP layout decomposition. The method was used to decompose bidirectional M1 and M2 layouts at 45nm, 32nm, 22nm, and 14nm nodes. For reasonably dense layouts, the method achieves coloring solutions with no conflicts (or a reasonable number of conflicts solvable with manual legalization). For very dense and irregular M1 layouts, however, the method was unable to reach a conflict-free solution and a large number of conflicts was observed. Hence, layout simplifications for the M1 layer may be unavoidable to enable TP for the M1 layer. 
Although we apply
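
    Setting stitches aside, the core of TP layout decomposition is 3-coloring of a conflict graph: features are vertices, an edge joins two features closer than the same-mask spacing limit, and each color is one of the three exposures. A minimal greedy sketch, illustrative only; production decomposers additionally handle stitch insertion and conflict reporting:

```python
def three_color(adjacency):
    """Greedy 3-coloring of a conflict graph. Returns a vertex -> color map,
    or None when a vertex runs out of colors (a decomposition conflict that
    would need a stitch or manual legalization)."""
    colors = {}
    for v in sorted(adjacency):
        used = {colors[u] for u in adjacency[v] if u in colors}
        free = [c for c in range(3) if c not in used]
        if not free:
            return None
        colors[v] = free[0]
    return colors

# a ring of four mutually conflicting features: decomposable with 3 masks
ring = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
coloring = three_color(ring)
assert coloring is not None
assert all(coloring[u] != coloring[v] for u in ring for v in ring[u])

# four pairwise-conflicting features (K4): no 3-coloring exists
k4 = {v: [u for u in "abcd" if u != v] for v in "abcd"}
assert three_color(k4) is None
```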

  3. Fast neural network surrogates for very high dimensional physics-based models in computational oceanography.

    PubMed

    van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M

    2007-05-01

    We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near ocean regions. The circulation model has O(10^7) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.

  4. Fast restoration approach for motion blurred image based on deconvolution under the blurring paths

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Song, Jie; Hua, Xia

    2015-12-01

    For real-time motion deblurring, it is of utmost importance to achieve higher processing speed with about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that removes motion blur by rotating the blurred image along the blurring path. The computational time is thus reduced sharply by using the one-dimensional Fast Fourier Transform in a one-dimensional Richardson-Lucy method. In order to obtain accurate transformation results, an interpolation method is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective in reducing motion blur along the blur paths.
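
    The underlying 1-D Richardson-Lucy iteration can be sketched as follows. The paper's speed-up comes from applying one-dimensional FFTs along the blur path after rotating the image; this sketch uses direct 1-D convolution on a toy signal for clarity, not the authors' FFT implementation.

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iterations=200):
    """1-D Richardson-Lucy deconvolution. The mirrored PSF implements the
    correlation step of the standard multiplicative update."""
    est = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # guard against divide-by-zero
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

# toy example: a unit spike blurred by a 3-tap kernel
psf = np.array([0.25, 0.5, 0.25])
signal = np.zeros(64)
signal[32] = 1.0
blurred = np.convolve(signal, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)
assert restored.argmax() == 32       # the spike is recovered at its position
assert restored[32] > blurred[32]    # and sharpened relative to the blur
```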

  6. A photodiode-based neutral particle bolometer for characterizing charge-exchanged fast-ion behavior

    SciTech Connect

    Clary, R.; Smirnov, A.; Dettrick, S.; Knapp, K.; Korepanov, S.; Ruskov, E.; Heidbrink, W. W.; Zhu, Y.

    2012-10-15

    A neutral particle bolometer (NPB) has been designed and implemented on Tri Alpha Energy's C-2 device in order to spatially and temporally resolve the charge-exchange losses of fast-ion populations originating from neutral beam injection into field-reversed configuration plasmas. This instrument employs a silicon photodiode as the detection device with an integrated tungsten filter coating to reduce sensitivity to light radiation. Here we discuss the technical aspects and calibration of the NPB, and report typical NPB measurement results of wall recycling effects on fast-ion losses.

  7. A photodiode-based neutral particle bolometer for characterizing charge-exchanged fast-ion behaviora)

    NASA Astrophysics Data System (ADS)

    Clary, R.; Smirnov, A.; Dettrick, S.; Knapp, K.; Korepanov, S.; Ruskov, E.; Heidbrink, W. W.; Zhu, Y.

    2012-10-01

    A neutral particle bolometer (NPB) has been designed and implemented on Tri Alpha Energy's C-2 device in order to spatially and temporally resolve the charge-exchange losses of fast-ion populations originating from neutral beam injection into field-reversed configuration plasmas. This instrument employs a silicon photodiode as the detection device with an integrated tungsten filter coating to reduce sensitivity to light radiation. Here we discuss the technical aspects and calibration of the NPB, and report typical NPB measurement results of wall recycling effects on fast-ion losses.

  8. A photodiode-based neutral particle bolometer for characterizing charge-exchanged fast-ion behavior.

    PubMed

    Clary, R; Smirnov, A; Dettrick, S; Knapp, K; Korepanov, S; Ruskov, E; Heidbrink, W W; Zhu, Y

    2012-10-01

    A neutral particle bolometer (NPB) has been designed and implemented on Tri Alpha Energy's C-2 device in order to spatially and temporally resolve the charge-exchange losses of fast-ion populations originating from neutral beam injection into field-reversed configuration plasmas. This instrument employs a silicon photodiode as the detection device with an integrated tungsten filter coating to reduce sensitivity to light radiation. Here we discuss the technical aspects and calibration of the NPB, and report typical NPB measurement results of wall recycling effects on fast-ion losses.

  9. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  10. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
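
The TG-43 quantities validated above combine into a simple dose-rate formula. As a hedged sketch of the point-source form of that formalism (the dose-rate constant and radial dose function samples below are illustrative placeholders, not SelectSeed registry data):

```python
import numpy as np

# Point-source TG-43 formalism: D(r) = Sk * Lambda * (r0/r)^2 * g(r), r0 = 1 cm.
Lambda = 0.965                                          # dose-rate constant (hypothetical)
r_grid = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])       # radii, cm
g_grid = np.array([1.05, 1.00, 0.85, 0.70, 0.56, 0.45]) # radial dose function (hypothetical)

def dose_rate_point(Sk, r):
    """Dose rate at radius r (cm) for air-kerma strength Sk, point-source TG-43."""
    g = np.interp(r, r_grid, g_grid)   # interpolate the tabulated radial dose function
    return Sk * Lambda * (1.0 / r) ** 2 * g

rate_1cm = dose_rate_point(Sk=0.5, r=1.0)   # at the 1 cm reference radius, g(1) = 1
```

A Monte Carlo code such as bGPUMCD produces g(r) and the anisotropy function from simulated transport; the table lookup above is the cheap evaluation step that clinical planning systems then apply per seed.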

  11. Time cycle analysis and simulation of material flow in MOX process layout

    SciTech Connect

    Chakraborty, S.; Saraswat, A.; Danny, K.M.; Somayajulu, P.S.; Kumar, A.

    2013-07-01

    The (U,Pu)O₂ MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO₂. The presence of a high percentage of reprocessed PuO₂ necessitates the design of an optimized fuel fabrication process line that addresses both production needs and regulatory norms regarding radiological safety criteria. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of a software tool that simulates material movement through the optimized process layout. Various material processing schemes have been devised, and their validity is tested with the software: schemes in which production batches meet at any glove box location are considered invalid, whereas a valid scheme ensures adequate spacing between production batches while still meeting the production target. The software can be further improved by accurately calculating material movement time through the glove box train; one important factor is accounting for material handling time with automation systems in place.
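
The validity rule described above (no two production batches may ever occupy the same glove box at the same time) can be checked mechanically. A minimal sketch, with illustrative stage durations and start offsets rather than plant data:

```python
# Each batch passes through the glove boxes in order, spending a fixed
# number of hours at each stage (illustrative values).
stage_hours = [4, 2, 6, 3]

def stage_intervals(start):
    """Occupancy interval (begin, end) of one batch in every glove box."""
    t, intervals = start, []
    for d in stage_hours:
        intervals.append((t, t + d))
        t += d
    return intervals

def schedule_is_valid(starts):
    """A schedule is valid if no two batches overlap in any glove box."""
    per_batch = [stage_intervals(s) for s in starts]
    for stage in range(len(stage_hours)):
        spans = sorted(b[stage] for b in per_batch)
        for (a0, a1), (b0, b1) in zip(spans, spans[1:]):
            if b0 < a1:          # next batch enters before the previous leaves
                return False
    return True

ok = schedule_is_valid([0, 6, 12])    # batches spaced by the longest stage
clash = schedule_is_valid([0, 3, 6])  # spacing shorter than the longest stage
```

The spacing required between batch starts is governed by the longest stage (6 h here), which is exactly the "unbalanced time cycle" problem the abstract describes.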

  12. TreePlus: interactive exploration of networks with enhanced tree layouts.

    PubMed

    Lee, Bongshin; Parr, Cynthia S; Plaisant, Catherine; Bederson, Benjamin B; Veksler, Vladislav D; Gray, Wayne D; Kotfila, Christopher

    2006-01-01

    Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive Visual Analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "Plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus. PMID:17073365
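
The "plant a seed and watch it grow" interaction can be grounded in a standard graph operation: extracting a breadth-first spanning tree rooted at the seed node, so a dense graph can be browsed with a readable tree layout. A minimal sketch on an illustrative toy graph (TreePlus additionally visualizes the non-tree edges this extraction hides):

```python
from collections import deque

# Toy undirected graph as an adjacency list (illustrative only).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B"],
    "E": ["C"],
}

def expand_from_seed(graph, seed):
    """Return parent pointers of a BFS tree rooted at `seed`."""
    parent, queue = {seed: None}, deque([seed])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in parent:   # first visit wins: this edge joins the tree
                parent[nbr] = node
                queue.append(nbr)
    return parent

tree = expand_from_seed(graph, "A")
```

Edges such as B-C that are skipped here are the "missing graph structure" that TreePlus reveals through interaction rather than through layout.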

  13. TreePlus: interactive exploration of networks with enhanced tree layouts.

    PubMed

    Lee, Bongshin; Parr, Cynthia S; Plaisant, Catherine; Bederson, Benjamin B; Veksler, Vladislav D; Gray, Wayne D; Kotfila, Christopher

    2006-01-01

    Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive Visual Analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "Plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus.

  14. The constraints satisfaction problem approach in the design of an architectural functional layout

    NASA Astrophysics Data System (ADS)

    Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko

    2011-09-01

    A design support system with a new strategy for finding the optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice, urban or graphic design-wherever allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples-a single story residential building with two apartments each having two rooms-the results resemble realistic functional layouts. One example of a practical size problem of a layout of three apartments with a total of 20 rooms is demonstrated, where the generated solution can be used as a base for a realistic architectural blueprint. The discretization of design space is discussed, followed by application of a backtrack search algorithm used for generating a set of potentially 'good' room configurations. Next the solutions are classified by a machine learning method (FFN) as 'proper' or 'improper' according to the internal communication criteria. Examples of interactive ranking of the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal-the criteria, parameters and weights can be individually defined by a user and the search algorithm can be adjusted to a specific problem.
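
The backtrack search over a discretized design space can be sketched in miniature: place rectangular rooms on a small grid so that no two overlap and all stay in bounds, backtracking when a placement leaves no options. Room sizes and the 4x4 grid are illustrative; the real system layers communication constraints and multi-objective ranking on top of this generation step:

```python
# (width, height) of each room to place, in order (illustrative).
GRID_W, GRID_H = 4, 4
rooms = [(2, 2), (2, 1), (1, 2)]

def cells(x, y, w, h):
    """Grid cells covered by a room with its corner at (x, y)."""
    return {(x + i, y + j) for i in range(w) for j in range(h)}

def place(idx, occupied, layout):
    if idx == len(rooms):                 # all rooms placed: a valid layout
        return layout
    w, h = rooms[idx]
    for x in range(GRID_W - w + 1):       # in-bounds positions only
        for y in range(GRID_H - h + 1):
            c = cells(x, y, w, h)
            if not (c & occupied):        # constraint: no overlap
                result = place(idx + 1, occupied | c, layout + [(x, y)])
                if result:
                    return result
    return None                           # dead end: backtrack

solution = place(0, set(), [])            # corner positions of each room
```

Enumerating all solutions instead of returning the first one yields the candidate set that the paper then classifies and ranks against multiple criteria.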

  15. A novel Fast Gas Chromatography based technique for higher time resolution measurements of speciated monoterpenes in air

    NASA Astrophysics Data System (ADS)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2013-12-01

    Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C10-C15 BVOC composition of single plant emissions may be characterised within a ~ 14 min analysis time. Moreover, in situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an ~ 11 min chromatographic separation time (increasing to ~ 19 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). This corresponds to a two- to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC linalool in ambient air. During this field deployment within a suburban forest ~ 30 km west of central Tokyo, Japan, the…

  16. Fast set-based association analysis using summary data from GWAS identifies novel gene loci for human complex traits

    PubMed Central

    Bakshi, Andrew; Zhu, Zhihong; Vinkhuyzen, Anna A. E.; Hill, W. David; McRae, Allan F.; Visscher, Peter M.; Yang, Jian

    2016-01-01

    We propose a method (fastBAT) that performs a fast set-based association analysis for human complex traits using summary-level data from genome-wide association studies (GWAS) and linkage disequilibrium (LD) data from a reference sample with individual-level genotypes. We demonstrate using simulations and analyses of real datasets that fastBAT is more accurate and orders of magnitude faster than the prevailing methods. Using fastBAT, we analyze summary data from the latest meta-analyses of GWAS on 150,064–339,224 individuals for height, body mass index (BMI), and schizophrenia. We identify 6 novel gene loci for height, 2 for BMI, and 3 for schizophrenia at P_fastBAT < 5 × 10⁻⁸. The gain of power is due to multiple small independent association signals at these loci (e.g. the THRB and FOXP1 loci for schizophrenia). The method is general and can be applied to GWAS data for all complex traits and diseases in humans and to such data in other species. PMID:27604177

  17. Fast set-based association analysis using summary data from GWAS identifies novel gene loci for human complex traits.

    PubMed

    Bakshi, Andrew; Zhu, Zhihong; Vinkhuyzen, Anna A E; Hill, W David; McRae, Allan F; Visscher, Peter M; Yang, Jian

    2016-01-01

    We propose a method (fastBAT) that performs a fast set-based association analysis for human complex traits using summary-level data from genome-wide association studies (GWAS) and linkage disequilibrium (LD) data from a reference sample with individual-level genotypes. We demonstrate using simulations and analyses of real datasets that fastBAT is more accurate and orders of magnitude faster than the prevailing methods. Using fastBAT, we analyze summary data from the latest meta-analyses of GWAS on 150,064-339,224 individuals for height, body mass index (BMI), and schizophrenia. We identify 6 novel gene loci for height, 2 for BMI, and 3 for schizophrenia at P_fastBAT < 5 × 10⁻⁸. The gain of power is due to multiple small independent association signals at these loci (e.g. the THRB and FOXP1 loci for schizophrenia). The method is general and can be applied to GWAS data for all complex traits and diseases in humans and to such data in other species. PMID:27604177
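
A set-based test in this spirit sums per-SNP chi-squared statistics over a gene and evaluates that sum against a null distribution that accounts for LD between the SNPs. The sketch below is a simplified stand-in, not the fastBAT algorithm itself: it calibrates the null by Monte Carlo simulation from the LD correlation matrix (fastBAT uses a fast analytical approximation), with an illustrative two-SNP LD matrix:

```python
import numpy as np

def set_test_pvalue(z_scores, ld, n_sims=20000, seed=0):
    """Monte Carlo p-value for the sum-of-chi-squared set statistic under LD."""
    z = np.asarray(z_scores, float)
    stat = float(np.sum(z ** 2))                 # gene-level statistic
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(ld)
    null_z = rng.standard_normal((n_sims, len(z))) @ L.T   # correlated null z-scores
    null_stat = np.sum(null_z ** 2, axis=1)
    return stat, float(np.mean(null_stat >= stat))

ld = np.array([[1.0, 0.6], [0.6, 1.0]])              # toy 2-SNP LD matrix
stat, p_null = set_test_pvalue([0.5, -0.3], ld)      # null-like z-scores
_, p_assoc = set_test_pvalue([4.0, 3.5], ld)         # strong association signal
```

This is why only GWAS summary statistics plus a reference LD panel are needed: the individual-level genotypes enter only through the correlation matrix.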

  18. Microcomputer Page Layout (MicroPLA) Routine for Text-Graphic Materials: User's Guide. Technical Report 162.

    ERIC Educational Resources Information Center

    Galyon, Rosalind; And Others

    Based on an earlier user's guide to a minicomputer page layout system called PLA (Terrell, 1982), this guide is designed for use in the development and production of text-graphic materials for training relatively unskilled technicians to perform complex procedures. A microcomputer version of PLA, MicroPLA uses the Commodore 8032 microcomputer to…

  19. Fast 3D reconstruction of tool wear based on monocular vision and multi-color structured light illuminator

    NASA Astrophysics Data System (ADS)

    Wang, Zhongren; Li, Bo; Zhou, Yuebin

    2014-11-01

    Fast 3D reconstruction of tool wear from 2D images has great importance to 3D measuring and objective evaluating tool wear condition, determining accurate tool change and insuring machined part's quality. Extracting 3D information of tool wear zone based on monocular multi-color structured light can realize fast recovery of surface topography of tool wear, which overcomes the problems of traditional methods such as solution diversity and slow convergence when using SFS method and stereo match when using 3D reconstruction from multiple images. In this paper, a kind of new multi-color structured light illuminator was put forward. An information mapping model was established among illuminator's structure parameters, surface morphology and color images. The mathematical model to reconstruct 3D morphology based on monocular multi-color structured light was presented. Experimental results show that this method is effective and efficient to reconstruct the surface morphology of tool wear zone.

  20. Comparison of eight logger layouts for monitoring animal-level temperature and humidity during commercial feeder cattle transport.

    PubMed

    Goldhawk, C; Crowe, T; González, L A; Janzen, E; Kastelic, J; Pajor, E; Schwartzkopf-Genswein, K

    2014-09-01

    Measuring animal-level conditions during transit provides information regarding the true risk of environmental challenges to cattle welfare during transportation. However, due to constraints on placing loggers at the animal level, there is a need to identify appropriate proxy locations. The objective was to evaluate 8 distributions of ceiling-level loggers in the deck and belly compartments of pot-belly trailers for assessing animal-level temperature and humidity during 5 to 18 h commercial transportation of feeder cattle. Ambient conditions during transportation ranged from 3.6 to 45.2°C (20.3 ± 7.61°C, mean ± SD). When considering the entire journey, average differences between ceiling and animal-level temperatures were similar among logger layouts (P > 0.05). The uncertainty in the difference in temperature and humidity between locations was high relative to the magnitude of the difference between animal- and ceiling-level conditions. Single-logger layouts required larger adjustments to predict animal-level conditions within either compartment, during either the entire journey or when the trailer was stationary (P < 0.05). Within certain logger layouts, there were small but significant differences in the ability of regression equations to predict animal-level conditions that were associated with cattle weight and available space relative to body size. Furthermore, evaluation of logger layouts based solely on the entire journey without consideration of stationary periods did not adequately capture variability in layout performance. In conclusion, to adequately monitor animal-level temperature and humidity in warm weather, 10 loggers distributed throughout the compartment are recommended over single-logger layouts within both the deck and belly compartments of pot-belly trailers transporting feeder cattle.
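
The "adjustment" applied to ceiling-level readings is essentially a calibration regression. As a hedged sketch of that step with synthetic readings (the coefficients and noise level below are invented, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic paired readings: ceiling-level temperature vs. animal-level
# temperature with an assumed linear offset and measurement noise.
ceiling = rng.uniform(5, 45, size=200)                   # ceiling logger, deg C
animal = 0.9 * ceiling + 2.5 + rng.normal(0, 0.5, 200)   # animal level, deg C

# Fit animal = a * ceiling + b by ordinary least squares.
A = np.column_stack([ceiling, np.ones_like(ceiling)])
(a, b), *_ = np.linalg.lstsq(A, animal, rcond=None)

predicted = a * ceiling + b
rmse = float(np.sqrt(np.mean((predicted - animal) ** 2)))
```

In the study, a separate equation of this kind would be fit per logger layout and per compartment, and the size of the required adjustment (and its uncertainty) is what distinguishes the single-logger layouts from the 10-logger layout.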

  1. Design and Analysis of Fast Text Compression Based on Quasi-Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G; Vitter, Jeffrey Scott

    1994-01-01

    Describes a detailed algorithm for fast text compression. Related to the PPM (prediction by partial matching) method, it simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. Details of the use of quasi-arithmetic code tables are given, and their…
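
Rice coding, one of the two coders combined above, is simple enough to sketch directly: a nonnegative integer is split by a power-of-two parameter 2^k into a unary-coded quotient and a k-bit binary remainder. The bit-string representation here is for illustration; a real coder packs bits:

```python
def rice_encode(n, k):
    """Rice code of nonnegative integer n with parameter k (as a bit string)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")   # unary quotient, then k-bit remainder

def rice_decode(bits, k):
    """Decode a single Rice codeword produced by rice_encode."""
    q = bits.index("0")                  # unary part: count of leading 1s
    r = int(bits[q + 1 : q + 1 + k], 2)  # fixed-width binary remainder
    return (q << k) | r

code = rice_encode(19, k=3)              # 19 = 2*8 + 3
```

Rice codes are fast because both directions need only shifts and masks, which is the speed advantage the abstract attributes to combining them with quasi-arithmetic coding.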

  2. Optical Layout Analysis of Polarization Interference Imaging Spectrometer by Jones Calculus in View of both Optical Throughput and Interference Fringe Visibility

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanni; Zhang, Chunmin

    2013-01-01

    A polarization interference imaging spectrometer based on a Savart polariscope was presented. Its optical throughput was analyzed by Jones calculus. The throughput expression was given, and clearly showed that the optical throughput mainly depends on the intensity of the incident light, the transmissivity, the refractive index and the layout of the optical system. The simulation and analysis gave the optimum layout in view of both optical throughput and interference fringe visibility, and verified that the layout of our former design was optimum. The simulation showed that a small deviation from the optimum layout has little influence on interference fringe visibility, whereas the same deviation from other layouts degrades it severely; a small deviation is therefore admissible in the optimum layout, which mitigates the manufacturing difficulty. These results pave the way for further research and engineering design.
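
The Jones-calculus analysis amounts to multiplying 2x2 element matrices onto the input polarization vector and taking the output intensity. A minimal sketch with generic elements (a birefringent retarder stands in for the Savart polariscope; angles and retardance are illustrative, not the paper's layout):

```python
import numpy as np

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def retarder(delta):
    """Jones matrix of a wave plate with retardance delta, fast axis at 0."""
    return np.array([[np.exp(-1j * delta / 2), 0],
                     [0, np.exp(1j * delta / 2)]])

def throughput(E_in, elements):
    """Output intensity |E|^2 after propagating through the element chain."""
    E = E_in
    for M in elements:
        E = M @ E
    return float(np.vdot(E, E).real)

E_in = np.array([1.0, 0.0])     # unit-intensity horizontally polarized light

# Polarizer at 45 deg, retarder, analyzer at 45 deg: the output intensity
# swings between extremes as the retardance varies, which is the fringe.
T = throughput(E_in, [polarizer(np.pi / 4), retarder(np.pi), polarizer(np.pi / 4)])
T0 = throughput(E_in, [polarizer(np.pi / 4), retarder(0.0), polarizer(np.pi / 4)])
```

The swing between T0 (maximum) and T (extinction) as retardance varies is exactly the interference fringe whose visibility the layout analysis trades off against throughput.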

  3. Fast and automated DNA assays on a compact disc (CD)-based microfluidic platform

    NASA Astrophysics Data System (ADS)

    Jia, Guangyao

    Nucleic acid-based molecular diagnostics offers enormous potential for the rapid and accurate diagnosis of infectious diseases. However, most of the existing commercial tests are time-consuming and technically complicated, and are thus incompatible with the need for rapid identification of infectious agents. We have successfully developed a CD-based microfluidic platform for fast and automated DNA array hybridization and a low cost, disposable plastic microfluidic platform for polymerase chain reaction (PCR). These platforms have proved to be a promising approach to meet the requirements in terms of detection speed and operational convenience in diagnosis of infectious diseases. In the CD-based microfluidic platform for DNA hybridization, convection is introduced to the system to enhance mass transport so as to accelerate the hybridization rate since DNA hybridization is a diffusion limited reaction. Centrifugal force is utilized for sample propulsion and surface force is used for liquid gating. Standard microscope glass slides are used as the substrates for capture probes owing to their compatibility with commercially available instrumentation (e.g. laser scanners) for detection. Microfabricated polydimethylsiloxane (PDMS) structures are used to accomplish the fluidic functions required by the protocols for DNA hybridization. The assembly of the PDMS structure and the glass slide forms a flow-through hybridization unit that can be accommodated onto the CD platform for reagent manipulation. The above scheme has been validated with oligonucleotides as the targets using commercially available enzyme-labeled fluorescence (ELF 97) for detection of the hybridization events, and tested with amplicons of genomic staphylococcus DNA labeled with Cy dye. In both experiments, significantly higher fluorescence intensities were observed in the flow-through hybridization unit compared to the passive assays. The CD fluidic scheme was also adapted to the immobilization of

  4. Silica encapsulated lipid-based drug delivery systems for reducing the fed/fasted variations of ziprasidone in vitro.

    PubMed

    Dening, Tahnee J; Rao, Shasha; Thomas, Nicky; Prestidge, Clive A

    2016-04-01

    Ziprasidone is a poorly water-soluble antipsychotic drug that demonstrates low fasted state oral bioavailability and a clinically significant two-fold increase in absorption when dosed postprandially. Owing to significant compliance challenges faced by schizophrenic patients, a novel oral formulation of ziprasidone that demonstrates improved fasted state absorption and a reduced food effect is of major interest, and is therefore the aim of this research. Three lipid-based drug delivery systems (LBDDS) were developed and investigated: (a) a self-nanoemulsifying drug delivery system (SNEDDS), (b) a solid SNEDDS formulation, and (c) silica-lipid hybrid (SLH) microparticles. SNEDDS was developed using Capmul MCM® and Tween 80®, and solid SNEDDS was fabricated by spray-drying SNEDDS with Aerosil 380® silica nanoparticles as the solid carrier. SLH microparticles were prepared in a similar manner to solid SNEDDS using a precursor lipid emulsion composed of Capmul MCM® and soybean lecithin. The performance of the developed formulations was evaluated under simulated digesting conditions using an in vitro lipolysis model, and pure (unformulated) ziprasidone was used as a control. While pure ziprasidone exhibited the lowest rate and extent of drug solubilization under fasting conditions and a significant 2.4-fold increase in drug solubilization under fed conditions, all three LBDDS significantly enhanced the extent of drug solubilization under fasting conditions by 18- to 43-fold in comparison to the pure drug. No significant difference in drug solubilization between the fed and fasted states was observed for the three LBDDS. To highlight the potential of LBDDS, mechanism(s) of action and various performance characteristics are discussed. Importantly, LBDDS are identified as an appropriate formulation strategy to explore further for the improved oral delivery of ziprasidone.

  5. 13. Historic drawing of rocket engine test facility layout, including ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Historic drawing of rocket engine test facility layout, including Buildings 202, 205, 206, and 206A, February 3, 1984. NASA GRC drawing number CF-101539. On file at NASA Glenn Research Center. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  6. IET exhaust gas duct, system layout, plan, and section. shows ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET exhaust gas duct, system layout, plan, and section. shows mounting brackets, concrete braces, divided portion of duct, other details. Ralph M. Parsons 902-5-ANP-712-S 429. Date: May 1954. Approved by INEEL Classification Office for public release. INEEL index code no. 035-0712-60-693-106980 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  7. 21. Historic drawing, Marine Railway. Equalizing Gear Layout, 1917. Photographic ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. Historic drawing, Marine Railway. Equalizing Gear Layout, 1917. Photographic copy of original. Boston National Historical Park Archives, Charlestown Navy Yard. BOSTS 13439, #551-4 - Charlestown Navy Yard, Marine Railway, Between Piers 2 & 3, on Charlestown Waterfront at west end of Navy Yard, Boston, Suffolk County, MA

  8. 26. Historic drawing, Marine Railway. Layout of Hauling Machinery, Building ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. Historic drawing, Marine Railway. Layout of Hauling Machinery, Building 24, 1917. Photographic copy of original. Boston National Historical Park Archives, Charlestown Navy Yard. BOSTS 13439, #551-15 - Charlestown Navy Yard, Marine Railway, Between Piers 2 & 3, on Charlestown Waterfront at west end of Navy Yard, Boston, Suffolk County, MA

  9. Implicit Learning of Viewpoint-Independent Spatial Layouts

    PubMed Central

    Tsuchiai, Taiga; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2012-01-01

    We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations. PMID:22740837

  10. 18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command AL-2 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - SITE PLAN. DRAWING NO. AL-2 - SHEET 3 OF 21. - Cape Cod Air Station, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  11. Photocopy of original drawing showing Building 3 layout (drawing located ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of original drawing showing Building 3 layout (drawing located at NAWS China Lake, Division of Public Works). J.T. STAFFORD-J.H. DAVIES-H.L. GOGERTY: DISPENSARY, CONNECTING CORRIDORS, FLOOR PLAN, ELEVATIONS, AND DETAILS - Naval Ordnance Test Station Inyokern, Dispensary, Main Site, Lauritsen Road at McIntyre Street, Ridgecrest, Kern County, CA

  12. Developing a Web Page: Ethics, Prerequisites, Design and Layout.

    ERIC Educational Resources Information Center

    Scarcella, Joseph A.; Lane, Kenneth E.

    For educators interested in developing Web sites, four major issues should be addressed--ethics, prerequisites, design, and layout. By giving attention to these four areas, teachers will develop Web sites that improve their teaching and increase the opportunities for student learning. Each of these areas is addressed in detail, including:…

  13. Intravascular ultrasound image segmentation: a three-dimensional fast-marching method based on gray level distributions.

    PubMed

    Cardinal, Marie-Hélène Roy; Meunier, Jean; Soulez, Gilles; Maurice, Roch L; Therasse, Eric; Cloutier, Guy

    2006-05-01

    Intravascular ultrasound (IVUS) is a catheter-based medical imaging technique particularly useful for studying atherosclerotic disease. It produces cross-sectional images of blood vessels that provide quantitative assessment of the vascular wall, information about the nature of atherosclerotic lesions, as well as plaque shape and size. Automatic processing of large IVUS data sets represents an important challenge due to ultrasound speckle, catheter artifacts, and calcification shadows. A new three-dimensional (3-D) IVUS segmentation model, based on the fast-marching method and using gray level probability density functions (PDFs) of the vessel wall structures, was developed. The gray level distribution of the whole IVUS pullback was modeled with a mixture of Rayleigh PDFs. With multiple-interface fast-marching segmentation, the lumen, intima plus plaque structure, and media layers of the vessel wall were computed simultaneously. The PDF-based fast-marching was applied to 9 in vivo IVUS pullbacks of superficial femoral arteries and to a simulated IVUS pullback. Accurate results were obtained on simulated data, with average point-to-point distances between detected vessel wall borders and ground truth <0.072 mm. On in vivo IVUS, good overall performance was obtained, with average distances between segmentation results and manually traced contours <0.16 mm. Moreover, the worst point-to-point variation between detected and manually traced contours stayed low, with Hausdorff distances <0.40 mm, indicating good performance in regions lacking information or containing artifacts. In conclusion, the segmentation results demonstrated the potential of gray level PDF and fast-marching methods in 3-D IVUS image processing.
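    The speckle-likelihood idea above can be sketched numerically. This is a minimal illustration of a Rayleigh-PDF-driven fast-marching speed map on a synthetic frame, not the authors' multi-interface implementation; the `sigma_lumen` parameter and the toy image are assumptions:

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    """Rayleigh probability density, the standard speckle model for IVUS
    gray levels."""
    x = np.asarray(x, dtype=float)
    return (x / sigma ** 2) * np.exp(-x ** 2 / (2 * sigma ** 2))

def speed_map(image, sigma_lumen):
    """Fast-marching propagation speed: high where a pixel's gray level is
    likely under the lumen's Rayleigh model, low elsewhere."""
    p = rayleigh_pdf(image, sigma_lumen)
    return p / p.max()          # normalised to (0, 1]

# toy frame: dark lumen speckle inside brighter vessel-wall speckle
rng = np.random.default_rng(0)
radius = np.hypot(*np.mgrid[-16:16, -16:16])
frame = np.where(radius < 8,
                 rng.rayleigh(10, (32, 32)),    # lumen
                 rng.rayleigh(40, (32, 32)))    # wall
F = speed_map(frame, sigma_lumen=10.0)
```

    The front propagates fastest where the lumen model fits the observed gray levels; the actual method estimates a mixture of Rayleigh PDFs over the whole pullback and propagates several interfaces simultaneously.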

  14. Optimization of site layout for change of plant operation

    SciTech Connect

    Reuwer, S.M.; Kasperski, E.; Joseph, T.D.

    1995-12-31

    Several of the Florida Power & Light (FPL) operating fossil power plants have undergone significant site layout changes as well as changes in plant operation. The FPL Fort Lauderdale Plant was repowered in 1992, using four (4) Westinghouse 501F combustion turbines rated at 158 MW each to repower two (2) existing steam turbines rated at 143 MW each. In 1991, a physical security fence separation occurred between Turkey Point Plant's fossil-fueled Units 1&2 and its nuclear-fueled Units 3&4. As a result of this separation, certain facilities common to both the nuclear and fossil sides of the plant required relocating. Also, the Sanford and Manatee Plants were evaluated for the use of a new fuel as an alternative source. The Manatee Plant is currently in the licensing process for modifications to burn a new fuel, requiring expansion of back-end clean-up equipment and additional staff to operate this equipment. To address these plant changes, site development studies were prepared for each plant to determine the suitability of the existing ancillary facilities to support the operational changes, and to make recommendations for facility improvements where facilities were found inadequate. A standardized process was developed for all of the site studies. This proved to be a comprehensive approach that gave FPL a successful result accepted by all of the various stakeholders; the process was objective, focused, and reached its goal as quickly as possible. This paper details the outline and the methods developed to prepare a study following this process, which will ultimately provide the optimum site development plan for changing plant operations.

  15. Dielectrophoresis based continuous-flow nano sorter: fast quality control of gene vaccines.

    PubMed

    Viefhues, Martina; Wegener, Sonja; Rischmüller, Anja; Schleef, Martin; Anselmetti, Dario

    2013-08-01

    We present a prototype nanofluidic device developed for the continuous-flow dielectrophoretic (DEP) fractionation, purification, and quality control of sample suspensions for gene vaccine production. The device consists of a cross injector, two operation regions, and separate outlets where the analytes are collected. In each DEP operation region, an inhomogeneous electric field is generated at a channel-spanning insulating ridge. The samples are driven by ac and dc voltages that generate a dielectrophoretic potential at the ridge as well as (linear) electrokinetics. Since the DEP potential differs at the two ridges, probes of three or more species can be iteratively, fully fractionated. We demonstrate the fast and efficient separation of parental plasmid, miniplasmid, and minicircle DNA, where the latter is applicable as a gene vaccine. Since the present technique is virtually label-free, it offers fast purification and in-process quality control with low consumption, in parallel, for the production of gene vaccines.

  16. Ground-based complex for detection and investigation of fast optical transients in wide field

    NASA Astrophysics Data System (ADS)

    Molinari, Emilio; Beskin, Grigory; Bondar, Sergey; Karpov, Sergey; Plokhotnichenko, Vladimir; de-Bur, Vjacheslav; Greco, Guiseppe; Bartolini, Corrado; Guarnieri, Adriano; Piccioni, Adalberto

    2008-07-01

    To study short stochastic optical flares of different objects (GRBs, SNe, etc.) at unknown localizations, as well as NEOs, it is necessary to monitor large regions of the sky with high time resolution. We developed a system which consists of a wide-field camera (FOV of 400-600 sq. deg.) using a TV-CCD with a time resolution of 0.13 s to record and classify optical transients, and a fast robotic telescope aimed at performing their spectroscopic and photometric investigation just after detection. Such a two-telescope complex, TORTOREM, combining the wide-field camera TORTORA and the robotic telescope REM, has operated since May 2006 at the La Silla ESO observatory. Some results of its operation, including the first fast-time-resolution study of an optical transient accompanying a GRB and the discovery of its fine time structure, are presented. Prospects for improving the complex's efficiency are given.

  17. PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference

    PubMed Central

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-01-01

    PHYML Online is a web interface to PHYML, a software package that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at http://atgc.lirmm.fr/phyml. PMID:15980534

  18. "A Fast Running Program For Minicomputer Based On Exact Derivative Of Optimization Criterions"

    NASA Astrophysics Data System (ADS)

    Hugues, Edgar; Babolat, Claude; Bacchus, J. M.

    1983-10-01

    The very fast evolution of hardware and software forces the optical designer to choose between two approaches: 1) to use the services of a specialized company which is continuously developing optical programs; 2) to write one's own programs and improve them according to need. Theory and experience must complement each other to achieve a harmonious balance, so that program improvements yield product improvements. CERCO has chosen the second alternative.

  19. PHYML Online--a web server for fast maximum likelihood-based phylogenetic inference.

    PubMed

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-07-01

    PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at http://atgc.lirmm.fr/phyml.

  20. Fast Marching and Runge-Kutta Based Method for Centreline Extraction of Right Coronary Artery in Human Patients.

    PubMed

    Cui, Hengfei; Wang, Desheng; Wan, Min; Zhang, Jun-Mei; Zhao, Xiaodan; Tan, Ru San; Huang, Weimin; Xiong, Wei; Duan, Yuping; Zhou, Jiayin; Luo, Tong; Kassab, Ghassan S; Zhong, Liang

    2016-06-01

    CT angiography (CTA) is a clinically indicated test for the assessment of coronary luminal stenosis that requires centerline extraction. There is currently no centerline extraction algorithm that is automatic, real-time and very accurate. Therefore, we sought to (i) develop a hybrid approach by incorporating fast marching and Runge-Kutta based methods for the extraction of coronary artery centerlines from CTA; (ii) evaluate the accuracy of the present method compared to Van's method by using the ground truth centerline as a reference; (iii) evaluate the coronary lumen area of our centerline method in comparison with intravascular ultrasound (IVUS) as the standard of reference. The proposed method was found to be more computationally efficient, and performed better than Van's method in terms of overlap measures (i.e., OV: [Formula: see text] vs. [Formula: see text]; OF: [Formula: see text] vs. [Formula: see text]; and OT: [Formula: see text] vs. [Formula: see text], all [Formula: see text]). In comparison with the IVUS-derived coronary lumen area, the proposed approach was more accurate than Van's method. This hybrid approach incorporating fast marching and Runge-Kutta based methods can offer fast and accurate extraction of the centerline as well as the lumen area. This method may garner wider clinical potential as a real-time coronary stenosis assessment tool. PMID:27140197
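    The two ingredients named in the title can be combined in a small sketch: fourth-order Runge-Kutta (RK4) descent on a fast-marching arrival-time map traces a path back to the seed. This is an illustrative toy on a synthetic map, not the authors' algorithm; the nearest-neighbour gradient sampling, step size, and map are all assumptions:

```python
import numpy as np

def rk4_backtrack(T, start, step=0.5, n_steps=200):
    """Trace a path by RK4 descent on a fast-marching arrival-time map T:
    from a distal point, follow -grad T back towards the seed (the global
    minimum of T)."""
    gy, gx = np.gradient(T)

    def vel(p):
        # nearest-neighbour sampling of the normalised negative gradient
        i = min(max(int(round(float(p[0]))), 0), T.shape[0] - 1)
        j = min(max(int(round(float(p[1]))), 0), T.shape[1] - 1)
        g = np.array([gy[i, j], gx[i, j]])
        n = np.linalg.norm(g)
        return -g / n if n > 0 else np.zeros(2)

    p = np.array(start, dtype=float)
    path = [p.copy()]
    for _ in range(n_steps):
        k1 = vel(p)
        k2 = vel(p + step / 2 * k1)
        k3 = vel(p + step / 2 * k2)
        k4 = vel(p + step * k3)
        p = p + step / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

# toy arrival-time map: with unit propagation speed, the fast-marching
# solution from a seed point is just the Euclidean distance to it
yy, xx = np.mgrid[0:64, 0:64]
seed = (10.0, 12.0)
T = np.hypot(yy - seed[0], xx - seed[1])
path = rk4_backtrack(T, start=(55.0, 50.0))
```

    On a real CTA travel-time map (where the propagation speed is high inside the vessel lumen), the same backtracking runs roughly along the vessel, which is how the minimal-path centreline is obtained.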

  1. Rolling element bearing fault diagnosis based on the combination of genetic algorithms and fast kurtogram

    NASA Astrophysics Data System (ADS)

    Zhang, Yongxiang; Randall, R. B.

    2009-07-01

    The rolling element bearing is a key part in many mechanical facilities and the diagnosis of its faults is very important in the field of predictive maintenance. To date, the resonant demodulation technique (envelope analysis) has been widely exploited in practice. However, much practical diagnostic equipment for carrying out the analysis gives little flexibility to change the analysis parameters for different working conditions, such as variation in rotating speed and different fault types. Because the signals from a flawed bearing have features of non-stationarity, wide frequency range and weak strength, it can be very difficult to choose the best analysis parameters for diagnosis. However, the kurtosis of the vibration signals of a bearing differs between normal and bad conditions, and is robust under varying conditions. The fast kurtogram gives rough analysis parameters very efficiently, but the filter centre frequency and bandwidth cannot be chosen entirely independently. Genetic algorithms have a strong ability for optimization, but are slow unless the initial parameters are close to optimal. Therefore, the authors present a model and algorithm to design the parameters for optimal resonance demodulation using a combination of the fast kurtogram for initial estimates and a genetic algorithm for final optimization. The feasibility and effectiveness of the proposed method are demonstrated by experiment, giving better results than the classical method of arbitrarily choosing a resonance to demodulate. The method gives more flexibility in choosing optimal parameters than the fast kurtogram alone.
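    The quantity both tools optimize is the kurtosis of a band-pass filtered signal. A toy sketch (ideal FFT-domain filter, a simulated fault signal, and a crude random-mutation search standing in for the genetic algorithm; all parameters are hypothetical):

```python
import numpy as np

def band_kurtosis(x, fs, fc, bw):
    """Kurtosis of x band-pass filtered around fc with bandwidth bw (ideal
    FFT-domain filter): the quantity the kurtogram tiles over (fc, bw)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < fc - bw / 2) | (f > fc + bw / 2)] = 0
    y = np.fft.irfft(X, len(x))
    y = y - y.mean()
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2

# simulated fault: repetitive impacts exciting a 2 kHz resonance, in noise
fs, n = 20_000, 20_000
t = np.arange(n) / fs
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, n)
for t0 in np.arange(0.01, 1.0, 0.02):           # one impact every 20 ms
    i = int(t0 * fs)
    x[i:i + 200] += 5 * np.exp(-np.arange(200) / 30) * np.sin(2 * np.pi * 2000 * t[:200])

# crude random-mutation search standing in for the genetic algorithm,
# started from a rough kurtogram-style initial estimate of (fc, bw)
best, best_k = (2500.0, 1000.0), band_kurtosis(x, fs, 2500.0, 1000.0)
for _ in range(200):
    cand = (best[0] + rng.normal(0, 200), max(200.0, best[1] + rng.normal(0, 200)))
    k = band_kurtosis(x, fs, *cand)
    if k > best_k:
        best, best_k = cand, k
```

    A Gaussian-noise band has kurtosis near 3; the impulsive fault band scores much higher, and the search refines the band towards the resonance. The real method uses a proper genetic algorithm with the fast kurtogram supplying the initial population.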

  2. Fast-rolling shutter compensation based on piecewise quadratic approximation of a camera trajectory

    NASA Astrophysics Data System (ADS)

    Lee, Yun Gu; Kai, Guo

    2014-09-01

    The rolling shutter effect commonly exists in video cameras and mobile phones equipped with a complementary metal-oxide semiconductor sensor, caused by the row-by-row exposure mechanism. As video resolution in both the spatial and temporal domains increases dramatically, removing the rolling shutter effect quickly and effectively becomes a challenging problem, especially for devices with limited hardware resources. We propose a fast method to compensate for the rolling shutter effect, which uses a piecewise quadratic function to approximate the camera trajectory. The duration of the quadratic function in each segment is equal to one frame (or half-frame), and each quadratic function is described by an initial velocity and a constant acceleration. The velocity and acceleration of each segment are estimated using only a few global (or semiglobal) motion vectors, which can be simply predicted from fast motion estimation algorithms. Then the geometric image distortion at each scanline is inferred from the predicted camera trajectory for compensation. Experimental results on mobile phones with full-HD video demonstrate that our method can not only be implemented in real time, but also achieve satisfactory visual quality.
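    The per-scanline displacement implied by a quadratic trajectory segment can be sketched as follows. This is a minimal one-segment toy with a nearest-pixel horizontal warp, not the paper's implementation (which estimates the velocity and acceleration from motion vectors and interpolates):

```python
import numpy as np

def rowwise_shift(height, v0, a, frame_time):
    """Horizontal displacement of each scanline under a quadratic camera
    trajectory: row r is exposed at t_r = (r / height) * frame_time and has
    moved v0*t_r + 0.5*a*t_r**2 pixels relative to row 0."""
    t = np.arange(height) / height * frame_time
    return v0 * t + 0.5 * a * t ** 2

def compensate(img, v0, a, frame_time):
    """Undo the distortion by shifting each row back by its model
    displacement (nearest-pixel warp; real implementations interpolate)."""
    out = np.empty_like(img)
    for r, d in enumerate(rowwise_shift(img.shape[0], v0, a, frame_time)):
        out[r] = np.roll(img[r], -int(round(d)))
    return out
```

    For example, at 30 fps a constant pan of 120 px/s skews the last scanline of a 100-row frame by about 4 px relative to the first, which is exactly the shear this correction removes.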

  3. Reconstruction of images from compressive sensing based on the stagewise fast LASSO

    NASA Astrophysics Data System (ADS)

    Wu, Jiao; Liu, Fang; Jiao, Licheng

    2009-10-01

    Compressive sensing (CS) is a theory stating that a nearly exact signal reconstruction may be achieved from fewer samples, if the signal is sparse or compressible under some basis. The reconstruction of the signal can be obtained by solving a convex program, which is equivalent to a LASSO problem with the l1-formulation. In this paper, we propose a stage-wise fast LASSO (StF-LASSO) algorithm for image reconstruction from CS. It applies an insensitive Huber loss function to the objective function of the LASSO, and iteratively builds the decision function and updates the parameters by introducing a stagewise fast learning strategy. Simulation studies on the CS reconstruction of natural images and SAR images widely used in practice demonstrate that StF-LASSO achieves good reconstruction performance, both in evaluation indexes and in visual effect, with the fastest recovery speed among the algorithms implemented in our simulations in most cases. Theoretical analysis and experiments show that StF-LASSO is a CS reconstruction algorithm with low complexity and good stability.
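    The underlying l1 recovery problem can be illustrated with plain ISTA (iterative soft-thresholding) on a toy CS instance. This is a generic LASSO solver, not the stagewise Huber-loss StF-LASSO itself; the problem sizes and sparsity pattern are made up:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding (ISTA) for the l1-regularised
    least-squares (LASSO) problem  min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L       # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# toy CS problem: 50 random measurements of a 100-dimensional 4-sparse signal
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0 / np.sqrt(50), (50, 100))
x_true = np.zeros(100)
x_true[[5, 27, 54, 81]] = [1.5, -2.0, 1.0, 2.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.02)
```

    Even though the system is underdetermined (50 equations, 100 unknowns), the l1 penalty recovers the sparse signal; the stagewise Huber-loss variant in the paper targets the same optimum with faster convergence.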

  4. The Effects of Fast Start Reading: A Fluency-Based Home Involvement Reading Program, on the Reading Achievement of Beginning Readers

    ERIC Educational Resources Information Center

    Rasinski, Timothy; Stevenson, Bruce

    2005-01-01

    This study tested the effects of a fluency-based home reading program called Fast Start. Thirty beginning first-grade students, representing a wide range of early reading abilities, were randomly assigned to experimental or control conditions for a period of 11 weeks. Parents and students in the experimental group received Fast Start training,…

  5. Computer-Based Video Instruction to Teach Students with Intellectual Disabilities to Verbally Respond to Questions and Make Purchases in Fast Food Restaurants

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Pridgen, Leslie S.; Cronin, Beth A.

    2005-01-01

    Computer-based video instruction (CBVI) was used to teach verbal responses to questions presented by cashiers and purchasing skills in fast food restaurants. A multiple probe design across participants was used to evaluate the effectiveness of CBVI. Instruction occurred through simulations of three fast food restaurants on the computer using video…

  6. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented the graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). Kaiser-Bessel (KB) function and Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with frame size 2048(axial) × 1,000(lateral). PMID:24977582
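    The gridding step of a NUFFT can be sketched in one dimension: each nonuniform k-space sample is spread onto neighbouring uniform grid points with a Kaiser-Bessel kernel. A minimal CPU illustration only; density compensation, deapodization, oversampling, and the GPU parallelism are omitted, and the `width` and `beta` values are assumptions:

```python
import numpy as np

def kb_kernel(u, width, beta):
    """Kaiser-Bessel gridding kernel on |u| <= width/2 (np.i0 is the
    zeroth-order modified Bessel function)."""
    u = np.asarray(u, dtype=float)
    inside = np.abs(u) <= width / 2
    arg = np.sqrt(np.clip(1 - (2 * u / width) ** 2, 0.0, None))
    return np.where(inside, np.i0(beta * arg) / np.i0(beta), 0.0)

def grid_1d(k_pos, samples, n_grid, width=4, beta=8.0):
    """Spread nonuniform k-space samples onto a uniform grid: the convolution
    step of gridding-based NUFFT. k positions are in grid units,
    0 <= k < n_grid."""
    grid = np.zeros(n_grid, dtype=complex)
    for k, s in zip(k_pos, samples):
        base = int(np.floor(k))
        for j in range(base - width // 2, base + width // 2 + 1):
            grid[j % n_grid] += s * kb_kernel(k - j, width, beta)
    return grid

grid = grid_1d([10.0, 17.3], [1.0 + 0j, 0.5 - 0.2j], n_grid=32)
```

    After gridding, a standard FFT and a deapodization step (dividing by the kernel's Fourier transform) complete the reconstruction; each sample's spreading loop is independent, which is what makes the algorithm map well onto a GPU.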

  7. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography

    PubMed Central

    Xu, Daguang; Huang, Yong; Kang, Jin U.

    2014-01-01

    We implemented the graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). Kaiser-Bessel (KB) function and Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT based CS SD OCT with frame size 2048(axial)×1000(lateral). PMID:24977582

  9. Fast generation of three-dimensional entanglement between two spatially separated atoms via invariant-based shortcut

    NASA Astrophysics Data System (ADS)

    Wu, Jin-Lei; Song, Chong; Ji, Xin; Zhang, Shou

    2016-10-01

    A scheme is proposed for the fast generation of three-dimensional entanglement between two atoms trapped in two cavities connected by a fiber, via an invariant-based shortcut to adiabatic passage. With the help of quantum Zeno dynamics, the technique of invariant-based shortcuts to adiabatic passage is applied to the generation of two-atom three-dimensional entanglement. The numerical simulation results show that, within a short time, the scheme has a high fidelity and is robust against the decoherence caused by atomic spontaneous emission, photon leakage, and variations in the selected parameters. Moreover, the scheme may be implementable with current experimental technology.

  10. Which Layout Do You Prefer? An Analysis of Readers' Preferences for Different Typographic Layouts of Structured Abstracts.

    ERIC Educational Resources Information Center

    Hartley, James; Sydes, Matthew

    1996-01-01

    Discusses studies involving over 400 participants to investigate reader preferences for typographic settings of the subheadings and overall position and layout of abstracts on a 2-column A4 page. Results suggest readers preferred subheadings in bold capital letters, a line-space above the main headings, and for abstracts to be centered over the…

  11. Real-time fMRI data analysis using region of interest selection based on fast ICA

    NASA Astrophysics Data System (ADS)

    Xie, Baoquan; Ma, Xinyue; Yao, Li; Long, Zhiying; Zhao, Xiaojie

    2011-03-01

    Real-time functional magnetic resonance imaging (rtfMRI) is a new technique which can present (feed back) brain activity during scanning. Through fast acquisition and online analysis of the BOLD signal, fMRI data are processed within one TR. Current rtfMRI provides an activation map under a specific task mainly through GLM analysis to select a region of interest (ROI). This study was based on independent component analysis (ICA) and used the result of fast ICA analysis to select a node of the functional network as the ROI. Real-time brain activity within the ROI was presented to the subject, who needed to find strategies to control his brain activity. The whole real-time process involved three parts: pre-processing (including head motion correction and smoothing), fast ICA analysis, and feedback. In addition, the result of fast head motion correction was also presented to the experimenter in a curve diagram. Based on the above analysis processes, a real-time feedback experiment with a motor imagery task was performed. An overt finger movement task was adopted as a localizer session for the ICA analysis to obtain the motor network. The supplementary motor area (SMA) in this network was selected as the ROI. During the feedback session, the average of the BOLD signals within the ROI was presented to the subjects for self-regulation under a motor imagery task. In this experiment, TR was 1.5 seconds, and the whole time of processing and presentation was within 1 second. Experimental results not only showed that the SMA was controllable, but also proved that the analysis method was effective.

  12. Fast O2 Binding at Dicopper Complexes Containing Schiff-Base Dinucleating Ligands

    PubMed Central

    Company, Anna; Gómez, Laura; Mas-Ballesté, Rubén; Korendovych, Ivan V.; Ribas, Xavi; Poater, Albert; Parella, Teodor; Fontrodona, Xavier; Benet-Buchholz, Jordi; Solà, Miquel; Que, Lawrence; Rybak-Akimova, Elena; Costas, Miquel

    2008-01-01

    A new family of dicopper(I) complexes [CuI2RL](X)2, (R = H, 1X, R = tBu, 2X and R = NO2, 3X, X = CF3SO3, ClO4, SbF6 or BArF, BArF = [B{3,5-(CF3)2-C6H3}4]−), where RL is a Schiff-base ligand containing two tridentate binding sites linked by a xylyl spacer, have been prepared and characterized, and their reaction with O2 studied. The complexes were designed with the aim of reproducing structural aspects of the active site of type 3 dicopper proteins; they contain two three-coordinate copper sites and a rather flexible podand ligand backbone. The solid state structures of 1ClO4, 2CF3SO3, 2ClO4 and 3BArF·CH3CN have been established by single crystal X-ray diffraction analysis. 1ClO4 adopts a polymeric structure in solution while 2CF3SO3, 2ClO4 and 3BArF·CH3CN are monomeric. The complexes have been studied in solution by means of 1H and 19F NMR spectroscopy, which revealed the presence of dynamic processes in solution. 1-3BArF and 1-3CF3SO3 in acetone react rapidly with O2 to generate metastable [CuIII2(μ-O)2(RL)]2+ 1-3(O2) and [CuIII2(μ-O)2(CF3SO3)(RL)]+ 1-3(O2)(CF3SO3) species, respectively, that have been characterized by UV-vis spectroscopy and resonance Raman analysis. Instead, reaction of 1-3BArF with O2 in CH2Cl2 results in intermolecular O2 binding. DFT methods have been used to study the chemical identities and structural parameters of the O2 adducts, and the relative stability of the CuIII2(μ-O)2 form with respect to the CuII2(μ-η2: η2-peroxo) isomer. The reaction of 1X, X = CF3SO3 and BArF, with O2 in acetone has been studied by stopped-flow, exhibiting an unexpectedly fast reaction rate (k = 3.82(4) × 103 M−1s−1, ΔH‡ = 4.9 ± 0.5 kJ·mol−1, ΔS‡ = −148 ± 5 J·K−1·mol−1), nearly three orders of magnitude faster than in the parent [CuI2(m-XYLMeAN)]2+. Thermal decomposition of 1-3(O2) does not result in aromatic hydroxylation. The mechanism and kinetics of O2 binding to 1X (X = CF3SO3 and BArF) is discussed and compared with those

  13. Fast and quantitative differentiation of single-base mismatched DNA by initial reaction rate of catalytic hairpin assembly.

    PubMed

    Li, Chenxi; Li, Yixin; Xu, Xiao; Wang, Xinyi; Chen, Yang; Yang, Xiaoda; Liu, Feng; Li, Na

    2014-10-15

    The widely used catalytic hairpin assembly (CHA) amplification strategy generally needs several hours to accomplish one measurement based on the prevailing maximum-intensity detection mode, making it less practical for assays where high throughput or speed is desired. To make the best use of the kinetic specificity of the toehold domain for circuit reaction initiation, we developed a mathematical model and proposed an initial reaction rate detection mode to quantitatively differentiate the single-base mismatch. Using the kinetic mode, assay time can be reduced substantially, to 10 min for one measurement, with sensitivity and single-base mismatch differentiating ability comparable to those obtained with the maximum-intensity detection mode. This initial reaction rate based approach not only provides fast and quantitative differentiation of single-base mismatches, but also helps in-depth understanding of the CHA system, which will be beneficial to the design of highly sensitive and specific toehold-mediated hybridization reactions.
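    The initial-rate detection mode reduces to fitting a straight line to the early part of the fluorescence trace. A toy sketch with synthetic first-order kinetics; the rate constants and the 10-minute window are assumptions, not the paper's values:

```python
import numpy as np

def initial_rate(t, f, window=10.0):
    """Initial reaction rate: slope of a straight-line fit to the
    fluorescence trace over the first `window` minutes, while the circuit
    is still in its approximately linear early phase."""
    m = t <= window
    slope, _ = np.polyfit(t[m], f[m], 1)
    return slope

# synthetic traces, F(t) = F_inf * (1 - exp(-k t)): a single-base mismatch
# in the toehold slows initiation (smaller k), so its initial rate drops
t = np.linspace(0, 60, 121)                     # minutes
perfect = 100 * (1 - np.exp(-0.05 * t))
mismatch = 100 * (1 - np.exp(-0.005 * t))
r_p = initial_rate(t, perfect)
r_m = initial_rate(t, mismatch)
```

    The two rates differ by roughly an order of magnitude here even though the end-point intensities would eventually converge, which is why the kinetic read-out can discriminate mismatches in minutes rather than hours.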

  14. Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface

    NASA Astrophysics Data System (ADS)

    Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai

    2016-07-01

    A novel polarization-insensitive metasurface with over 25 dB monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface is comprised of carefully arranged unit cells with spatially varied dimensions, which enables approximately uniform diffusion of the incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method for conventional antenna array pattern synthesis is innovatively applied to find the best arrangement of unit cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS reduction behavior predicted by the full wave simulation software Ansys HFSSTM, and excellent agreement is observed.
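    Iterative FFT pattern synthesis is an alternating-projection (Gerchberg-Saxton-style) loop between the aperture and far-field domains. A minimal phase-only sketch with continuous cell phases, not the authors' implementation (which maps phases to realizable unit-cell geometries):

```python
import numpy as np

def iterative_fft_phase(n=64, n_iter=100, seed=0):
    """Gerchberg-Saxton-style iterative FFT synthesis: find a phase-only
    aperture (unit-amplitude cells) whose far field is as flat as possible,
    so reflected energy is diffused instead of forming a specular lobe."""
    rng = np.random.default_rng(seed)
    aperture = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
    for _ in range(n_iter):
        far = np.fft.fft2(aperture)
        far = np.exp(1j * np.angle(far))            # impose flat far-field amplitude
        aperture = np.fft.ifft2(far)
        aperture = np.exp(1j * np.angle(aperture))  # impose phase-only aperture
    return aperture

aperture = iterative_fft_phase()
pattern = np.abs(np.fft.fft2(aperture))
```

    A uniform aperture concentrates all energy in one far-field bin (a peak-to-average ratio of n**2); the synthesized phase distribution spreads it almost uniformly, which is the diffusion mechanism behind the RCS reduction.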

  15. Survey of Branch Support Methods Demonstrates Accuracy, Power, and Robustness of Fast Likelihood-based Approximation Schemes

    PubMed Central

    Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier

    2011-01-01

    Phylogenetic inference and evaluating support for inferred relationships is at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch supports (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409
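    The aBayes transformation mentioned above can be sketched directly: with equal priors, the support of a branch is the posterior-like weight of the best of the three NNI resolutions around it, computed from their log-likelihoods. This is a simplified reading of the method, not PhyML's implementation:

```python
import numpy as np

def abayes_support(loglik):
    """aBayes branch support: with equal priors, the posterior-like weight
    of the best of the three NNI topologies around a branch, computed from
    their log-likelihoods (stabilised by subtracting the maximum before
    exponentiating)."""
    l = np.asarray(loglik, dtype=float)
    w = np.exp(l - l.max())
    return w.max() / w.sum()

# branch whose best resolution beats the alternatives by 5 and 12 log units
support = abayes_support([-1000.0, -1005.0, -1012.0])
```

    Because only three per-branch likelihood evaluations are needed, this kind of support is orders of magnitude cheaper than a full nonparametric bootstrap, which is the speed advantage the survey quantifies.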

  16. [Study of fast acquisition technique for heavy ion CT system based on the measurement of residual range distribution].

    PubMed

    Mogaki, Tatsuya; Abe, Shinji; Sato, Hitoshi; Muraishi, Hiroshi; Hara, Hidetake; Hara, Satoshi; Miyake, Shoko; Himukai, Takeshi; Kanai, Tatsuaki

    2012-01-01

    We proposed a new technique for a fast-acquisition heavy ion CT (HICT) system based on the measurement of the residual range distribution using an intensifying screen and a charge coupled device camera. The previously used fast-acquisition HICT system had poor electron density resolution. In the new technique, the range shifter thickness is varied over the required dynamic range within the spill of the heavy ion beam at each projection angle, and the residual range distribution is determined from a series of acquisition data. We examined the image quality using the contrast-to-noise ratio and the noise power spectrum, and estimated the electron density resolution using a low-contrast phantom for the measurement of electron density resolution. The image quality of the new technique was superior to that of the previous fast-acquisition HICT system. Furthermore, the relative electron density resolution was 0.011, which represented an improvement of about 12-fold. Therefore we showed that the new technique is potentially useful in the clinical use of HICT, including treatment and quality assurance of heavy ion therapy. PMID:24592673

  17. 77 FR 55896 - Notice of Release Effecting Federal Grant Assurance Obligations Due to Airport Layout Plan...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... Due to Airport Layout Plan Revision at Mather Airport, Sacramento, CA AGENCY: Federal Aviation... Administration (FAA) proposes to rule and invites public comment on the application for an Airport Layout Plan... Force to the County. As a result, the existing Airport Layout Plan will be revised to delete the...

  18. Fast pseudo-CT synthesis from MRI T1-weighted images using a patch-based approach

    NASA Astrophysics Data System (ADS)

    Torrado-Carvajal, A.; Alcain, E.; Montemayor, A. S.; Herraiz, J. L.; Rozenholc, Y.; Hernandez-Tamames, J. A.; Adalsteinsson, E.; Wald, L. L.; Malpica, N.

    2015-12-01

    MRI-based bone segmentation is a challenging task because bone tissue and air both present low signal intensity on MR images, making it difficult to accurately delimit the bone boundaries. However, estimating bone from MR images may reduce patient ionization by removing the need for a patient-specific CT acquisition in several applications. In this work, we propose a fast GPU-based pseudo-CT generation from a patient-specific MRI T1-weighted image using a group-wise patch-based approach and a limited MRI and CT atlas dictionary. For every voxel in the input MR image, we compute the similarity of the patch containing that voxel with the patches of all MR images in the database that lie in a certain anatomical neighborhood. The pseudo-CT is obtained as a local weighted linear combination of the CT values of the corresponding patches. The algorithm was implemented on a GPU. The use of patch-based techniques allows a fast and accurate estimation of the pseudo-CT from MR T1-weighted images, with accuracy similar to that of the patient-specific CT. The experimental normalized cross correlation (NCC) reaches 0.9324±0.0048 for an atlas with 10 datasets. The high NCC values indicate that our method accurately approximates the patient-specific CT. The GPU implementation led to a substantial decrease in computational time, making the approach suitable for real applications.
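
    The core per-voxel estimate described above, a locally weighted linear combination of atlas CT values with weights derived from MR patch similarity, can be sketched as follows. The Gaussian weighting and the bandwidth `h` are illustrative assumptions; the paper does not specify this exact weight function:

    ```python
    import math

    def pseudo_ct_value(mr_patch, atlas_mr_patches, atlas_ct_values, h=1.0):
        """Patch-based pseudo-CT estimate for a single voxel.

        `mr_patch` is the patch around the target voxel in the input MR
        image; `atlas_mr_patches` are candidate MR patches from the atlas
        (restricted to an anatomical neighborhood) and `atlas_ct_values`
        the CT value at each candidate patch centre. The estimate is a
        weighted linear combination with Gaussian weights on the squared
        patch distance; `h` is a hypothetical bandwidth parameter.
        """
        weights = []
        for patch in atlas_mr_patches:
            d2 = sum((a - b) ** 2 for a, b in zip(mr_patch, patch))
            weights.append(math.exp(-d2 / (h * h)))
        norm = sum(weights)
        return sum(w * ct for w, ct in zip(weights, atlas_ct_values)) / norm
    ```

    A patch identical to an atlas patch receives weight 1 and dominates the combination, while very dissimilar patches contribute almost nothing; on the GPU, this loop is run in parallel over all voxels.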

  19. Porous spherical CaO-based sorbents via PSS-assisted fast precipitation for CO2 capture.

    PubMed

    Wang, Shengping; Fan, Lijing; Li, Chun; Zhao, Yujun; Ma, Xinbin

    2014-10-22

    In this paper, we report the development of synthetic CaO-based sorbents via a fast precipitation method assisted by sodium poly(styrenesulfonate) (PSS). The effect of PSS on the physical properties of the CaO sorbents and on their CO2 capture performance was investigated. The presence of PSS effectively dispersed the CaO particles and remarkably increased their specific surface area and pore volume. The resulting porous spherical structure allowed CO2 to diffuse and react with the inner CaO effectively, significantly improving the initial CO2 carbonation capacity. A proper amount of Mg(2+) precursor solution was doped during the fast precipitation process to obtain CaO-based sorbents with high sintering resistance, which maintained the porous spherical structure and high specific surface area. CaO-based sorbents derived from a MgxCa1-xCO3 precursor existed in the form of CaO and MgO. The homogeneous distribution of MgO in the CaO-based sorbents effectively prevented the CaO crystallites from growing and sintering, resulting in favorable long-term durability, with a carbonation capacity of about 52.0% after 30 carbonation/calcination cycles.

  20. Layout optimization with algebraic multigrid methods

    NASA Technical Reports Server (NTRS)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is, the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10,000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm, in which the levels are visited sequentially, we propose an 'additive' variant of AMG in which levels may be treated in parallel and which is suitable as a preconditioner in the CG algorithm.
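
    The CG baseline for the relative-placement system above can be sketched on a toy netlist. Three movable cells sit in a chain between pads fixed at x=0 and x=4; minimizing the quadratic wirelength leads to a linear system with the (SPD) Laplacian of the movable cells, which CG solves exactly in a few iterations. The AMG preconditioner proposed in the paper is not shown here; this is only the unpreconditioned baseline it is compared against:

    ```python
    def cg_solve(matvec, b, tol=1e-10, max_iter=100):
        """Plain conjugate-gradient solver for an SPD system A x = b."""
        n = len(b)
        x = [0.0] * n
        r = list(b)            # residual b - A x for x = 0
        p = list(r)
        rs = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            ap = matvec(p)
            alpha = rs / sum(pi * api for pi, api in zip(p, ap))
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, ap)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol:
                break
            p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
            rs = rs_new
        return x

    # Toy placement: movable cells c1-c2-c3 chained between fixed pads
    # at x=0 and x=4 with unit net weights. The Laplacian of the movable
    # part and the pad contributions give the system below.
    L = [[ 2.0, -1.0,  0.0],
         [-1.0,  2.0, -1.0],
         [ 0.0, -1.0,  2.0]]
    b = [0.0, 0.0, 4.0]        # left pad at x=0, right pad at x=4
    positions = cg_solve(
        lambda v: [sum(Lij * vj for Lij, vj in zip(row, v)) for row in L],
        b)
    ```

    The optimal positions spread the cells evenly between the pads, at x = 1, 2, 3. For real designs with tens of thousands of cells, the paper replaces (or preconditions) this plain CG iteration with AMG to keep the iteration count independent of problem size.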