Science.gov

Sample records for based fast layout

  1. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempt to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within a single exposure. As a result, MPL techniques such as double patterning lithography (DPL) and triple patterning lithography (TPL) have been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply decomposes polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks information such as the optical source characteristics and the interactions between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm [1] relies on simplified assumptions about the optical simulation model and therefore its usage on real layouts is limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions. That approach [2] also potentially generates too many stitches. In this
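
    To make the rule-based graph-coloring formulation described above concrete, the following minimal sketch builds a conflict graph from polygon centroids closer than a hypothetical dmin and assigns masks with a small backtracking k-coloring. The centroid distance metric, coordinates, and dmin value are illustrative assumptions, not the authors' model-based method.

      # Minimal sketch of rule-based k-coloring layout decomposition (DPL: k=2, TPL: k=3).
      # Polygon centroids, d_min and the centroid distance metric are illustrative only.
      from itertools import combinations
      import math

      def build_conflict_graph(centroids, d_min):
          """Edge between two features whose centroid distance is below d_min."""
          adj = {i: set() for i in range(len(centroids))}
          for i, j in combinations(range(len(centroids)), 2):
              if math.dist(centroids[i], centroids[j]) < d_min:
                  adj[i].add(j)
                  adj[j].add(i)
          return adj

      def color_layout(adj, k):
          """Backtracking k-coloring; returns vertex->mask dict or None if infeasible."""
          order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
          colors = {}
          def assign(idx):
              if idx == len(order):
                  return True
              v = order[idx]
              for c in range(k):
                  if all(colors.get(u) != c for u in adj[v]):
                      colors[v] = c
                      if assign(idx + 1):
                          return True
                      del colors[v]
              return False
          return colors if assign(0) else None

      features = [(0, 0), (50, 0), (25, 40), (120, 0)]   # hypothetical polygon centroids (nm)
      graph = build_conflict_graph(features, d_min=60.0)
      print(color_layout(graph, k=3))                    # a legal mask assignment, e.g. {0: 0, 1: 1, 2: 2, 3: 0}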

  2. Exploration of networks using overview+detail with constraint-based cooperative layout.

    PubMed

    Dwyer, Tim; Marriott, Kim; Schreiber, Falk; Stuckey, Peter; Woodward, Michael; Wybrow, Michael

    2008-01-01

    A standard approach to large network visualization is to provide an overview of the network and a detailed view of a small component of the graph centred around a focal node. The user explores the network by changing the focal node in the detailed view or by changing the level of detail of a node or cluster. For scalability, fast force-based layout algorithms are used for the overview and the detailed view. However, using the same layout algorithm in both views is problematic since layout for the detailed view has different requirements from those of the overview. Here we present a model in which constrained graph layout algorithms are used for layout in the detailed view. This means the detailed view has high-quality layout including sophisticated edge routing and is customisable by the user, who can add placement constraints on the layout. Scalability is still ensured since the slower layout techniques are only applied to the small subgraph shown in the detailed view. The main technical innovations are techniques to ensure that the overview and detailed view remain synchronized, and modifications to constrained graph layout algorithms to support smooth, stable layout. The key innovation supporting stability is a set of new dynamic graph layout algorithms that preserve the topology or structure of the network when the user changes the focus node or the level of detail by in situ semantic zooming. We have built a prototype tool and demonstrate its use in two application domains, UML class diagrams and biological networks.
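
    As a toy illustration of combining force-based layout with a placement constraint (here, simply pinning a focal node), the following sketch runs a basic spring/repulsion iteration. It is not the constrained-layout algorithms of the paper; all force parameters and the tiny example graph are illustrative assumptions.

      # Toy force-directed layout with one placement constraint (a pinned focal node).
      # Illustration only; not the paper's constrained-layout algorithms. Parameters are arbitrary.
      import math, random

      def layout(nodes, edges, pinned, iters=300, k=1.0, step=0.05):
          pos = {v: pinned.get(v, (random.random(), random.random())) for v in nodes}
          for _ in range(iters):
              force = {v: [0.0, 0.0] for v in nodes}
              for u in nodes:                      # pairwise repulsion
                  for v in nodes:
                      if u == v:
                          continue
                      dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                      d = max(math.hypot(dx, dy), 1e-6)
                      force[u][0] += k * k / d * dx / d
                      force[u][1] += k * k / d * dy / d
              for u, v in edges:                   # spring attraction along edges
                  dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                  d = max(math.hypot(dx, dy), 1e-6)
                  f = d * d / k
                  force[u][0] -= f * dx / d
                  force[u][1] -= f * dy / d
                  force[v][0] += f * dx / d
                  force[v][1] += f * dy / d
              for v in nodes:
                  if v in pinned:                  # constraint: the focal node stays put
                      continue
                  pos[v] = (pos[v][0] + step * force[v][0], pos[v][1] + step * force[v][1])
          return pos

      print(layout(nodes=[0, 1, 2, 3], edges=[(0, 1), (1, 2), (2, 3), (3, 0)], pinned={0: (0.0, 0.0)}))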

  3. Issues in Text Design and Layout for Computer Based Communications.

    ERIC Educational Resources Information Center

    Andresen, Lee W.

    1991-01-01

    Discussion of computer-based communications (CBC) focuses on issues involved with screen design and layout for electronic text, based on experiences with electronic messaging, conferencing, and publishing within the Australian Open Learning Information Network (AOLIN). Recommendations for research on design and layout for printed text are also…

  4. Directional 2D functions as models for fast layout pattern transfer verification

    NASA Astrophysics Data System (ADS)

    Torres, J. Andres; Hofmann, Mark; Otto, Oberdan

    2009-03-01

    As advanced manufacturing processes become more stable, the need to adapt new designs to fully utilize the available manufacturing technology becomes a key technological differentiator. However, many times such gains can only be realized and evaluated during full-chip analysis. It has been demonstrated that the most accurate layout verification methods require application of the actual OPC recipes along with most of the mask data preparation that defines the pattern transfer characteristics of the process. Still, this method in many instances is not sufficiently fast to be used in a layout creation environment which undergoes constant updates. An analysis of typical mask data processing shows that the most CPU-intensive computations are the OPC and contour simulation steps needed to perform layout printability checks. Several researchers have tried to reduce the time it takes to compute the OPC mask by introducing matrix convolutions of the layout with empirically calibrated two-dimensional functions. However, most of these approaches do not provide a sufficient speed-up since they only replace the OPC computation and still require a full contour computation. Another alternative is to try to find effective ways of pattern matching those topologies that will exhibit transfer difficulties [4], but such methods lack the ability to be predictive beyond their calibration data. In this paper we present a methodology that includes common resolution enhancement techniques, such as retargeting and sub-resolution assist feature insertion, and which replaces the OPC computation and subsequent contour calculation with an edge bias function based on an empirically calibrated, directional, two-dimensional function. Because the edge bias function does not provide adequate control over the corner locations, a spline-based smoothing process is applied. The outcome is a piecewise-linear curve similar to those obtained by full lithographic simulations. Our

  5. Pitch-based pattern splitting for 1D layout

    NASA Astrophysics Data System (ADS)

    Nakayama, Ryo; Ishii, Hiroyuki; Mikami, Koji; Tsujita, Koichiro; Yaegashi, Hidetami; Oyama, Kenichi; Smayling, Michael C.; Axelrad, Valery

    2015-07-01

    A pattern splitting algorithm for 1D Gridded-Design-Rules layouts (1D layouts) for sub-10 nm node logic devices is presented. It is performed with integer linear programming (ILP) based on the conflict graph created from a grid map for each designated pitch. The relation between the number of patterning steps and the minimum pitch is shown systematically with a sample contact-layer pattern for each node. The results show that the number of patterning steps for 1D layout is smaller than that for conventional 2D layout. Moreover, an experimental result including SMO and a fully integrated process with a hole repair technique is presented for the sample contact-layer pattern, whose pattern density is relatively high among the critical layers (fin, gate, local interconnect, contact, and metal).
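
    For illustration, the sketch below states mask assignment over a conflict graph as a small ILP, which is the general flavor of formulation the abstract mentions; it is not the paper's grid-map/pitch model. It assumes the PuLP package is available, and the feature list, conflict pairs, and mask count are invented for the example.

      # Minimal ILP sketch of mask assignment for conflicting features (not the paper's exact
      # grid-map/pitch formulation). Assumes the PuLP package is installed; data is illustrative.
      import pulp

      features = [0, 1, 2, 3]
      conflicts = [(0, 1), (1, 2), (0, 2), (2, 3)]   # pairs too close to print in one exposure
      n_masks = 3

      prob = pulp.LpProblem("pattern_splitting", pulp.LpMinimize)
      # x[f, m] = 1 if feature f is printed on mask m
      x = {(f, m): pulp.LpVariable(f"x_{f}_{m}", cat="Binary")
           for f in features for m in range(n_masks)}
      # dummy objective (each feature is assigned exactly once anyway);
      # a real formulation would minimize stitches or mask count
      prob += pulp.lpSum(x.values())
      # each feature goes on exactly one mask
      for f in features:
          prob += pulp.lpSum(x[f, m] for m in range(n_masks)) == 1
      # conflicting features may not share a mask
      for a, b in conflicts:
          for m in range(n_masks):
              prob += x[a, m] + x[b, m] <= 1
      prob.solve()
      print({f: next(m for m in range(n_masks) if x[f, m].value() > 0.5) for f in features})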

  6. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.

    PubMed

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-04-17

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in ambient temperature may cause dramatic voltage drifts in the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element and decreases as the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors.
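
    A generic GA skeleton of the kind used for such layout searches is sketched below. The paper couples the GA to ANSYS thermal simulation; here the objective is a stand-in (spread heat sources apart on a grid), and every function, grid size, and GA parameter is an illustrative assumption.

      # Generic GA skeleton for placing heat sources on a PCB grid, with a stand-in objective.
      # The paper evaluates candidates with ANSYS thermal simulation instead; everything here
      # (grid, objective, operators, parameters) is illustrative only.
      import random, math

      GRID, N_PARTS, POP, GENS = 10, 4, 40, 100

      def fitness(layout):
          # surrogate objective: maximize the minimum pairwise distance between heat sources
          return min(math.dist(a, b) for i, a in enumerate(layout) for b in layout[i + 1:])

      def random_layout():
          return random.sample([(x, y) for x in range(GRID) for y in range(GRID)], N_PARTS)

      def crossover(p1, p2):
          child, used = [], set()
          for a, b in zip(p1, p2):
              pick = a if random.random() < 0.5 else b
              if pick in used:   # repair duplicates with a fresh random cell
                  pick = next(c for c in random_layout() if c not in used)
              child.append(pick)
              used.add(pick)
          return child

      def mutate(layout, rate=0.1):
          return [c if random.random() > rate else random.choice(random_layout()) for c in layout]

      pop = [random_layout() for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          elite = pop[: POP // 4]
          pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(POP - len(elite))]
      best = max(pop, key=fitness)
      print(best, round(fitness(best), 2))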

  7. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in ambient temperature may cause dramatic voltage drifts in the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element and decreases as the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500

  8. Evidence-based ergonomics. A comparison of Japanese and American office layouts.

    PubMed

    Noro, Kageyu; Fujimaki, Goroh; Kishi, Shinsuke

    2003-01-01

    There is a variety of alternatives in office layouts. Yet the theoretical basis and criteria for predicting how well these layouts accommodate employees are poorly understood. The objective of this study was to evaluate criteria for selecting office layouts. Intensive computer workers worked in simulated office layouts in a controlled experimental laboratory. Eye movement measures indicate that knowledge work requires both concentration and interaction. Findings pointed to one layout as providing optimum balance between these 2 requirements. Recommendations for establishing a theoretical basis and design criteria for selecting office layouts based on work style are suggested.

  9. [Land layout for lake tourism based on ecological restraint].

    PubMed

    Wang, Jian-Ying; Li, Jiang-Feng; Zou, Li-Lin; Liu, Shi-Bin

    2012-10-01

    To avoid the decrease and deterioration of lake wetlands and other ecological issues such as lake water pollution caused by the unreasonable exploitation of lake tourism, a land layout for the tourism development of Liangzi Lake giving priority to the ecological security pattern was proposed, based on the minimal cumulative resistance model and GIS technology. The study area was divided into four ecological function zones, i.e., core protection zone, ecological buffer zone, ecotone zone, and human activity zone. The core protection zone was the landscape region of the ecological source. In the protection zone, no new tourism land should be added, and some of the existing basic tourism facilities should be removed while others should be upgraded. The ecological buffer zone was the landscape region with resistance values ranging from 0 to 4562. In the buffer zone, expansion of tourism land should be forbidden, the existing tourism land should be downsized, and human activities should be isolated from the ecological source by converting the human environment to the natural environment as far as possible. The ecotone zone was the landscape region with resistance values ranging from 4562 to 30797. In this zone, the existing tourism land is distributed in patches, tourism land could be expanded moderately, and lake forestry ecological tourism should be widely developed. The human activity zone was the landscape region with resistance values ranging from 30797 to 97334, and would be the key area for the land layout of lake tourism. It is suggested that a land layout for tourism giving priority to the landscape ecological security pattern would be the best choice for sustainable development of the lake.

  10. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency.

  11. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency. PMID:26448740

  12. Native conflict awared layout decomposition in triple patterning lithography using bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography has become a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm. In TP, a layout is decomposed into three masks, each followed by its own exposure and etch/freeze process. Previous works mostly focus on layout decomposition that minimizes conflicts and stitches simultaneously. However, since any native conflict results in layout re-design/modification and re-running the time-consuming decomposition, an effective method that can detect native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, we match each conflict graph against a prebuilt colored library, so that NCs can be located and highlighted quickly.
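
    To illustrate what a native conflict is (without reproducing the bin-based library matching itself), the sketch below brute-force checks whether a small bin's conflict graph admits any legal 3-coloring; a bin with no such coloring contains an NC. The example graphs are illustrative.

      # Brute-force check of 3-colorability for a small conflict-graph bin. A bin whose graph
      # admits no legal 3-coloring contains a native conflict. Illustration of the NC concept
      # only, not the paper's bin-based library-matching method.
      from itertools import product

      def has_native_conflict(n_vertices, edges, k=3):
          for coloring in product(range(k), repeat=n_vertices):
              if all(coloring[u] != coloring[v] for u, v in edges):
                  return False        # a legal decomposition exists
          return True                 # no k-coloring exists: native conflict

      # K4 (four mutually conflicting features) is the classic TP native conflict
      print(has_native_conflict(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # True
      print(has_native_conflict(3, [(0, 1), (1, 2), (0, 2)]))                           # False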

  13. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

    The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck preventing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology and its performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with very few fine-model simulations.
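
    For reference, the Jensen (Park) model mentioned above reduces to a simple algebraic velocity deficit; the sketch below evaluates it for a few turbine spacings. The thrust coefficient, rotor radius, decay coefficient, and wind speed are illustrative values, not the paper's setup.

      # Standard Jensen (Park) single-wake centreline deficit, the kind of model pair the paper
      # builds from two decay coefficients. Numerical values below are illustrative only.
      import math

      def jensen_deficit(x, ct=0.8, rotor_radius=40.0, k=0.05):
          """Fractional velocity deficit at downstream distance x (m) on the wake centreline."""
          if x <= 0:
              return 0.0
          return (1.0 - math.sqrt(1.0 - ct)) * (rotor_radius / (rotor_radius + k * x)) ** 2

      u_inf = 8.0                                   # free-stream wind speed, m/s
      for spacing in (3, 5, 7, 9):                  # spacing in rotor diameters
          u = u_inf * (1.0 - jensen_deficit(spacing * 2 * 40.0))
          print(f"{spacing}D spacing: downstream wind speed ~ {u:.2f} m/s")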

  14. Graph-based layout analysis for PDF documents

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao

    2013-03-01

    To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digital-born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be exploited straightforwardly. To integrate traditional image-based document analysis with the inherent metadata provided by the PDF parser, the page primitives, including text, image and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method operates at the superpixel representation level, and page text elements corresponding to vertices are used to construct an undirected graph. Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
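
    A toy version of the top-down step described above is sketched here: Kruskal's algorithm builds a spanning tree over text-element centres, and edges longer than a threshold are cut so that the remaining components correspond to text blocks. The points and the cut distance are illustrative assumptions.

      # Toy version of the top-down step: Kruskal-style spanning tree over text-element centres,
      # cutting edges longer than a threshold to separate text blocks. Data and threshold are
      # illustrative.
      import math
      from itertools import combinations

      def blocks(points, cut_dist):
          parent = list(range(len(points)))
          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]   # path compression
                  i = parent[i]
              return i
          edges = sorted((math.dist(points[i], points[j]), i, j)
                         for i, j in combinations(range(len(points)), 2))
          for d, i, j in edges:                   # keep only tree edges below the cut distance
              if d < cut_dist and find(i) != find(j):
                  parent[find(i)] = find(j)
          groups = {}
          for i in range(len(points)):
              groups.setdefault(find(i), []).append(i)
          return list(groups.values())

      chars = [(0, 0), (10, 0), (20, 1), (5, 80), (15, 81)]   # two visual text blocks
      print(blocks(chars, cut_dist=30))                        # [[0, 1, 2], [3, 4]]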

  15. Optimization of Orchestral Layouts Based on Instrument Directivity Patterns

    NASA Astrophysics Data System (ADS)

    Stroud, Nathan Paul

    The experience of hearing an exceptional symphony orchestra perform in an excellent concert hall can be profound and moving, causing a level of excitement not often reached for listeners. Romantic period style orchestral music, recognized for validating the use of intense emotion for aesthetic pleasure, was the last significant development in the history of the orchestra. In an age where orchestral popularity is waning, the possibility of evolving the orchestral sound in our modern era exists through the combination of our current understanding of instrument directivity patterns and their interaction with architectural acoustics. With the aid of wave field synthesis (WFS), newly proposed variations on orchestral layouts are tested virtually using a 64-channel WFS array. Each layout is objectively and subjectively compared for determination of which layout could optimize the sound of the orchestra and revitalize the excitement of the performance.

  16. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  17. The 3Ls of Introductory Web-Based Instructional Design: Linking, Layout, and Learner Support.

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.

    This paper presents guidelines for World Wide Web-based instructional design, based on the 3Ls (i.e., linking, layout, and learner support). The first section, focusing on macro level design, discusses nodes and links, including how nodes work, determining nodes, node size, presentation format, characteristics of links, and kinds of links. The…

  18. Model-based strategy for cell culture seed train layout verified at lab scale.

    PubMed

    Kern, Simon; Platas-Barradas, Oscar; Pörtner, Ralf; Frahm, Björn

    2016-08-01

    Cell culture seed trains, i.e., the generation of a sufficient number of viable cells for inoculation of the production-scale bioreactor starting from incubator scale, are time- and cost-intensive. Accordingly, a seed train offers potential for optimization regarding its layout and the corresponding procedures. A tool has been developed to determine the optimal points in time for cell passaging from one scale into the next, and it has been applied to two different cell lines at lab scale, AGE1.HN AAT and CHO-K1. For evaluation, the experimental seed train realization was compared with its model-based layout. In the case of the AGE1.HN AAT cell line, the results were also compared with the formerly manually designed seed train. The tool provides the same seed train layout based on the data of only two batches.
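
    The underlying idea can be sketched with a back-of-the-envelope calculation: under exponential growth, the passaging time at each scale is the time needed to grow from the seeding density to the passaging density. The growth rate, densities, and scale-up volumes below are illustrative assumptions, not the authors' model or data.

      # Back-of-the-envelope sketch of the seed-train idea: with exponential growth, passaging
      # happens when the culture reaches the density needed to seed the next scale. Growth rate,
      # densities and volumes are illustrative only.
      import math

      mu = 0.03          # specific growth rate, 1/h (illustrative)
      x_seed = 0.3e6     # seeding density after each passage, cells/mL
      x_passage = 2.0e6  # density at which cells are passaged, cells/mL
      scales_mL = [20, 100, 500, 2500, 12500]   # hypothetical scale-up volumes

      t_per_passage = math.log(x_passage / x_seed) / mu
      total = 0.0
      for v_from, v_to in zip(scales_mL, scales_mL[1:]):
          total += t_per_passage
          print(f"{v_from:>6} mL -> {v_to:>6} mL: passage after ~{t_per_passage:.0f} h")
      print(f"total seed-train duration ~ {total:.0f} h")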

  19. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    This study examines a three-dimensional visualization approach, with emphasis on improving genetic algorithms for the layout design of a standard, discrete shipbuilding workshop. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an ideal equipment layout and a mathematical model whose objective is to minimize the total travel distance between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm and yield the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using the analytic hierarchy process (AHP). The optimized discrete production workshop thus provides a practical reference for the optimization and layout of digitalized production workshops.

  20. Automatic indexing of scanned documents: a layout-based approach

    NASA Astrophysics Data System (ADS)

    Esser, Daniel; Schuster, Daniel; Muthmann, Klemens; Berger, Michael; Schill, Alexander

    2012-01-01

    Archiving official written documents such as invoices, reminders and account statements in both business and private contexts is becoming more and more important. Creating appropriate index entries for document archives, such as the sender's name, creation date or document number, is tedious manual work. We present a novel approach to automatic indexing of documents based on generic positional extraction of index terms. For this purpose we apply knowledge of document templates stored in a common full-text search index to find index positions that were successfully extracted in the past.

  1. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA.

    PubMed

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to take into account the psychological and cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and according to the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as the constraints in layout design. Next, the forms of the fitness function, pheromone, and heuristic information for cabin layout optimization were studied, and a layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method.

  2. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA

    PubMed Central

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to take into account the psychological and cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and according to the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as the constraints in layout design. Next, the forms of the fitness function, pheromone, and heuristic information for cabin layout optimization were studied, and a layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method. PMID:26884745

  3. Towards a more accurate extraction of the SPICE netlist from MAGIC based layouts

    SciTech Connect

    Geronimo, G.D.

    1998-08-01

    The extraction of the SPICE netlist from MAGIC-based layouts is investigated. It is assumed that the layout is fully coherent with the corresponding mask representation. The extraction process consists of three steps: (1) extraction of the .EXT file from the layout, through the MAGIC command extract; (2) extraction of the netlist from the .EXT file through the ext2spice extractor; and (3) correction of the netlist through the ext2spice.corr program. Each of these steps introduces some approximations, most of which can be optimized, and some errors, most of which can be corrected. The aim of this work is to describe each step, the approximations and errors introduced in each step, and the corresponding optimizations and corrections to be made in order to improve the accuracy of the extraction. The HP AMOS14TB 0.5 µm process with linear capacitor and silicide block options and the corresponding SCN3MLC_SUBM.30.tech27 technology file are used in the following examples.

  4. Virtual reality based support system for layout planning and programming of an industrial robotic work cell.

    PubMed

    Yap, Hwa Jen; Taha, Zahari; Dawal, Siti Zawiah Md; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated in current industrial and market demands. In this research, virtual reality (VR) technology is used to improve human-robot interface, whereby complicated commands or programming knowledge is not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes, which are VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, whereby VR-RoT is developed to overcome safety issues and lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user is able to attempt numerous times to attain an optimum solution. A case study is conducted in the Robotics Laboratory to assemble an electronics casing and it is found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual reality based solution approach can be implemented in an industrial robotic work cell.

  5. Full-Chip Layout Optimization for Process Margin Enhancement Using Model-Based Hotspot Fixing System

    NASA Astrophysics Data System (ADS)

    Kobayashi, Sachiko; Kyoh, Suigen; Kotani, Toshiya; Takekawa, Yoko; Inoue, Soichi; Nakamae, Koji

    2010-06-01

    As the design rules of integrated circuits shrink rapidly, it is necessary to use low-k1 lithography technologies. With low-k1 lithography, even if aggressive optical proximity correction is adopted, many sites become marginless spots, known as “hotspots”. To address this problem, hotspot fixers (HSFs) in the design-for-manufacturability flow have been studied. In our previous work, we showed the feasibility of layout modification using a simple line/space sizing rule for metal layers in 65-nm-node logic devices. However, in view of continuing design-rule shrinkage and design complication, a more flexible modification method has become necessary to fix various types of hotspots. In this work, we have developed a brute-force model-based HSF. To further reduce the processing time, a hybrid flow of rule- and model-based HSFs is studied, and its feasibility is evaluated by applying it to full-chip layout modification of a logic test chip.

  6. Virtual Reality Based Support System for Layout Planning and Programming of an Industrial Robotic Work Cell

    PubMed Central

    Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated in current industrial and market demands. In this research, virtual reality (VR) technology is used to improve human-robot interface, whereby complicated commands or programming knowledge is not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes, which are VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, whereby VR-RoT is developed to overcome safety issues and lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user is able to attempt numerous times to attain an optimum solution. A case study is conducted in the Robotics Laboratory to assemble an electronics casing and it is found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual reality based solution approach can be implemented in an industrial robotic work cell. PMID:25360663

  7. High-Quality Ultra-Compact Grid Layout of Grouped Networks.

    PubMed

    Yoghourdjian, Vahan; Dwyer, Tim; Gange, Graeme; Kieffer, Steve; Klein, Karsten; Marriott, Kim

    2016-01-01

    Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes we found these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks.

  8. HOLA: Human-like Orthogonal Network Layout.

    PubMed

    Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2016-01-01

    Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.

  9. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    The developments in computer technology in the last decade have changed the ways computers are used. Emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially of their user interfaces, a challenging and time-consuming task. We propose a model-based approach that allows the user interface to be adapted at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits this description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.

  10. Fourier-based layout for grating function structure in spatial filtering velocimetry

    NASA Astrophysics Data System (ADS)

    Schaeper, M.; Damaschke, N.

    2017-04-01

    Optical spatial filtering velocimetry (SFV) has been used for several decades for velocity measurements. Since the 1990s, charge-coupled device (CCD) line sensors have been used for the realization of spatial filtering systems by the inherent implementation of grating functions using a specialized clock regime. Another approach is the realization of optical SFV systems by utilizing array detectors (CCD or CMOS) with software-implemented grating functions, especially for two-dimensional velocity measurements. Choosing a suitable grating function for the observed scene can be an obstacle when using SFV, and relies on the experience of the user. With this in mind, this contribution presents an overview of how to assemble an optical spatial filtering system. After a general description of signal generation in spatial filtering systems, a straightforward approach to identifying matching harmonic grating functions by using Fourier analysis is presented. This approach has particular advantages for observed scenes with a periodically structured pattern, which are problematic when SFV is used with a fixed grating function. Matching periods of harmonic grating functions can be found as peaks in the spectral density distribution of the imaged scene. Once a matching grating function has been found, the signal processing can be performed with SFV, which is simpler than calculating the cross-correlation of full frames and is suitable for real-time application. Criteria for the layout of an array-detector-based spatial filtering velocimeter are then discussed.
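
    The peak-finding step described above can be sketched in a few lines: the dominant peak in the spectral density of a (synthetic) periodically structured scene yields the spatial frequency to which a harmonic grating function would be matched. The signal, noise level, and pixel count are illustrative; numpy is assumed to be available.

      # Sketch of Fourier-based selection of a matching grating period: the dominant peak in the
      # spectral density of the imaged periodic scene gives the grating spatial frequency.
      # The synthetic signal below is illustrative only.
      import numpy as np

      n_pixels, true_period = 1024, 32.0                       # pixels per structure period
      x = np.arange(n_pixels)
      scene = 1.0 + 0.5 * np.cos(2 * np.pi * x / true_period)  # periodically structured scene
      scene += 0.1 * np.random.default_rng(0).normal(size=n_pixels)

      spectrum = np.abs(np.fft.rfft(scene - scene.mean())) ** 2
      freqs = np.fft.rfftfreq(n_pixels, d=1.0)                 # cycles per pixel
      peak = freqs[np.argmax(spectrum)]
      print(f"matched grating period ~ {1.0 / peak:.1f} pixels")  # ~32 pixels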

  11. WFST-based ground truth alignment for difficult historical documents with text modification and layout variations

    NASA Astrophysics Data System (ADS)

    Al Azawi, Mayce; Liwicki, Marcus; Breuel, Thomas M.

    2013-01-01

    This work proposes several approaches that can be used for generating correspondences between real scanned books and their transcriptions, which may contain modifications and layout variations, also taking OCR errors into account. Our approaches for the alignment between the manuscript and the transcription are based on weighted finite state transducers (WFSTs). In particular, we propose adapted WFSTs to represent the transcription to be aligned with the OCR lattices. The character-level alignment has edit rules to allow edit operations (insertion, deletion, substitution). These edit operations allow the transcription model to deal with OCR segmentation and recognition errors, and also with the task of aligning with different text editions. We implemented the alignment model with a hyphenation model, so it can adapt the non-hyphenated transcription. Our models also work with Fraktur ligatures, which are typically found in historical Fraktur documents. We evaluated our approach on Fraktur documents from the "Wanderungen durch die Mark Brandenburg" volumes (1862-1889) and observed the performance of these models under OCR errors. We compare the performance of our model for four different scenarios: having no information about the correspondence at the word (i), line (ii), sentence (iii) or page (iv) level.
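
    The edit operations named above (insertion, deletion, substitution) can be illustrated with a plain dynamic-programming alignment and backtrace; the paper realizes these rules inside WFSTs, so the sketch below is only a simplified stand-in with invented example strings.

      # Dynamic-programming sketch of character-level alignment with edit operations between an
      # OCR line and its transcription. The paper encodes these edit rules in WFSTs; this plain
      # Levenshtein backtrace only illustrates the idea. Example strings are invented.
      def align(ocr, transcript):
          n, m = len(ocr), len(transcript)
          dp = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(n + 1):
              dp[i][0] = i
          for j in range(m + 1):
              dp[0][j] = j
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  dp[i][j] = min(dp[i - 1][j] + 1,                       # delete OCR char
                                 dp[i][j - 1] + 1,                       # insert transcript char
                                 dp[i - 1][j - 1] + (ocr[i - 1] != transcript[j - 1]))  # match/sub
          ops, i, j = [], n, m
          while i > 0 or j > 0:
              if i and j and dp[i][j] == dp[i - 1][j - 1] + (ocr[i - 1] != transcript[j - 1]):
                  ops.append(("match" if ocr[i - 1] == transcript[j - 1] else "sub",
                              ocr[i - 1], transcript[j - 1]))
                  i, j = i - 1, j - 1
              elif i and dp[i][j] == dp[i - 1][j] + 1:
                  ops.append(("del", ocr[i - 1], ""))
                  i -= 1
              else:
                  ops.append(("ins", "", transcript[j - 1]))
                  j -= 1
          return ops[::-1]

      print(align("Wanderungen durh die Mork", "Wanderungen durch die Mark"))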

  12. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. However, what is lacking in previous studies is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, although many multi-floor plants have been constructed over the last decade. Therefore, an algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is hard to solve due to its complex nonlinear constraints, which makes it impossible to use conventional derivative-based MINLP solvers. Therefore, the Particle Swarm Optimization (PSO) technique is employed. An ethylene oxide plant is used to illustrate the efficacy of the approach.
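
    A generic PSO loop of the kind mentioned above is sketched here, minimizing a toy pipe-length cost with a penalty for violating a safety distance. This is not the paper's MINLP model; the cost function, safety distance, and PSO parameters are illustrative assumptions.

      # Generic particle swarm optimization sketch: place two pieces of equipment relative to a
      # fixed source so that pipe length is minimized while a safety distance is respected via a
      # penalty term. Not the paper's MINLP formulation; all numbers are illustrative.
      import random, math

      SAFETY, SOURCE = 5.0, (0.0, 0.0)

      def cost(pos):
          x1, y1, x2, y2 = pos
          pipe = math.dist(SOURCE, (x1, y1)) + math.dist((x1, y1), (x2, y2))
          violation = max(0.0, SAFETY - math.dist((x1, y1), (x2, y2)))
          return pipe + 100.0 * violation          # penalty for violating the safety distance

      dim, n_particles, iters = 4, 30, 200
      pos = [[random.uniform(-20, 20) for _ in range(dim)] for _ in range(n_particles)]
      vel = [[0.0] * dim for _ in range(n_particles)]
      pbest = [p[:] for p in pos]
      gbest = min(pbest, key=cost)
      for _ in range(iters):
          for i in range(n_particles):
              for d in range(dim):
                  r1, r2 = random.random(), random.random()
                  vel[i][d] = (0.7 * vel[i][d] + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                               + 1.5 * r2 * (gbest[d] - pos[i][d]))
                  pos[i][d] += vel[i][d]
              if cost(pos[i]) < cost(pbest[i]):
                  pbest[i] = pos[i][:]
          gbest = min(pbest, key=cost)
      print([round(v, 2) for v in gbest], round(cost(gbest), 2))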

  13. Intelligent Graph Layout Using Many Users' Input.

    PubMed

    Yuan, Xiaoru; Che, Limei; Hu, Yifan; Zhang, Xin

    2012-12-01

    In this paper, we propose a new strategy for graph drawing utilizing layouts of many subgraphs supplied by a large group of people in a crowdsourcing manner. We developed an algorithm based on Laplacian constrained distance embedding to merge subgraphs submitted by different users, while attempting to maintain the topological information of the individual input layouts. To facilitate the collection of layouts from many people, a lightweight interactive system has been designed to enable convenient dynamic viewing, modification and traversal between layouts. Compared with other existing graph layout algorithms, our approach can achieve more aesthetic and meaningful layouts with high user preference.

  14. Maximization of the annual energy production of wind power plants by optimization of layout and yaw-based wake control: Maximization of wind plant AEP by optimization of layout and wake control

    SciTech Connect

    Gebraad, Pieter; Thomas, Jared J.; Ning, Andrew; Fleming, Paul; Dykes, Katherine

    2016-05-24

    This paper presents a wind plant modeling and optimization tool that enables the maximization of wind plant annual energy production (AEP) using yaw-based wake steering control and layout changes. The tool is an extension of a wake engineering model describing the steady-state effects of yaw on wake velocity profiles and power productions of wind turbines in a wind plant. To make predictions of a wind plant's AEP, necessary extensions of the original wake model include coupling it with a detailed rotor model and a control policy for turbine blade pitch and rotor speed. This enables the prediction of power production with wake effects throughout a range of wind speeds. We use the tool to perform an example optimization study on a wind plant based on the Princess Amalia Wind Park. In this case study, combined optimization of layout and wake steering control increases AEP by 5%. The power gains from wake steering control are highest for region 1.5 inflow wind speeds, and they continue to be present to some extent for the above-rated inflow wind speeds. The results show that layout optimization and wake steering are complementary because significant AEP improvements can be achieved with wake steering in a wind plant layout that is already optimized to reduce wake losses.

  15. Fast dual graph-based hotspot detection

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu

    2006-10-01

    As advanced technologies in wafer manufacturing push patterning processes toward lower-k1 subwavelength printing, lithography for mass production potentially suffers from decreased patterning fidelity. This results in generation of many hotspots, which are actual device patterns with relatively large CD and image errors with respect to on-wafer targets. Hotspots can be formed under a variety of conditions such as the original design being unfriendly to the RET that is applied, unanticipated pattern combinations in rule-based OPC, or inaccuracies in model-based OPC. When these hotspots fall on locations that are critical to the electrical performance of a device, device performance and parametric yield can be significantly degraded. Previous rule-based hotspot detection methods suffer from long runtimes for complicated patterns. Also, the model generation process that captures process variation within simulation-based approaches brings significant overheads in terms of validation, measurement and parameter calibration. In this paper, we first describe a novel detection algorithm for hotspots induced by lithographic uncertainty. Our goal is to rapidly detect all lithographic hotspots without significant accuracy degradation. In other words, we propose a filtering method: as long as there are no "false negatives", i.e., we successfully have a superset of actual hotspots, then our method can dramatically reduce the layout area for golden hotspot analysis. The first step of our hotspot detection algorithm is to build a layout graph which reflects pattern-related CD variation. Given a layout L, the layout graph G = (V, Ec ∪ Ep) consists of nodes V, corner edges Ec and proximity edges Ep. A face in the layout graph includes several close features and the edges between them. Edge weight can be calculated from a traditional 2-D model or a lookup table. We then apply a three-level hotspot detection: (1) edge-level detection finds the hotspot caused by two close

  16. Rapid graph layout using space filling curves.

    PubMed

    Muelder, Chris; Ma, Kwan-Liu

    2008-01-01

    Network data frequently arises in a wide variety of fields, and node-link diagrams are a very natural and intuitive representation of such data. In order for a node-link diagram to be effective, the nodes must be arranged well on the screen. While many graph layout algorithms exist for this purpose, they often have limitations such as high computational complexity or node colocation. This paper proposes a new approach to graph layout through the use of space filling curves which is very fast and guarantees that there will be no nodes that are colocated. The resulting layout is also aesthetic and satisfies several criteria for graph layout effectiveness.
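
    The core idea can be illustrated by assigning nodes, taken in some linear order, to successive cells along a Hilbert curve, which guarantees that no two nodes are colocated. The BFS ordering and grid size below are illustrative assumptions and not the paper's exact ordering scheme.

      # Sketch of space-filling-curve placement: nodes are mapped, in BFS order, to successive
      # cells along a Hilbert curve, so no two nodes share a position. The ordering and grid
      # size are illustrative, not the paper's exact method.
      from collections import deque

      def hilbert_d2xy(order, d):
          """Convert distance d along a Hilbert curve of side 2**order into (x, y)."""
          x = y = 0
          s, t = 1, d
          while s < 2 ** order:
              rx = 1 & (t // 2)
              ry = 1 & (t ^ rx)
              if ry == 0:                      # rotate quadrant
                  if rx == 1:
                      x, y = s - 1 - x, s - 1 - y
                  x, y = y, x
              x, y = x + s * rx, y + s * ry
              t //= 4
              s *= 2
          return x, y

      def sfc_layout(adj, start, order=4):
          seen, queue, rank = {start}, deque([start]), {}
          while queue:                         # BFS gives one possible linear node order
              v = queue.popleft()
              rank[v] = len(rank)
              for u in adj[v]:
                  if u not in seen:
                      seen.add(u)
                      queue.append(u)
          return {v: hilbert_d2xy(order, r) for v, r in rank.items()}

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
      print(sfc_layout(adj, start=0))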

  17. TopoLayout: multilevel graph layout by topological features.

    PubMed

    Archambault, Daniel; Munzner, Tamara; Auber, David

    2007-01-01

    We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depend on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. TopoLayout frequently improves the results in terms of speed and visual quality on these data sets.

  18. PXIE Optics and Layout

    SciTech Connect

    Lebedev, V.A.; Nagaitsev, S.; Ostiguy, J.-F.; Shemyakin, A.V.; Shteynas, B.G.; Solyak, N.; /Fermilab

    2012-05-01

    The Project X Injector Experiment (PXIE) will serve as a prototype for the Project X front end. The aim is to validate the Project-X design and to decrease technical risks mainly related to the front end. The paper discusses the main requirements and constraints motivating the facility layout and optics. Final adjustments to the Project X front end design, if needed, will be based on operational experience gained with PXIE.

  19. Discrete tuning concept for fiber-integrated lasers based on tailored FBG arrays and a theta cavity layout.

    PubMed

    Tiess, Tobias; Becker, Martin; Rothhardt, Manfred; Bartelt, Hartmut; Jäger, Matthias

    2017-03-15

    We demonstrate a novel tuning concept for pulsed fiber-integrated lasers with a fiber Bragg grating (FBG) array as a discrete and tailored spectral filter, as well as a modified laser design. Based on a theta cavity layout, the structural delay lines originating from the FBG array are balanced, enabling a constant repetition rate and stable pulse properties over the full tuning range. The emission wavelength is electrically tuned with respect to the filter properties based on an adapted temporal gating scheme using an acousto-optic modulator. This concept has been investigated with an Yb-doped fiber laser, demonstrating excellent emission properties with high signal contrast (>35 dB) and narrow linewidth (<150 pm) over a tuning range of 25 nm.

  20. ESPRESSO instrument control electronics: a PLC based distributed layout for a second generation instrument at ESO VLT

    NASA Astrophysics Data System (ADS)

    Baldini, V.; Cirami, R.; Coretti, I.; Cristiani, S.; Di Marcantonio, P.; Mannetta, M.; Santin, P.; Mégevand, D.; Zerbi, F.

    2014-07-01

    ESPRESSO is an ultra-stable fiber-fed spectrograph designed to incoherently combine the light coming from up to 4 Unit Telescopes of the ESO VLT. From the Nasmyth focus of each telescope the light is fed along an optical path by the Coudé Train subsystems to the Front End Unit placed in the Combined Coudé Laboratory. The Front End is composed of one arm for each telescope and its task is to convey the incoming light, after a calibration process, into the spectrograph fibers. To perform these operations a large number of functions are foreseen, such as motorized stages, lamps, and digital and analog sensors that, coupled with dedicated Technical CCDs (two per arm), allow the incoming beam to be stabilized to the level needed to meet the ESPRESSO scientific requirements. The goal of the Instrument Control Electronics is to properly control all the functions in the Combined Coudé Laboratory and the spectrograph itself. It is fully based on a distributed PLC architecture, thereby abandoning the VME-based technology previously adopted for ESO VLT instruments. In this paper we describe the ESPRESSO Instrument Control Electronics architecture, focusing on the distributed layout and its interfaces with the other ESPRESSO subsystems.

  1. Feasibility study, software design, layout and simulation of a two-dimensional fast Fourier transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin; Chen, Wei

    1990-01-01

    The NASA-Cornell Univ.-Worcester Polytechnic Institute Fast Fourier Transform (FFT) chip, based on the architecture for systolic FFT computation presented by Boriakoff, is implemented in an operating device design. The kernel of the system, a systolic inner product floating point processor, was designed to be assembled into a systolic network that takes incoming data streams in pipeline fashion and provides FFT output at the same rate, word by word. It was thoroughly simulated for proper operation and passed a comprehensive set of tests showing no operational errors. The black box specifications of the chip, which conform to the initial requirements of the design as specified by NASA, are given. The five subcells are described and their high-level functional descriptions, logic diagrams, and simulation results are presented. Some modifications of the Read Only Memory (ROM) design were made, since some errors were found in it. Because a four-stage pipeline structure was used, simulating such a structure is more difficult than simulating an ordinary structure. Simulation methods are discussed. Chip signal protocols and the chip pinout are explained.

  2. Staking Terraces Online: A Terrace Layout Program

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Terrace construction in Missouri exceeded 3 million feet at a cost of over $8 million in 2008. Up to 50 % of the total construction and design time is spent on the terrace layout itself. A web-based computer program, MOTERR, has been developed to design terrace layouts. The program utilizes digital ...

  3. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
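
    One way to realize the combinatorial coding idea mentioned above is enumerative (rank-based) coding of a fixed-size binary block: transmit the count of ones plus the block's lexicographic rank among all blocks with that count. The sketch below illustrates only this generic idea; C4's actual coder details may differ.

      # Minimal sketch of enumerative (rank-based) coding of a fixed-size binary block.
      # Illustrates the combinatorial-coding idea only; C4's actual coder details may differ.
      from math import comb

      def encode(bits):
          n, k = len(bits), sum(bits)
          rank, ones_left = 0, k
          for i, b in enumerate(bits):
              if b == 1:
                  rank += comb(n - i - 1, ones_left)   # blocks that put a 0 here instead
                  ones_left -= 1
          return n, k, rank                            # rank fits in ceil(log2(C(n, k))) bits

      def decode(n, k, rank):
          bits, ones_left = [], k
          for i in range(n):
              c = comb(n - i - 1, ones_left)
              if rank >= c:
                  bits.append(1)
                  rank -= c
                  ones_left -= 1
              else:
                  bits.append(0)
          return bits

      block = [0, 1, 1, 0, 0, 0, 1, 0]
      print(encode(block), decode(*encode(block)) == block)   # (8, 3, 31) True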

  4. Heuristics for connectivity-based brain parcellation of SMA/pre-SMA through force-directed graph layout.

    PubMed

    Crippa, Alessandro; Cerliani, Leonardo; Nanetti, Luca; Roerdink, Jos B T M

    2011-02-01

    We propose the use of force-directed graph layout as an explorative tool for connectivity-based brain parcellation studies. The method can be used as a heuristic to find the number of clusters intrinsically present in the data (if any) and to investigate their organisation. It provides an intuitive representation of the structure of the data and facilitates interactive exploration of properties of single seed voxels as well as relations among (groups of) voxels. We validate the method on synthetic data sets and we investigate the changes in connectivity in the supplementary motor cortex, a brain region whose parcellation has been previously investigated via connectivity studies. This region is supposed to present two easily distinguishable connectivity patterns, putatively denoted by SMA (supplementary motor area) and pre-SMA. Our method provides insights with respect to the connectivity patterns of the premotor cortex. These present a substantial variation among subjects, and their subdivision into two well-separated clusters is not always straightforward.
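
    As a rough illustration of the general force-directed principle referred to above (not the authors' implementation), the sketch below lays out the nodes of a small connectivity graph by balancing pairwise repulsion against attraction along edges; the adjacency matrix standing in for voxel-to-voxel connectivity is a made-up example.

```python
import numpy as np

def force_directed_layout(adj, n_iter=300, step=0.05, seed=0):
    """Toy force-directed layout: repulsion between every pair of nodes and
    attraction along (weighted) edges, in the spirit of Fruchterman-Reingold."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    pos = rng.normal(scale=0.1, size=(n, 2))
    k = 1.0 / np.sqrt(n)                               # ideal edge length
    for _ in range(n_iter):
        delta = pos[:, None, :] - pos[None, :, :]      # pairwise displacement vectors
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        repulse = (k ** 2 / dist ** 2)[..., None] * delta                    # push all pairs apart
        attract = (adj * dist / k)[..., None] * (-delta / dist[..., None])   # pull connected pairs together
        disp = (repulse + attract).sum(axis=1)
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += step * disp / length * np.minimum(length, 1.0)                # capped move per step
    return pos

# Two densely connected groups of "seed voxels", weakly linked to each other.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
A[0, 4] = A[4, 0] = 1
np.fill_diagonal(A, 0)
print(force_directed_layout(A).round(2))   # the two groups settle into two separated clusters
```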

  5. A Rule Based Approach to ISS Interior Volume Control and Layout

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

    Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  6. A comparison of user-generated and automatic graph layouts.

    PubMed

    Dwyer, Tim; Lee, Bongshin; Fisher, Danyel; Quinn, Kori Inkpen; Isenberg, Petra; Robertson, George; North, Chris

    2009-01-01

    The research presented in this paper compares user-generated and automatic graph layouts. Following the methods suggested by van Ham et al. (2008), a group of users generated graph layouts using both multi-touch interaction on a tabletop display and mouse interaction on a desktop computer. Users were asked to optimize their layout for aesthetics and analytical tasks with a social network. We discuss characteristics of the user-generated layouts and interaction methods employed by users in this process. We then report on a web-based study to compare these layouts with the output of popular automatic layout algorithms. Our results demonstrate that the best of the user-generated layouts performed as well as or better than the physics-based layout. Orthogonal and circular automatic layouts were found to be considerably less effective than either the physics-based layout or the best of the user-generated layouts. We highlight several attributes of the various layouts that led to high accuracy and improved task completion time, as well as aspects in which traditional automatic layout methods were unsuccessful for our tasks.

  7. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    PubMed

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task, because there is no direct, explicit expression relating the fixture locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.
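
    The following minimal sketch shows the general RBF-regression idea the abstract relies on: Gaussian basis functions centred on sampled layouts and output weights fitted by least squares. The training data here are synthetic stand-ins for the uniform-sampling and finite-element results described in the paper.

```python
import numpy as np

def rbf_design_matrix(X, centers, gamma):
    """Gaussian RBF features: phi_ij = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, centers, gamma, ridge=1e-8):
    """Least-squares fit of the output-layer weights (hidden centers are fixed)."""
    Phi = rbf_design_matrix(X, centers, gamma)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict_rbf(X, centers, gamma, w):
    return rbf_design_matrix(X, centers, gamma) @ w

# Hypothetical example: 2-D "locator layout" parameters -> scalar deformation measure.
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(40, 2))             # uniform sampling of candidate layouts
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1]   # stand-in for FE simulation output
centers = X_train[::4]                                # a subset of samples used as RBF centers
w = fit_rbf(X_train, y_train, centers, gamma=10.0)
print(predict_rbf(np.array([[0.5, 0.5]]), centers, gamma=10.0, w=w))
```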

  8. Placement with Symmetry Constraints for Analog IC Layout Design Based on Tree Representation

    NASA Astrophysics Data System (ADS)

    Hirakawa, Natsumi; Fujiyoshi, Kunihiro

    Symmetry constraints require that given cells be placed symmetrically in analog IC design. We use an O-tree to represent placements and propose a decoding algorithm that obtains one of the minimum placements satisfying the constraints. The decoding algorithm uses linear programming, which is time consuming. Therefore we propose a graph-based method to detect when no placement satisfies both the given symmetry constraints and the O-tree constraints, and apply it before invoking linear programming. The effectiveness of the proposed method was shown by computational experiments.

  9. Gene regulatory network clustering for graph layout based on microarray gene expression data.

    PubMed

    Kojima, Kaname; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2010-01-01

    We propose a statistical model realizing simultaneous estimation of a gene regulatory network and identification of gene modules from time series gene expression data obtained from microarray experiments. Under the assumption that genes in the same module are densely connected, the proposed method detects gene modules based on the variational Bayesian technique. The model can also incorporate existing biological prior knowledge such as protein subcellular localization. We apply the proposed model to time series data from a synthetically generated network and verify its effectiveness. The proposed model is also applied to time series microarray data from HeLa cells. The detected gene module information greatly assists in drawing the estimated gene network.

  10. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, to maximize sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the layers involved exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can represent air, perforated panel, or solid panel layers. The problems related to the interpolation are addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.

  11. ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology

    NASA Astrophysics Data System (ADS)

    Han, Kwangsoo; Kahng, Andrew B.; Lee, Hyein; Wang, Lutong

    2015-10-01

    Self-aligned multiple patterning (SAMP), due to its low overlay error, has emerged as the leading option for 1D gridded back-end-of-line (BEOL) in sub-14nm nodes. To form actual routing patterns from a uniform "sea of wires", a cut mask is needed for line-end cutting or realization of space between routing segments. Constraints on cut shapes and minimum cut spacing result in end-of-line (EOL) extensions and non-functional (i.e. dummy fill) patterns; the resulting capacitance and timing changes must be consistent with signoff performance analyses and their impacts should be minimized. In this work, we address the co-optimization of cut mask layout, dummy fill, and design timing for sub-14nm BEOL design. Our central contribution is an optimizer based on integer linear programming (ILP) to minimize the timing impact due to EOL extensions, considering (i) minimum cut spacing arising in sub-14nm nodes; (ii) cut assignment to different cut masks (color assignment); and (iii) the eligibility to merge two unit-size cuts into a bigger cut. We also propose a heuristic approach to remove dummy fills after the ILP-based optimization by extending the usage of cut masks. Our heuristic can improve critical path performance under minimum metal density and mask density constraints. In our experiments, we study the impact of number of cut masks, minimum cut spacing and metal density under various constraints. Our studies of optimized cut mask solutions in these varying contexts give new insight into the tradeoff of performance and cost that is afforded by cut mask patterning technology options.
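
    The core decision in the ILP, assigning each cut to one of the available cut masks subject to a minimum same-mask spacing, can be illustrated with a deliberately tiny brute-force stand-in (the real formulation also handles EOL extensions, cut merging and timing, all omitted here); the cut positions and spacing value below are hypothetical.

```python
from itertools import product

def assign_cut_masks(cut_x, min_same_mask_spacing, n_masks=2):
    """Brute-force stand-in for the ILP color assignment: assign each cut to a mask
    so that cuts on the same mask respect the minimum spacing; minimize violations."""
    best = None
    for colors in product(range(n_masks), repeat=len(cut_x)):
        violations = sum(
            1
            for i in range(len(cut_x))
            for j in range(i + 1, len(cut_x))
            if colors[i] == colors[j] and abs(cut_x[i] - cut_x[j]) < min_same_mask_spacing
        )
        if best is None or violations < best[0]:
            best = (violations, colors)
    return best

# Hypothetical 1-D cut positions (nm) on one routing track and an 80 nm same-mask spacing rule.
print(assign_cut_masks([0, 50, 120, 160, 300], min_same_mask_spacing=80))
```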

  12. Suggestions for Layout and Functional Behavior of Software-Based Voice Switch Keysets

    NASA Technical Reports Server (NTRS)

    Scott, David W.

    2010-01-01

    Marshall Space Flight Center (MSFC) provides communication services for a number of real time environments, including Space Shuttle Propulsion support and International Space Station (ISS) payload operations. In such settings, control team members speak with each other via multiple voice circuits or loops. Each loop has a particular purpose and constituency, and users are assigned listen and/or talk capabilities for a given loop based on their role in fulfilling the purpose. A voice switch is a given facility's hardware and software that supports such communication, and may be interconnected with other facilities' switches to create a large network that, from an end user perspective, acts like a single system. Since users typically monitor and/or respond to several voice loops concurrently for hours on end, and real time operations can be very dynamic and intense, it's vital that a control panel or keyset for interfacing with the voice switch be a servant that reduces stress, not a master that adds it. Implementing the visual interface on a computer screen provides tremendous flexibility and configurability, but there's a very real risk of overcomplication. (Remember how office automation made life easier, which led to a deluge of documents that made life harder?) This paper a) discusses some basic human factors considerations related to keysets implemented as application software windows, b) suggests what to standardize at the facility level and what to leave to the user's preference, and c) provides screen shot mockups for a robust but reasonably simple user experience. The concepts apply to keyset needs in almost any type of operations control or support center.

  13. Trends in Newspaper Layout and Design.

    ERIC Educational Resources Information Center

    Reiley, Kenneth C.; Erb, Lyle L.

    With the increasing competition from television in recent years, the newspaper industry has finally realized that it doesn't have the news field as its sole domain. The competition, especially from colored television, and the fast pace of contemporary society have influenced the layout and printing format of the national newspapers in several…

  14. Layout pattern analysis using the Voronoi diagram of line segments

    NASA Astrophysics Data System (ADS)

    Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia

    2016-01-01

    Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.

  15. Study on Soil Special Variability and Crops Optimal Layout Based on Gis in The West-Northern Plateau of Hebei Province

    NASA Astrophysics Data System (ADS)

    Feng, Lixiao; Du, Xiong; Zhang, Jizong; Li, Cundong; Zhang, Lifeng

    A geo-statistics method combined with GIS was applied to study the soil spatial variation characteristics of a typical basin on the Northwestern Plateau of Hebei Province. The results indicate that a rational crop layout is determined by the soil texture. Crop layout optimization schemes were then put forward for different production objectives on the GIS platform, based on the spatial co-adaptation of soils and crops. The results showed that basin altitude was the primary cause of the spatial variation in soil texture and nutrient content. The crop layout optimization schemes showed that economic efficiency was increased by 26.9%~48.5% in large-scale crop production, while it was increased by only 8.0%~8.5% with the ecological construction of de-farming. It was concluded that progress in agricultural production should rely mostly on the renovation of the farming organization system, and that economies of scale should be the primary impetus for agricultural development in the Northwestern Plateau of Hebei Province.

  16. Feasibility study, software design, layout and simulation of a two-dimensional Fast Fourier Transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could do in a pipeline fashion the computation of the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation the new architecture to compute the FFT with systolic arrays was proved to be viable, and computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to compute the operations expected of the vital node of the systolic architecture were proven feasible, and even with a 2 micron VLSI technology can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).

  17. Hi-trees and their layout.

    PubMed

    Marriott, Kim; Sbarski, Peter; van Gelder, Tim; Prager, Daniel; Bulka, Andy

    2011-03-01

    We introduce hi-trees, a new visual representation for hierarchical data in which, depending on the kind of parent node, the child relationship is represented using either containment or links. We give a drawing convention for hi-trees based on the standard layered drawing convention for rooted trees, then show how to extend standard bottom-up tree layout algorithms to draw hi-trees in this convention. We also explore a number of other more compact layout styles for layout of larger hi-trees and give algorithms for computing these. Finally, we describe two applications of hi-trees: argument mapping and business decision support.
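
    A minimal sketch of the standard bottom-up layered tree layout that the drawing convention above builds on (not the hi-tree algorithm itself) is given below: leaves receive consecutive horizontal slots and each parent is centred over its children. The `children` dictionary format is an assumption made for illustration.

```python
def layout_tree(children, root):
    """Minimal bottom-up layered tree layout: leaves get consecutive x slots,
    each internal node is centred over its children, and y equals the depth.
    `children` maps a node id to a list of child ids (hypothetical input format)."""
    pos = {}
    next_leaf_x = [0]

    def place(node, depth):
        kids = children.get(node, [])
        if not kids:
            x = next_leaf_x[0]
            next_leaf_x[0] += 1
        else:
            xs = [place(k, depth + 1) for k in kids]
            x = sum(xs) / len(xs)
        pos[node] = (x, depth)
        return x

    place(root, 0)
    return pos

# Toy hierarchy: a root with two subtrees.
children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
print(layout_tree(children, "root"))
```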

  18. Research study: Device technology STAR router user's guide. [automated layout of large scale integration discretionary interconnection masks

    NASA Technical Reports Server (NTRS)

    Wright, R. A.

    1979-01-01

    The STAR Router program developed to perform automated layout of LSI discretionary interconnection masks is described. The input and output for the router are standard PR2D data files. A state-of-the-art cellular path-finding procedure, based on Lee's algorithm, which produces fast, shortest distance routing of microcircuit net data is included.
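
    As a sketch of the Lee-style wave-expansion idea underlying the router (not the STAR Router code itself), the following performs a breadth-first expansion over a grid with obstacles and backtraces a shortest rectilinear path; the grid encoding and example obstacle pattern are hypothetical.

```python
from collections import deque

def lee_route(grid, src, dst):
    """Lee-style maze routing: BFS wave expansion from src, then backtrace a
    shortest rectilinear path to dst. grid[r][c] == 1 marks a blocked cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0}
    parent = {}
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                parent[(nr, nc)] = (r, c)
                q.append((nr, nc))
    if dst not in dist:
        return None                      # no route exists
    path, node = [dst], dst
    while node != src:
        node = parent[node]
        path.append(node)
    return path[::-1]

# 4x5 grid with an obstacle wall partially blocking the route.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 2)))
```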

  19. Application of layout DOE in RET flow

    NASA Astrophysics Data System (ADS)

    Zhang, Yunqiang; van Adrichem, Paul; Li, Ji; Yang, Amy; Lucas, Kevin

    2008-03-01

    At low-k1 lithography with strong off-axis illumination, it is very hard to achieve edge-placement tolerances and 2-D image fidelity requirements for some layout configurations. Quite often these layouts fall within the simple design rule constraints for a given technology node. Evidently it is important to have these layouts included during early RET flow development. Simple shrinkage from the previous technology node is quite common, although often not sufficient. For logic designs, it is hard to control design styles. Moreover, for engineers in fabless design groups, it is difficult to assess the manufacturability of their layouts because of a lack of understanding of the litho process. Assist features (AF) are frequently placed according to pre-determined rules to improve the lithography process window. These rules are usually derived from lithographic models. Direct validation of AF rules is required at the development phase. To ensure good printability through the process window, process-aware optical proximity correction (OPC) recipes were developed. Generally, rules-based correction is performed before model-based correction. Furthermore, there are also many other options and parameters in OPC recipes for an advanced technology, making it difficult to holistically optimize recipe performance with all these variables in mind. In this paper we demonstrate the application of layout DOE in RET flow development. Layout pattern libraries are generated using the Synopsys Test Pattern Generator (STPG), which is embedded in a layout tool (ICWB). Assessment gauges are generated together with the patterns for quick correction accuracy assessment. OPC verification through the full process is also deployed. Several groups of test pattern libraries for different applications are developed, ranging from simple 1-D patterns for process capability study and the setting of process-aware parameters to a full set of patterns for the assessment of rules-based correction, line end and corner interaction

  20. Dynamic response of a carbon nanotube-based rotary nano device with different carbon-hydrogen bonding layout

    NASA Astrophysics Data System (ADS)

    Yin, Hang; Cai, Kun; Wan, Jing; Gao, Zhaoliang; Chen, Zhen

    2016-03-01

    In a nano rotational transmission system (RTS) consisting of a single-walled carbon nanotube (SWCNT) as the motor and a coaxially arranged double-walled carbon nanotube (DWCNT) as a bearing, the interaction between the motor and the rotor in the bearing, which has great effects on the response of the RTS, is determined by their adjacent edges. Using molecular dynamics (MD) simulation, the interaction is analyzed when the adjacent edges have different carbon-hydrogen (C-H) bonding layouts. In the computational models, the rotor in the bearing and the motor with a specific input rotational speed are made from the same armchair SWCNT. Simulation results demonstrate that a perfect rotational transmission can occur when the motor and rotor have the same C-H bonding layout on their adjacent ends. If only half or fewer of the carbon atoms on the adjacent ends are bonded with hydrogen atoms, the strong attraction between the lower-speed (100 GHz) motor and the rotor leads to a synchronous rotational transmission. If only the motor or only the rotor has C-H bonds on its adjacent end, no rotational transmission occurs, due to the weak interaction between the bonded hydrogen atoms on one end and the sp1-bonded carbon atoms on the other end.

  1. A GIS-based approach: Influence of the ventilation layout to the environmental conditions in an underground mine.

    PubMed

    Bascompta, Marc; Castañón, Ana María; Sanmiquel, Lluís; Oliva, Josep

    2016-11-01

    Gases such as CO, CO2 or NOx are constantly generated by the equipment in any underground mine, and the ventilation layout can play an important role in keeping concentrations low at the working faces. Hence, a method able to control the workplace environment is crucial. This paper proposes a geographical information system (GIS) for this goal. The system provides the necessary tools to manage and analyse an underground environment, connecting pollutants and temperatures with the ventilation characteristics over time. In a case study, data concerning the ventilation system have been collected every month since 2009 and integrated into the management system, which has quantified the gas concentrations throughout the mine as a function of the characteristics and evolution of the ventilation layout. Three different zones with respect to CO, CO2, NOx and effective temperature have been found, as well as some variations among workplaces within the same zone that suggest local airflow recirculation. The proposed system could be a useful tool for improving workplace conditions and efficiency levels.

  2. Site Recommendation Subsurface Layout

    SciTech Connect

    C.L. Linden

    2000-06-28

    The purpose of this analysis is to develop a Subsurface Facility layout that is capable of accommodating the statutory capacity of 70,000 metric tons of uranium (MTU), as well as an option to expand the inventory capacity, if authorized, to 97,000 MTU. The layout configuration also requires a degree of flexibility to accommodate potential changes in site conditions or program requirements. The objective of this analysis is to provide a conceptual design of the Subsurface Facility sufficient to support the development of the Subsurface Facility System Description Document (CRWMS M&O 2000e) and the ''Emplacement Drift System Description Document'' (CRWMS M&O 2000i). As well, this analysis provides input to the Site Recommendation Consideration Report. The scope of this analysis includes: (1) Evaluation of the existing facilities and their integration into the Subsurface Facility design. (2) Identification and incorporation of factors influencing Subsurface Facility design, such as geological constraints, thermal loading, constructibility, subsurface ventilation, drainage control, radiological considerations, and the Test and Evaluation Facilities. (3) Development of a layout showing an available area in the primary area sufficient to support both the waste inventories and individual layouts showing the emplacement area required for 70,000 MTU and, if authorized, 97,000 MTU.

  3. Automatic layout and visualization of biclusters

    PubMed Central

    Grothaus, Gregory A; Mufti, Adeel; Murali, TM

    2006-01-01

    Background Biclustering has emerged as a powerful algorithmic tool for analyzing measurements of gene expression. A number of different methods have emerged for computing biclusters in gene expression data. Many of these algorithms may output a very large number of biclusters with varying degrees of overlap. There are no systematic methods that create a two-dimensional layout of the computed biclusters and display overlaps between them. Results We develop a novel algorithm for laying out biclusters in a two-dimensional matrix whose rows (respectively, columns) are rows (respectively, columns) of the original dataset. We display each bicluster as a contiguous submatrix in the layout. We allow the layout to have repeated rows and/or columns from the original matrix as required, but we seek a layout of the smallest size. We also develop a web-based search interface for the user to query the genes and samples of interest and visualise the layout of biclusters matching the queries. Conclusion We demonstrate the usefulness of our approach on gene expression data for two types of leukaemia and on protein-DNA binding data for two growth conditions in Saccharomyces cerevisiae. The software implementing the layout algorithm is available at . PMID:16952321

  4. Routing System for Building Block Layout

    NASA Astrophysics Data System (ADS)

    Chen, Nang-Ping

    With the advent of VLSI, layout techniques have become more and more crucial to IC design. An automatic building block layout system is a useful tool to deal with the increasing complexity of the custom chip layout problem. An approach to the routing part of this layout system is proposed. This routing system can handle arbitrarily shaped rectilinear blocks with pins on the boundary. A feature of this system is its ability to shift blocks at any moment so that better placement, and hence better routing, can be achieved. The system minimizes layout area and assures 100% routing completion. A relative placement is the input to this routing system. The prerouting analysis calculates the expected routing density around each block, and the routing space is allocated accordingly. The "bottleneck" idea is introduced to represent the critical regions of the layout plane where congestion of routing is most likely to occur. It also serves as a link between blocks whereby all information is easily updated when some blocks have to move their positions. A weighted "global routing graph" is constructed to reflect the current routing situation associated with the bottlenecks. The global routing of each signal is done by a "Steiner-Tree-on-Graphs" (STOG) algorithm. The basic element of STOG is a three-point Steiner-Tree-on-Graphs algorithm. Some theoretical results are derived and an efficient algorithm is developed based on them. STOG has reasonable computational complexity and yields very good results in experimental tests. In the detailed routing phase, an existing channel router and a switch-box router are called for track assignment. Special emphasis has been put on terminal position alignment between two neighboring channels to avoid unnecessary jogs. The power and ground nets are allowed different wire widths and are routed on the metal layer. Several examples have been tested with this routing system. It has achieved very compact layouts in short running times.
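
    The three-point Steiner-tree-on-graphs element mentioned above can be illustrated for an unweighted graph: any Steiner tree connecting three terminals consists of three shortest paths meeting at some vertex, so it suffices to search for the best meeting vertex. The sketch below, with a made-up adjacency list, is illustrative only and not the author's STOG implementation.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def three_point_steiner(adj, t1, t2, t3):
    """Best 3-terminal Steiner tree in an unweighted graph: pick the meeting
    vertex m minimizing dist(t1,m) + dist(t2,m) + dist(t3,m)."""
    d1, d2, d3 = bfs_dist(adj, t1), bfs_dist(adj, t2), bfs_dist(adj, t3)
    best = min(adj, key=lambda m: d1.get(m, 1e9) + d2.get(m, 1e9) + d3.get(m, 1e9))
    return best, d1[best] + d2[best] + d3[best]

# Toy "global routing graph": returns the meeting vertex and total tree length.
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1, 5], 5: [4]}
print(three_point_steiner(adj, 0, 3, 5))
```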

  5. Terrace Layout Using a Computer Assisted System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  6. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  7. Automatic page layout using genetic algorithms for electronic albuming

    NASA Astrophysics Data System (ADS)

    Geigel, Joe; Loui, Alexander C. P.

    2000-12-01

    In this paper, we describe a flexible system for automatic page layout that makes use of genetic algorithms for albuming applications. The system is divided into two modules, a page creator module which is responsible for distributing images amongst various album pages, and an image placement module which positions images on individual pages. Final page layouts are specified in a textual form using XML for printing or viewing over the Internet. The system makes use of genetic algorithms, a class of search and optimization algorithms that are based on the concepts of biological evolution, for generating solutions with fitness based on graphic design preferences supplied by the user. The genetic page layout algorithm has been incorporated into a web-based prototype system for interactive page layout over the Internet. The prototype system is built using a client-server architecture and is implemented in Java. The system described in this paper has demonstrated the feasibility of using genetic algorithms for automated page layout in albuming and web-based imaging applications. We believe that the system adequately proves the validity of the concept, providing creative layouts in a reasonable number of iterations. By optimizing the layout parameters of the fitness function, we hope to further improve the quality of the final layout in terms of user preference and computation speed.
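
    A toy genetic algorithm in the same spirit (not the described system) is sketched below: a candidate layout is a list of image positions on a page, fitness penalises pairwise overlap, and the population evolves by selection, crossover and mutation. The page size, image sizes and fitness function are illustrative assumptions standing in for the user-supplied graphic design preferences.

```python
import random

PAGE_W, PAGE_H = 100, 140
SIZES = [(40, 30), (30, 40), (50, 20)]   # hypothetical image sizes

def overlap(a, b, sa, sb):
    """Overlap area between two axis-aligned rectangles placed at a and b."""
    ox = max(0, min(a[0] + sa[0], b[0] + sb[0]) - max(a[0], b[0]))
    oy = max(0, min(a[1] + sa[1], b[1] + sb[1]) - max(a[1], b[1]))
    return ox * oy

def fitness(layout):
    """Lower is better: total pairwise overlap area between placed images."""
    return sum(overlap(layout[i], layout[j], SIZES[i], SIZES[j])
               for i in range(len(layout)) for j in range(i + 1, len(layout)))

def random_layout():
    return [(random.uniform(0, PAGE_W - w), random.uniform(0, PAGE_H - h)) for w, h in SIZES]

def mutate(layout, sigma=5.0):
    """Nudge one image, keeping it inside the page."""
    i = random.randrange(len(layout))
    w, h = SIZES[i]
    x = min(max(layout[i][0] + random.gauss(0, sigma), 0), PAGE_W - w)
    y = min(max(layout[i][1] + random.gauss(0, sigma), 0), PAGE_H - h)
    new = list(layout)
    new[i] = (x, y)
    return new

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(0)
pop = [random_layout() for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]                                       # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
print(min(map(fitness, pop)))                                # best remaining overlap area
```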

  8. Extensible layout in functional documents

    NASA Astrophysics Data System (ADS)

    Lumley, John; Gimson, Roger; Rees, Owen

    2006-02-01

    Highly customised variable-data documents make automatic layout of the resulting publication hard. Architectures for defining and processing such documents can benefit if the repertoire of layout methods available can be extended smoothly and easily to accommodate new styles of customisation. The Document Description Framework incorporates a model for declarative document layout and processing where documents are treated as functional programs. A canonical XML tree contains nodes describing layout instructions which will modify and combine their children component parts to build sections of the final presentation. Leaf components such as images, vector graphic fragments and text blocks are 'rendered' to make consistent graphical atoms. These parts are then processed by layout agents, described and parameterised by their parent nodes, which can range from simple layouts like translations, flows, encapsulations and tables through to highly complex arrangements such as constraint-solution or pagination. The result then becomes a 'molecule' for processing at a higher level of the layout tree. A variable and reference mechanism is included for resolving rendering interdependency and supporting component reuse. Addition of new layout types involves definition of a new combinator node and attachment of a suitable agent.

  9. Interactive layout mechanisms for image database retrieval

    SciTech Connect

    MacCuish, J.; McPherson, A.; Barros, J.; Kelly, P.

    1996-01-29

    In this paper we present a user interface, CANDID Camera, for image retrieval using query-by-example technology. Included in the interface are several new layout algorithms based on multidimensional scaling techniques that visually display global and local relationships between images within a large image database. We use the CANDID project algorithms to create signatures of the images, and then measure the dissimilarity between the signatures. The layout algorithms are of two types. The first are those that project the all-pairs dissimilarities to two dimensions, presenting a many-to-many relationship for a global view of the entire database. The second are those that relate a query image to a small set of matched images for a one-to-many relationship that provides a local inspection of the image relationships. Both types are based on well-known multidimensional scaling techniques that have been modified and used together for efficiency and effectiveness. They include nonlinear projection and classical projection. The global maps are hybrid algorithms using classical projection together with nonlinear projection. We have developed several one-to-many layouts based on a radial layout, also using modified nonlinear and classical projection.
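
    A minimal sketch of classical (Torgerson) MDS, one of the projection techniques named above, is given below: the pairwise dissimilarity matrix is double-centred and the leading eigenvectors provide 2-D coordinates. The input dissimilarities here are computed from made-up points rather than CANDID image signatures.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed items with pairwise dissimilarities D
    into `dim` dimensions, preserving squared distances as well as possible."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]             # keep the largest ones
    L = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * L                        # coordinates, one row per item

# Hypothetical stand-in for image-signature dissimilarities: distances between 4 points.
pts = np.array([[0, 0], [1, 0], [0, 1], [5, 5]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(classical_mds(D).round(2))
```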

  10. Game level layout generation using evolved cellular automata

    NASA Astrophysics Data System (ADS)

    Pech, Andrew; Masek, Martin; Lam, Chiou-Peng; Hingston, Philip

    2016-01-01

    Design of level layouts typically involves the production of a set of levels which are different, yet display a consistent style based on the purpose of a particular level. In this paper, a new approach to the generation of unique level layouts, based on a target set of attributes, is presented. These attributes, which are learned automatically from an example layout, are used for the off-line evolution of a set of cellular automata rules. These rules can then be used for the real-time generation of level layouts that meet the target parameters. The approach is demonstrated on a set of maze-like level layouts. Results are presented to show the effect of various CA parameters and rule representation.
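
    As a fixed-rule illustration of the cellular-automaton family whose rules the paper evolves, the sketch below applies the classic cave-generation neighbour-count rule to a random grid; the birth/survival thresholds and grid size are arbitrary choices, not the evolved rules from the paper.

```python
import numpy as np

def ca_step(grid, birth=5, survive=4):
    """One cellular-automaton step with the classic cave rule: a cell becomes a wall
    if it has >= `birth` wall neighbours, and stays a wall if it has >= `survive`.
    Neighbour counts wrap around the grid edges (toroidal) for simplicity."""
    n = np.zeros_like(grid)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                n += np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
    return np.where(grid == 1, (n >= survive).astype(int), (n >= birth).astype(int))

rng = np.random.default_rng(7)
level = (rng.random((20, 40)) < 0.45).astype(int)   # random initial wall density
for _ in range(5):
    level = ca_step(level)
print("\n".join("".join("#" if c else "." for c in row) for row in level))
```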

  11. 2D design rule and layout analysis using novel large-area first-principles-based simulation flow incorporating lithographic and stress effects

    NASA Astrophysics Data System (ADS)

    Prins, Steven L.; Blatchford, James; Olubuyide, Oluwamuyiwa; Riley, Deborah; Chang, Simon; Hong, Qi-Zhong; Kim, T. S.; Borges, Ricardo; Lin, Li

    2009-03-01

    As design rules and corresponding logic standard cell layouts continue to shrink node-on-node in accordance with Moore's law, complex 2D interactions, both intra-cell and between cells, become much more prominent. For example, in lithography, the lack of scaling of λ/NA implies aggressive use of resolution enhancement techniques to meet logic scaling requirements, resulting in adverse effects such as 'forbidden pitches', and also implies an increasing range of optical influence relative to cell size. These adverse effects are therefore expected to extend well beyond the cell boundary, leading to lithographic marginalities that occur only when a given cell is placed "in context" with other neighboring cells in a variable design environment [1]. This context dependence is greatly exacerbated by the increased use of strain engineering techniques such as SiGe and dual-stress liners (DSL) to enhance transistor performance, both of which also have interaction lengths on the order of microns. The use of these techniques also breaks the formerly straightforward connection between lithographic 'shapes' and end-of-line electrical performance, thus making the formulation of design rules that are robust to process variations and complex 2D interactions more difficult. To address these issues, we have developed a first-principles-based simulation flow to study context-dependent electrical effects in layout, arising not only from lithography, but also from stress and interconnect parasitic effects. This flow is novel in that it can be applied to relatively large layout clips, as required for context-dependent analysis, without relying on semi-empirical or 'black-box' models for the fundamental electrical effects. The first-principles-based approach is ideal for understanding context-dependent effects early in the design phase, so that they can be mitigated through restrictive design rules. The lithographic simulations have been discussed elsewhere [1] and will not be presented in detail. The

  12. Scintillator-based fast ion loss measurements in the EAST

    NASA Astrophysics Data System (ADS)

    Chang, J. F.; Isobe, M.; Ogawa, K.; Huang, J.; Wu, C. R.; Xu, Z.; Jin, Z.; Lin, S. Y.; Hu, L. Q.

    2016-11-01

    A new scintillator-based fast ion loss detector (FILD) has been installed on the Experimental Advanced Superconducting Tokamak (EAST) to investigate fast ion loss behavior in high-performance plasmas with neutral beam injection (NBI) and ion cyclotron resonance heating (ICRH). A two-dimensional 40 mm × 40 mm scintillator-coated (ZnS:Ag) stainless steel plate is mounted at the front of the detector, capturing the escaping fast ions. Photons from the scintillator plate are imaged with a Phantom V2010 CCD camera. The lost fast ions can be measured over a pitch angle range from 60° to 120° and a gyroradius range from 10 mm to 180 mm. This paper describes the details of the FILD diagnostic on EAST and presents preliminary measurements during NBI and ICRH heating.

  13. Spacecraft Component Adaptive Layout Environment (SCALE): An efficient optimization tool

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Ghoreishi, Seyed Mohammad Navid; Sabaghzadeh, Hossein

    2016-11-01

    For finding the optimum layout of spacecraft subsystems, important factors such as the center of gravity, moments of inertia, thermal distribution, natural frequencies, etc. should be taken into account. This large number of effective parameters makes the optimal layout process of spacecraft subsystems complex and time consuming. In this paper, an automatic tool, based on multi-objective optimization methods, is proposed for the three-dimensional layout of spacecraft subsystems. In this regard, an efficient Spacecraft Component Adaptive Layout Environment (SCALE) is produced by integrating modeling, FEM, and optimization software. SCALE automatically provides optimal solutions for the three-dimensional layout of spacecraft subsystems while considering important constraints such as center of gravity, moment of inertia, thermal distribution, natural frequencies and structural strength. In order to show the superiority and efficiency of SCALE, layouts of a telecommunication spacecraft and a remote sensing spacecraft are performed. The results show that the objective function values for the layouts obtained using SCALE are much better than those of the traditional approach, i.e., the Reference Baseline Solution (RBS) proposed by the engineering system team. This indicates the good performance and ability of SCALE in finding the optimal layout of spacecraft subsystems.

  14. GPU-based fast gamma index calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jia, Xun; Jiang, Steve B.

    2011-03-01

    The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of the γ-index requires an exhaustive search for the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving a 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, the γ-index calculation time on CPU is proportional to the summation of γn over all voxels, while that on GPU is affected by the γn distribution and is approximately proportional to the γn summation over all voxels. We found that increasing the resolution of the dose distributions leads to a quadratic increase in computation time on CPU, but a less-than-quadratic increase on GPU. The values of the dose difference and distance-to-agreement criteria also have an impact on the γ-index calculation time.
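
    A brute-force 1-D version of the γ-index search (without the geometric speedup, radial pre-sorting or GPU implementation described above) can be sketched as follows. The dose profiles, the 3 mm distance-to-agreement and the 3% dose-difference criteria are illustrative assumptions, and the convention of searching the evaluated distribution for each reference point is one of several in use.

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dta=3.0, dd=0.03):
    """Brute-force 1-D gamma index: for each reference point, search all evaluated
    points for the minimum generalized distance in dose-distance space.
    dta is in the same units as x (e.g. mm); dd is a fraction of the maximum reference dose."""
    dd_abs = dd * dose_ref.max()
    dx = (x[:, None] - x[None, :]) / dta                    # spatial term
    dD = (dose_ref[:, None] - dose_eval[None, :]) / dd_abs  # dose-difference term
    return np.sqrt(dx ** 2 + dD ** 2).min(axis=1)

x = np.linspace(0, 100, 201)                  # positions in mm
dose_ref = np.exp(-((x - 50) / 15) ** 2)      # hypothetical reference profile
dose_eval = np.exp(-((x - 51) / 15) ** 2)     # slightly shifted evaluated profile
gamma = gamma_index_1d(x, dose_ref, dose_eval)
print(f"pass rate (gamma <= 1): {np.mean(gamma <= 1):.3f}")
```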

  15. User Preferences for Web-Based Module Design Layout and Design Impact on Information Recall Considering Age

    ERIC Educational Resources Information Center

    Pomales-García, Cristina; Rivera-Nivar, Mericia

    2015-01-01

    Research in design of Web-based modules should incorporate aging as an important factor given the diversity of the current workforce. This work aims to understand how Web-Based Learning modules can be designed to accommodate young (25-35 years) as well as older (55-65 years) users by: (1) identifying how information sources (instructor video,…

  16. Location selection and layout for LB10, a lunar base at the Lunar North Pole with a liquid mirror observatory

    NASA Astrophysics Data System (ADS)

    Detsis, Emmanouil; Doule, Ondrej; Ebrahimi, Aliakbar

    2013-04-01

    We present the site selection process and urban planning of a Lunar Base for a crew of 10 (LB10), with an infrared astronomical telescope, based on the concept of the Lunar Liquid Mirror Telescope. LB10 is a base designated for permanent human presence on the Moon. The base architecture is based on the utilization of inflatable, rigid and regolith structures for different purposes. The location for the settlement is identified through a detailed analysis of surface conditions and terrain parameters around the Lunar North and South Poles. A number of selection criteria were defined regarding construction, astronomical observations, landing and illumination conditions. The location suggested for the settlement is in the vicinity of the North Pole, utilizing the geographical morphology of the area. The base habitat is on a highly illuminated and relatively flat plateau. The observatory is located in the vicinity of the base, approximately 3.5 kilometers from the Lunar North Pole, inside a crater to shield it from sunlight. An illustration of the final form of the habitat, inspired by the baroque architectural form, is also presented.

  17. Fast Electromechanical Switches Based on Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Kaul, Anupama; Wong, Eric; Epp, Larry

    2008-01-01

    Electrostatically actuated nanoelectromechanical switches based on carbon nanotubes have been fabricated and tested in a continuing effort to develop high-speed switches for a variety of stationary and portable electronic equipment. As explained below, these devices offer advantages over electrostatically actuated microelectromechanical switches, which, heretofore, have represented the state of the art in rapid, highly miniaturized electromechanical switches. Potential applications for these devices include computer memories, cellular telephones, communication networks, scientific instrumentation, and general radiation-hard electronic equipment. A representative device of the present type includes a single-wall carbon nanotube suspended over a trench about 130 nm wide and 20 nm deep in an electrically insulating material. The ends of the carbon nanotube are connected to metal electrodes, denoted the source and drain electrodes. At the bottom of the trench is another metal electrode, denoted the pull electrode (see figure). In the off or open switch state, no voltage is applied, and the nanotube remains out of contact with the pull electrode. When a sufficiently large electric potential (the switching potential) is applied between the pull electrode and either or both of the source and drain electrodes, the resulting electrostatic attraction bends and stretches the nanotube into contact with the pull electrode, thereby putting the switch into the "on" or "closed" state, in which substantial current (typically as much as hundreds of nanoamperes) is conducted. Devices of this type for use in initial experiments were fabricated on a thermally oxidized Si wafer, onto which Nb was sputter-deposited for use as the pull-electrode layer. Nb was chosen because its refractory nature would enable it to withstand the chemical and thermal conditions subsequently imposed for growing carbon nanotubes. A 200-nm-thick layer of SiO2 was formed on top of the Nb layer by plasma

  18. Facility Layout Problems Using Bays: A Survey

    NASA Astrophysics Data System (ADS)

    Davoudpour, Hamid; Jaafari, Amir Ardestani; Farahani, Leila Najafabadi

    2010-06-01

    Layout design is one of the most important activities performed by industrial engineers. Most of these problems are NP-hard. In a basic layout design, each cell is represented by a rectilinear, but not necessarily convex, polygon. The set of fully packed adjacent polygons is known as a block layout (Asef-Vaziri and Laporte 2007). Block layouts are divided into slicing tree layouts and bay layouts. In a bay layout, departments are located in vertical columns or horizontal rows, called bays. Bay layouts are used in real-world settings, especially in contexts such as semiconductor fabrication and aisle design. There are several reviews of facility layout; however, none of them focuses on bay layout. The literature analysis given here is not limited to specific considerations about bay layout design. We present a state-of-the-art review of bay layout, considering issues such as the objectives used, the solution techniques, and the integration methods.

  19. Basic concepts underlying fast-neutron-based contraband interrogation technology

    SciTech Connect

    Fink, C.L.; Guenther, P.T.; Smith, D.L.

    1992-01-01

    All accelerator-based fast-neutron contraband interrogation systems have many closely interrelated subsystems, whose performance parameters will be critically interdependent. For optimal overall performance, a systems analysis design approach is required. This paper provides a general overview of the interrelationships and the tradeoffs to be considered for optimization of nonaccelerator subsystems.

  20. Fast simulation method for airframe analysis based on big data

    NASA Astrophysics Data System (ADS)

    Liu, Dongliang; Zhang, Lixin

    2016-10-01

    In this paper, we apply a big data method to structural analysis by considering the correlations between loads, between loads and results, and between results. By means of fundamental mathematics and physical rules, the principle, feasibility and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated by two examples. The results show that the fast simulation method based on big data is fast and precise when applied to structural analysis.

  1. 48 CFR 52.236-17 - Layout of Work.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 1984) The Contractor shall lay out its work from Government-established base lines and bench marks indicated on the drawings, and shall be responsible for all measurements in connection with the layout....

  2. Content-Based Image Retrieval Using Spatial Layout Information in Brain Tumor T1-Weighted Contrast-Enhanced MR Images

    PubMed Central

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases beyond 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images. PMID:25028970
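
    A bare-bones BoVW retrieval sketch (omitting the partition learning and the REML metric that are the paper's actual contributions) is shown below: local descriptors are quantized against a k-means vocabulary, images are represented by normalized word histograms, and retrieval ranks by plain Euclidean distance. The descriptors are random stand-ins, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(descriptors, kmeans):
    """Quantize local descriptors against the visual vocabulary and return a
    normalized word-frequency histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

rng = np.random.default_rng(0)
# Hypothetical local descriptors (e.g. intensity patches) for a small image collection.
all_desc = [rng.normal(loc=i % 3, size=(200, 16)) for i in range(9)]
kmeans = KMeans(n_clusters=32, n_init=4, random_state=0).fit(np.vstack(all_desc))
db = np.array([bovw_histogram(d, kmeans) for d in all_desc])

query = bovw_histogram(rng.normal(loc=1, size=(200, 16)), kmeans)
ranking = np.argsort(np.linalg.norm(db - query, axis=1))   # plain Euclidean, no learned metric
print("retrieved order:", ranking)
```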

  3. GEM-based detectors for thermal and fast neutrons

    NASA Astrophysics Data System (ADS)

    Croci, G.; Claps, G.; Cazzaniga, C.; Foggetta, L.; Muraro, A.; Valente, P.

    2015-06-01

    The problem of 3He replacement for neutron detection has lately stimulated intense research activity on alternative technologies based on different neutron converters. This paper briefly presents the results obtained with new GEM detectors optimized for fast and thermal neutrons. For thermal neutrons, we realized a side-on GEM detector based on a series of boron-coated alumina sheets placed perpendicular to the incident neutron beam direction. This prototype has been tested at the n@BTF photo-production neutron facility in order to assess its effectiveness under a very high-flux gamma background. For fast neutrons, we developed new GEM detectors (called nGEM) for the CNESM diagnostic system of the SPIDER NBI prototype for ITER (RFX-Consortium, Italy) and as beam monitors for fast neutron lines at spallation sources. The nGEM is a Triple GEM gaseous detector equipped with a polyethylene layer used to convert fast neutrons into recoil protons through the elastic scattering process. This paper describes the results obtained by testing a medium-size (30 × 25 cm2 active area) nGEM detector at the ISIS spallation source on the VESUVIO beam line.

  4. Channel routing for VLSI layout

    NASA Astrophysics Data System (ADS)

    Schory, Michael

    1988-12-01

    Channel routing for VLSI layout is reviewed and a set of features required of an industrial channel router is defined. A channel router, CAR, was implemented, based on the Greedy and Detour routers. Integrated circuit design is discussed, with attention to the various channel routing problems and models. The major requirements for an industrial channel router to be integrated within general cells and standard cells routing environments are discussed and their fulfillment in CAR is considered. CAR comprises: the Greedy router functionality; the Detour router's obstacle, obstruction and switch box extensions; rectilinear channels; ports located not on standard and immediately surrounding layers; middle ports within the channel; jog on conflict-only to reduce jog use; single layer jogs; and partial pre-routing and dynamic layer optimization. Special features of CAR include: extension of the net definition with a short range tendency; definition of net preferred track; net visibility range in rectilinear channels; an extended area mechanism to deal with obstacles, rectilinear edges, pre-routing and ports on unusual layers; unified jog cost evaluation functions; unified, efficient jog selection; a general evaluation function for track worth; and a net connectivity part to control and handle split nets. Examples are presented of CAR operations.

  5. Library API for Z-Order Memory Layout

    SciTech Connect

    Bethel, E. Wes

    2015-02-01

    This library provides a simple-to-use API for implementing an alternative to the traditional row-major in-memory layout, one based on a Morton-order space filling curve (SFC), specifically a Z-order variant of the Morton order curve. The library enables programmers, after a simple initialization step, to convert a multidimensional array from row-major to Z-order layout, and then to use a single, generic API call to access data at any arbitrary (i,j,k) location within the array, whether it is stored in row-major or Z-order format. The motivation for using an SFC in-memory layout is improved spatial locality, which results in increased use of local high-speed cache memory. The basic idea is that with row-major layouts, an access to a location that is nearby in index space is likely far away in physical memory, resulting in poor spatial locality and slow runtime. On the other hand, with an SFC-based layout, accesses that are nearby in index space are much more likely to also be nearby in physical memory, resulting in much better spatial locality and better runtime performance. Numerous studies over the years have shown that significant runtime performance gains are realized by using an SFC-based memory layout compared to a row-major layout, sometimes by as much as 50%, resulting from the better use of the memory and cache hierarchy attendant with an SFC-based layout (see, for example, [Beth2012]). This library implementation is intended for use with codes that work with structured, array-based data in 2 or 3 dimensions. It is not appropriate for use with unstructured or point-based data.
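
    The bit-interleaving idea behind a Z-order (Morton) layout can be sketched as follows; this is an illustration of the concept, not the library's actual API, and the power-of-two side-length restriction is a simplifying assumption.

```python
def morton2(i, j, bits=16):
    """Interleave the bits of (i, j) into a single Z-order index."""
    z = 0
    for b in range(bits):
        z |= ((i >> b) & 1) << (2 * b + 1) | ((j >> b) & 1) << (2 * b)
    return z

class ZOrderArray2D:
    """Square 2-D array (side must be a power of two) stored in Z-order."""
    def __init__(self, side, fill=0.0):
        self.side = side
        self.data = [fill] * (side * side)

    def __getitem__(self, ij):
        i, j = ij
        return self.data[morton2(i, j)]

    def __setitem__(self, ij, value):
        i, j = ij
        self.data[morton2(i, j)] = value

a = ZOrderArray2D(8)
a[3, 5] = 42.0
print(a[3, 5], morton2(3, 5))   # nearby (i, j) indices map to nearby storage offsets
```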

  6. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.

  7. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; moreover, because normalization is applied, it still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
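
    A minimal sketch of the projection idea (an illustration, not the paper's algorithm) is given below: each candidate window is reduced to its row-sum and column-sum profiles and scored by normalized one-dimensional correlation, which is insensitive to a proportional change in amplitude; the test image and template are synthetic.

```python
import numpy as np

def projections(img):
    """Collapse a 2-D patch into its row-sum and column-sum profiles."""
    return img.sum(axis=1), img.sum(axis=0)

def ncc(a, b):
    """Normalized correlation, invariant to a proportional change in amplitude."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_by_projection(image, template):
    """Slide the template over the image and score candidates using 1-D projections only."""
    th, tw = template.shape
    tp_r, tp_c = projections(template)
    best, best_score = None, -2.0
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            pr, pc = projections(image[r:r + th, c:c + tw])
            score = 0.5 * (ncc(pr, tp_r) + ncc(pc, tp_c))
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score

rng = np.random.default_rng(3)
image = rng.random((60, 80))
template = 2.0 * image[20:30, 40:55]       # brighter copy of a sub-region of the image
print(match_by_projection(image, template))
```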

  8. DFM viewpoints of cell-level layout assessments and indications for concurrent layout optimization

    NASA Astrophysics Data System (ADS)

    Fu, Chung-Min; Yeh, Ping-Heng; Cheng, Yi-Kan; Klaver, Simon

    2008-10-01

    Design-for-manufacturing (DFM) is becoming an actual design practice among IC manufacturers, designers and EDA companies. Layout assessment by design-rule-check (DRC) using EDA tools is a common practice today to ensure well-manufactured design geometries. Standalone DFM tools, which require iteration loops of DFM analysis and fixing, do not fit well in design flows and are considered cumbersome. A better layout assessment method for DFM issues is required: one that gives actionable feedback, and that can be used with automatic optimization in early design stages. The latter is needed to avoid costly design re-spins that consume critical time-to-market as well as substantial engineering resources, reticle and wafer material costs. For example, a DFM checking tool may report the hotspot types and locations, but this information is not sufficient for designers to decide between different fixing choices and to handle the trade-off between physical and electrical design constraints at the same time. When model-based properties such as the lithographic contour are introduced, the tradeoffs between rule-based and model-based properties can only be resolved by automatic and concurrent optimization. This work demonstrates a methodology of DFM scoring of layout based on preferred rules compliance, lithography GATE printability, as well as the layout fixing. The electrical impact on gates is analyzed and shows reduced variability (compared to nominal behavior) in gate performance. Designers can get visual feedback of the layout quality, as well as improvement suggestions. Takumi TKE software is used to demonstrate automatic and concurrent optimization. The method applies to both cell-level and custom designs.

  9. Electrical studies on silver based fast ion conducting glassy materials

    SciTech Connect

    Rao, B. Appa; Kumar, E. Ramesh; Kumari, K. Rajani; Bhikshamaiah, G.

    2014-04-24

    Among all the available fast ion conductors, silver based glasses exhibit high conductivity. Further, glasses containing silver iodide show enhanced fast ion conducting behavior at room temperature. Glasses of various compositions of silver based fast ion conductors in the AgI−Ag{sub 2}O−[(1−x)B{sub 2}O{sub 3}−xTeO{sub 2}] (x = 0 to 1 mol% in steps of 0.2) glassy system have been prepared by the melt quenching method. The glassy nature of the compounds has been confirmed by X-ray diffraction. AC electrical conductivity measurements have been carried out in the frequency range 1 kHz–3 MHz with an impedance analyzer over the temperature range 303–423 K. DC conductivity measurements were also carried out in the temperature range 300–523 K. From both AC and DC conductivity studies, it is found that the conductivity increases and the activation energy decreases with increasing TeO{sub 2} concentration as well as with temperature. The conductivity of the present glass system is found to be of the order of 10{sup −2} S/cm at room temperature. The ionic transport number of these glasses is found to be 0.999, indicating that they can be used as electrolytes in batteries.

  10. Probabilistic Graph Layout for Uncertain Network Visualization.

    PubMed

    Schulz, Christoph; Nocaj, Arlind; Goertler, Jochen; Deussen, Oliver; Brandes, Ulrik; Weiskopf, Daniel

    2017-01-01

    We present a novel uncertain network visualization technique based on node-link diagrams. Nodes expand spatially in our probabilistic graph layout, depending on the underlying probability distributions of edges. The visualization is created by computing a two-dimensional graph embedding that combines samples from the probabilistic graph. A Monte Carlo process is used to decompose a probabilistic graph into its possible instances and to continue with our graph layout technique. Splatting and edge bundling are used to visualize point clouds and network topology. The results provide insights into probability distributions for the entire network-not only for individual nodes and edges. We validate our approach using three data sets that represent a wide range of network types: synthetic data, protein-protein interactions from the STRING database, and travel times extracted from Google Maps. Our approach reveals general limitations of the force-directed layout and allows the user to recognize that some nodes of the graph are at a specific position just by chance.
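
    The Monte Carlo decomposition step can be pictured with the small sketch below (the data structure and sample count are illustrative; the embedding, splatting and bundling stages of the paper are not shown): each sample keeps an edge with its given probability, yielding one concrete graph instance.

        import random

        def sample_instances(edge_probs, n_samples, seed=0):
            """edge_probs: {(u, v): probability}; returns a list of concrete edge sets."""
            rng = random.Random(seed)
            return [{e for e, p in edge_probs.items() if rng.random() < p}
                    for _ in range(n_samples)]

        # Each returned edge set is one possible instance of the uncertain network
        # and can be laid out and accumulated into the final visualization.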

  11. Learning Layouts for Single-Page Graphic Designs.

    PubMed

    O'Donovan, Peter; Agarwala, Aseem; Hertzmann, Aaron

    2014-08-01

    This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.

  12. The Aurora Project: A new sail layout

    NASA Astrophysics Data System (ADS)

    Genta, Giancarlo; Brusa, Eugenio

    1999-05-01

    The Aurora spacecraft is a scientific probe propelled by a "fast" solar sail whose first goal is to perform a technology assessment mission. The main characteristic of the sail is its low mass, which implies the absence of a plastic backing of the aluminum film and the lightness of the whole structure. In previous structural studies the limiting factor has been shown to be the elastic stability of a number of structural members subject to compressive loads. An alternative structural layout is suggested here: an inflatable beam, which is kept pressurized after deployment as well, relieves all compressive stresses, allowing a very simple configuration and a straightforward deployment procedure. However, as the mission profile requires a trajectory passing close to the Sun, a configuration different from the 'parachute' sail proposed in another paper must be used.

  13. Cache-oblivious mesh layouts

    SciTech Connect

    Yoon, Sung-Eui; Lindstrom, Peter; Pascucci, Valerio; Manocha, Dinesh

    2005-07-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.

  14. Correlation-coefficient-based fast template matching through partial elimination.

    PubMed

    Mahmood, Arif; Khan, Sohaib

    2012-04-01

    Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., their accuracy is equivalent to exhaustive search, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.
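
    The general idea of partial computation elimination can be sketched as below; for brevity this uses the classic sum-of-squared-differences bound rather than the paper's monotonic correlation-coefficient formulation: the running distance only grows row by row, so a window is abandoned as soon as it exceeds the best distance found so far.

        import numpy as np

        def ssd_match_with_elimination(image, template):
            """Exhaustive-equivalent match with early termination per window.
            image and template should be float arrays to avoid integer overflow."""
            th, tw = template.shape
            best, best_ssd = None, np.inf
            for i in range(image.shape[0] - th + 1):
                for j in range(image.shape[1] - tw + 1):
                    ssd = 0.0
                    for r in range(th):
                        diff = image[i + r, j:j + tw] - template[r]
                        ssd += float(np.dot(diff, diff))
                        if ssd >= best_ssd:      # partial elimination: stop early
                            break
                    else:
                        best, best_ssd = (i, j), ssd
            return best, best_ssd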

  15. Uncluttering graph layouts using anisotropic diffusion and mass transport.

    PubMed

    Frishman, Yaniv; Tal, Ayellet

    2009-01-01

    Many graph layouts include very dense areas, making the layout difficult to understand. In this paper, we propose a technique for modifying an existing layout in order to reduce the clutter in dense areas. A physically inspired evolution process based on a modified heat equation is used to create an improved layout density image, making better use of available screen space. Using results from optimal mass transport problems, a warp to the improved density image is computed. The graph nodes are displaced according to the warp. The warp maintains the overall structure of the graph, thus limiting disturbances to the mental map, while reducing the clutter in dense areas of the layout. The complexity of the algorithm depends mainly on the resolution of the image visualizing the graph and is linear in the size of the graph. This allows scaling the computation according to required running times. It is demonstrated how the algorithm can be significantly accelerated using a graphics processing unit (GPU), resulting in the ability to handle large graphs in a matter of seconds. Results on several layout algorithms and applications are demonstrated.

  16. Fast Waves at the Base of the Cochlea

    PubMed Central

    Recio-Spinoso, Alberto; Rhode, William S.

    2015-01-01

    Georg von Békésy observed that the onset times of responses to brief-duration stimuli vary as a function of distance from the stapes, with basal regions starting to move earlier than apical ones. He noticed that the speed of signal propagation along the cochlea is slow when compared with the speed of sound in water. Fast traveling waves have been recorded in the cochlea, but their existence is interpreted as the result of an experimental artifact. Accounts of the timing of vibration onsets at the base of the cochlea generally agree with Békésy’s results. Some authors, however, have argued that the measured delays are too short for consistency with Békésy’s theory. To investigate the speed of the traveling wave at the base of the cochlea, we analyzed basilar membrane (BM) responses to clicks recorded at several locations in the base of the chinchilla cochlea. The initial component of the BM response matches remarkably well the initial component of the stapes response, after a 4-μs delay of the latter. A similar conclusion is reached by analyzing onset times of time-domain gain functions, which correspond to BM click responses normalized by middle-ear input. Our results suggest that BM responses to clicks arise from a combination of fast and slow traveling waves. PMID:26062000

  17. Fast Waves at the Base of the Cochlea.

    PubMed

    Recio-Spinoso, Alberto; Rhode, William S

    2015-01-01

    Georg von Békésy observed that the onset times of responses to brief-duration stimuli vary as a function of distance from the stapes, with basal regions starting to move earlier than apical ones. He noticed that the speed of signal propagation along the cochlea is slow when compared with the speed of sound in water. Fast traveling waves have been recorded in the cochlea, but their existence is interpreted as the result of an experimental artifact. Accounts of the timing of vibration onsets at the base of the cochlea generally agree with Békésy's results. Some authors, however, have argued that the measured delays are too short for consistency with Békésy's theory. To investigate the speed of the traveling wave at the base of the cochlea, we analyzed basilar membrane (BM) responses to clicks recorded at several locations in the base of the chinchilla cochlea. The initial component of the BM response matches remarkably well the initial component of the stapes response, after a 4-μs delay of the latter. A similar conclusion is reached by analyzing onset times of time-domain gain functions, which correspond to BM click responses normalized by middle-ear input. Our results suggest that BM responses to clicks arise from a combination of fast and slow traveling waves.

  18. Schematic driven layout of Reed Solomon encoders

    NASA Technical Reports Server (NTRS)

    Arave, Kari; Canaris, John; Miles, Lowell; Whitaker, Sterling

    1992-01-01

    Two Reed Solomon error correcting encoders are presented. Schematic driven layout tools were used to create the encoder layouts. Special consideration had to be given to the architecture and logic to provide scalability of the encoder designs. Knowledge gained from these projects was used to create a more flexible schematic driven layout system.

  19. Parameter tuning for the NFFT based fast Ewald summation

    NASA Astrophysics Data System (ADS)

    Nestler, Franziska

    2016-07-01

    The computation of the Coulomb potentials and forces in charged particle systems under 3d-periodic boundary conditions is possible in an efficient way by utilizing the Ewald summation formulas and applying the fast Fourier transform (FFT). In this paper we consider the particle-particle NFFT (P^2NFFT) approach, which is based on the fast Fourier transform for nonequispaced data (NFFT) and compare the error behaviors regarding different window functions, which are used in order to approximate the given continuous charge distribution by a mesh based charge density. Typically B-splines are applied in the scope of particle mesh methods, as for instance within the well known particle-particle particle-mesh (P^3M) algorithm. The publicly available P^2NFFT algorithm allows the application of an oversampled FFT as well as the usage of different window functions. We consider for the first time also an approximation by Bessel functions and show how the resulting root mean square errors in the forces can be predicted precisely and efficiently. The results show that, if the parameters are tuned appropriately, the Bessel window function is in many cases even the better choice in terms of computational costs. Moreover, the results indicate that it is often advantageous in terms of efficiency to spend some oversampling within the NFFT while using a window function with a smaller support.

  20. A fast quad-tree based two dimensional hierarchical clustering.

    PubMed

    Rajadurai, Priscilla; Sankaranarayanan, Swamynathan

    2012-01-01

    Recently, microarray technologies have become a robust technique in the area of genomics. An important step in the analysis of gene expression data is the identification of groups of genes disclosing analogous expression patterns. Cluster analysis partitions a given dataset into groups based on specified features. Euclidean distance is a widely used similarity measure for gene expression data that considers the amount of change in gene expression. However, the huge number of genes and the intricacy of biological networks have greatly increased the challenge of comprehending and interpreting the resulting groups of data, and increased processing time. The proposed technique focuses on a quad-tree (QT) based fast 2-dimensional hierarchical clustering algorithm to perform clustering. The construction of the closest pair data structure at each level is an important time factor, which determines the processing time of clustering. The proposed model reduces the processing time and improves the analysis of gene expression data.

  1. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of the phase diagram, Lyapunov exponent spectrum and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed, in which the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. Simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.
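
    A generic single-map diffusion step is sketched below to show the flavor of chaos-based encryption; it uses the plain logistic map with illustrative key values, not the paper's 2D-SIMM map or chaotic shift transform.

        import numpy as np

        def logistic_keystream(n, x0=0.37, r=3.99):
            """Byte keystream from the logistic map x <- r*x*(1-x); (x0, r) act as the key."""
            x, out = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) & 0xFF
            return out

        def diffuse(img, x0=0.37, r=3.99):
            """XOR every pixel of a uint8 image with the keystream; applying it twice decrypts."""
            ks = logistic_keystream(img.size, x0, r)
            return (img.reshape(-1) ^ ks).reshape(img.shape)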

  2. A PDE-Based Fast Local Level Set Method

    NASA Astrophysics Data System (ADS)

    Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo

    1999-11-01

    We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys.79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.

  3. A flexible fast 3D profilometry based on modulation measurement

    NASA Astrophysics Data System (ADS)

    Dou, Yunfu; Su, Xianyu; Chen, Yanfei; Wang, Ying

    2011-03-01

    This paper proposes a flexible fast profilometry based on modulation measurement. Two orthogonal gratings are projected vertically onto the object surface through a beam splitter, and the measured object is placed between the imaging planes of the two gratings. The image of the object surface modulated by the orthogonal gratings is then captured by a CCD camera in the same direction as the grating projection. This image is processed by a sequence of operations consisting of a Fourier transform, spatial-frequency filtering and an inverse Fourier transform. Using the modulation distributions of the two grating patterns, the 3D shape of the object can be reconstructed. In the measurement process only one fringe pattern needs to be captured, so the method is faster than MMP while retaining its advantages. The article presents the principle of the method, the setup of the measurement system, some simulations and preliminary experimental results. The simulation and experimental results show that the method can recover the 3D shape of complex objects quickly and with reasonable accuracy. Because only one fringe pattern is needed, the method has promising prospects for real-time acquisition and dynamic measurement of 3D data of complex objects.
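
    The Fourier-domain processing chain can be pictured with the sketch below (the carrier frequency and filter width are illustrative parameters, and only one of the two orthogonal gratings is handled): the carrier lobe of the fringe spectrum is isolated, transformed back, and its magnitude taken as the modulation map.

        import numpy as np

        def modulation_map(fringe, carrier, half_width):
            """fringe: 2D pattern whose fringes vary along axis 1 at `carrier` cycles/pixel."""
            spec = np.fft.fft(fringe, axis=1)              # row-wise spectrum
            freqs = np.fft.fftfreq(fringe.shape[1])
            keep = np.abs(freqs - carrier) < half_width    # keep only the +carrier lobe
            spec[:, ~keep] = 0.0
            return 2.0 * np.abs(np.fft.ifft(spec, axis=1)) # modulation amplitude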

  4. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key constraints of this calibration are that the norm of the accelerometer measurement vector equals the gravity magnitude and the norm of the gyro measurement vector equals the rotational velocity input. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied to a mathematical error model of the novel calibration; all parameters can then be obtained in this manner. Navigation tests comparing the proposed method with the traditional calibration method confirm the performance of the proposed calibration, which also requires less time than the traditional calibration method. PMID:25177801
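
    A minimal sketch of the accelerometer half of such a calibration, assuming static measurements in several attitudes (the parameter layout, scale/bias model and use of SciPy's Powell implementation are illustrative, not the authors' code): scale factors and biases are chosen so that the norm of every static measurement equals the gravity magnitude.

        import numpy as np
        from scipy.optimize import minimize

        G = 9.80665  # gravity magnitude, m/s^2

        def residual(params, static_meas):
            scale, bias = params[:3], params[3:]
            return sum((np.linalg.norm(scale * (m - bias)) - G) ** 2 for m in static_meas)

        def calibrate(static_meas):
            """static_meas: list of 3-axis accelerometer readings taken at rest."""
            x0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # unit scales, zero biases
            res = minimize(residual, x0, args=(static_meas,), method="Powell")
            return res.x[:3], res.x[3:]                      # (scale factors, biases)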

  5. Improvement of the user interface of multimedia applications by automatic display layout

    NASA Astrophysics Data System (ADS)

    Lueders, Peter; Ernst, Rolf

    1995-03-01

    Multimedia research has mainly focussed on real-time data capture and display combined with compression, storage and transmission of these data. A further problem, however, is the real-time selection and arrangement of a possibly large amount of data from multiple media on the computer screen, together with the textual and graphical data of regular software. This problem is already known from complex software systems, such as CASE and hypertext, and is aggravated in multimedia systems. The aim of our work is to relieve the user of the burden of continuously selecting, placing and sizing windows and their contents, without introducing solutions limited to only a few applications. We present an experimental system which controls the computer screen contents and layouts, directed by a user- and/or tool-provided information filter and prioritization. To be application independent, the screen layout is based on general layout optimization algorithms adapted from VLSI layout and controlled by application-specific objective functions. In this paper, we discuss the problems of a comprehensible screen layout, including the stability of optical information over time, the information filtering, the layout algorithms and the adaptation of the objective function to a specific application. We give examples of different standard applications with layout problems ranging from hierarchical graph layout to window layout. The results show that automatic, tool-independent display layout is possible in a real-time interactive environment.

  6. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
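
    The core two-coloring step for double patterning can be sketched as below (this is a textbook breadth-first 2-coloring, not the authors' modified concurrent algorithm): features closer than the minimum spacing share an edge in a conflict graph, the search assigns alternating masks, and an odd cycle signals that the layout cannot be decomposed onto two masks.

        from collections import deque

        def two_color(n_features, conflicts):
            """conflicts: list of (a, b) pairs of features that are too close together."""
            adj = [[] for _ in range(n_features)]
            for a, b in conflicts:
                adj[a].append(b)
                adj[b].append(a)
            mask = [-1] * n_features
            for start in range(n_features):
                if mask[start] != -1:
                    continue
                mask[start] = 0
                queue = deque([start])
                while queue:
                    u = queue.popleft()
                    for v in adj[u]:
                        if mask[v] == -1:
                            mask[v] = 1 - mask[u]
                            queue.append(v)
                        elif mask[v] == mask[u]:
                            return None        # odd cycle: no two-mask decomposition
            return mask                         # mask[i] is 0 or 1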

  7. Fast complex memory polynomial-based adaptive digital predistorter

    NASA Astrophysics Data System (ADS)

    Singh Sappal, Amandeep; Singh Patterh, Manjeet; Sharma, Sanjay

    2011-07-01

    Today's 3G wireless systems require both high linearity and high power amplifier (PA) efficiency. The high peak-to-average ratios of the digital modulation schemes used in 3G wireless systems require that the RF PA maintain high linearity over a large range while maintaining this high efficiency; these two requirements are often at odds with each other with many of the traditional amplifier architectures. In this article, a fast and easy-to-implement adaptive digital predistorter has been presented for Wideband Code Division Multiplexed signals using complex memory polynomial work function. The proposed algorithm has been implemented to test a Motorola LDMOSFET PA. The proposed technique also takes care of the memory effects of the PA, which have been ignored in many proposed techniques in the literature. The results show that the new complex memory polynomial-based adaptive digital predistorter has better linearisation performance than conventional predistortion techniques.
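
    The memory polynomial itself is compact enough to sketch (coefficient values below are placeholders; in a predistorter they would be identified from measured PA input/output data): each output sample sums nonlinear terms of the current and past inputs, y[n] = sum over k and q of a(k,q) * x[n-q] * |x[n-q]|^(k-1).

        import numpy as np

        def memory_polynomial(x, a):
            """x: complex baseband signal; a: (K, Q+1) complex coefficient matrix."""
            K, Qp1 = a.shape
            y = np.zeros_like(x, dtype=complex)
            for q in range(Qp1):
                xq = np.roll(x, q)
                xq[:q] = 0.0                      # no samples before the start
                for k in range(1, K + 1):
                    y += a[k - 1, q] * xq * np.abs(xq) ** (k - 1)
            return y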

  8. [Fast Implementation Method of Protein Spots Detection Based on CUDA].

    PubMed

    Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong

    2016-02-01

    In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA is proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm (image preprocessing, coarse protein spot detection and overlapping spot segmentation) were studied. Then, following the single-instruction multiple-thread execution model of CUDA, a data-space strategy of dividing two-dimensional (2D) images into blocks was adopted, together with optimization measures such as shared memory and 2D texture memory. The results show that the efficiency of this method is clearly improved compared with CPU computation, and the improvement grows with image size: for an image of size 2,048 x 2,048, the CPU needs 52,641 ms whereas the GPU needs only 4,384 ms.

  9. Fast spectral color image segmentation based on filtering and clustering

    NASA Astrophysics Data System (ADS)

    Xing, Min; Li, Hongyu; Jia, Jinyuan; Parkkinen, Jussi

    2009-10-01

    This paper proposes a fast approach to spectral image segmentation. In the algorithm, two popular techniques are extended and applied to spectral color images: the mean-shift filtering and the kernel-based clustering. We claim that segmentation should be completed under illuminant F11 rather than directly using the original spectral reflectance, because such illumination can reduce data variability and expedite the following filtering. The modes obtained in the mean-shift filtering represent the local features of spectral images, and will be applied to segmentation in place of pixels. Since the modes are generally small in number, the eigendecomposition of kernel matrices, the crucial step in the kernel-based clustering, becomes much easier. The combination of these two techniques can efficiently enhance the performance of segmentation. Experiments show that the proposed segmentation method is feasible and very promising for spectral color images.

  10. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.

  11. Adjoint Optimization of Wind Plant Layouts

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2016-08-31

    Using adjoint optimization and three-dimensional Reynolds-averaged Navier Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The RANS flow model is implemented in the Python finite element package FEniCS and the derivation of the adjoint equations is automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated on idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 windspeed bins.

  12. Adjoint Optimization of Wind Plant Layouts

    SciTech Connect

    King, Ryan N.; Dykes, Katherine; Graf, Peter; Hamlington, Peter E.

    2016-08-31

    Using adjoint optimization and three-dimensional Reynolds-averaged Navier Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The RANS flow model is implemented in the Python finite element package FEniCS and the derivation of the adjoint equations is automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated on idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 windspeed bins.

  13. Refractive index fiber sensor based on Brillouin fast light

    NASA Astrophysics Data System (ADS)

    Chen, Jiali; Gan, Jiulin; Zhang, Zhishen; Yang, Tong; Deng, Huaqiu; Yang, Zhongmin

    2014-01-01

    A new type of refractive index fiber sensor was invented by combining the evanescent-field scattering sensing mechanism with the Brillouin fast light scheme. Superluminal light was realized using Brillouin lasing oscillation in a fiber ring cavity. The refractive index of the solution around the microfiber within the cavity is related to the group velocity of the fast light. This fast light refractive index sensor offers an alternative for high-accuracy sensing applications.

  14. Graphic composite segmentation for PDF documents with complex layouts

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Shi, Cao

    2013-01-01

    Converting PDF books to a re-flowable format has recently attracted considerable interest in the area of e-book reading. Robust graphic segmentation is highly desired to increase the practicability of PDF converters. To cope with various layouts, a multi-layer concept is introduced to segment graphic composites, including photographic images and drawings with text insets or surrounded by text elements. Both image-based analysis and the inherent advantages of born-digital documents are exploited in this multi-layer layout analysis method. By combining low-level page-element clustering applied to PDF documents with connected-component analysis on a synthetically generated PNG image of the document, graphic composites can be segmented for PDF documents with complex layouts. Experimental results on graphic composite segmentation of PDF document pages show satisfactory performance.

  15. Fast and Secure Chaos-Based Cryptosystem for Images

    NASA Astrophysics Data System (ADS)

    Farajallah, Mousa; El Assad, Safwan; Deforges, Olivier

    Nonlinear dynamic cryptosystems or chaos-based cryptosystems have been attracting a large amount of research since 1990. The critical aspect of cryptography is to face the growth of communication and to achieve the design of fast and secure cryptosystems. In this paper, we introduce three versions of a chaos-based cryptosystem based on a similar structure of the Zhang and Fridrich cryptosystems. Each version is composed of two layers: a confusion layer and a diffusion layer. The confusion layer is achieved by using a modified 2-D cat map to overcome the fixed-point problem and some other weaknesses, and also to increase the dynamic key space. The 32-bit logistic map is used as a diffusion layer for the first version, which is more robust than using it in 8-bit. In the other versions, the logistic map is replaced by a modified Finite Skew Tent Map (FSTM) for three reasons: to increase the nonlinearity properties of the diffusion layer, to overcome the fixed-point problem, and to increase the dynamic key space. Finally, all versions of the proposed cryptosystem are more resistant against known attacks and faster than Zhang cryptosystems. Moreover, the dynamic key space is much larger than the one used in Zhang cryptosystems. Performance and security analysis prove that the proposed cryptosystems are suitable for securing real-time applications.

  16. An online planning tool for designing terrace layouts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A web-based conservation planning tool, WebTERLOC (web-based Terrace Location Program), was developed to provide multiple terrace layout options using digital elevation model (DEM) and geographic information systems (GIS). Development of a terrace system is complicated by the time-intensive manual ...

  17. A flexible layout design method for passive micromixers.

    PubMed

    Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui

    2012-10-01

    This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer.

  18. SpicyNodes: radial layout authoring for the general public.

    PubMed

    Douma, Michael; Ligierko, Grzegorz; Ancuta, Ovidiu; Gritsai, Pavel; Liu, Sean

    2009-01-01

    Trees and graphs are relevant to many online tasks such as visualizing social networks, product catalogs, educational portals, digital libraries, the semantic web, concept maps and personalized information management. SpicyNodes is an information-visualization technology that builds upon existing research on radial tree layouts and graph structures. Users can browse a tree, clicking from node to node, as well as successively viewing a node, immediately related nodes and the path back to the "home" nodes. SpicyNodes' layout algorithms maintain balanced layouts using a hybrid mixture of a geometric layout (a succession of spanning radial trees) and force-directed layouts to minimize overlapping nodes, plus several other improvements over prior art. It provides XML-based API and GUI authoring tools. The goal of the SpicyNodes project is to implement familiar principles of radial maps and focus+context with an attractive and inviting look and feel in an open system that is accessible to virtually any Internet user.
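
    A toy radial tree layout in the spirit described above (the force-directed refinement and focus+context behaviour of SpicyNodes are not reproduced; the adjacency dictionary is an illustrative data structure): leaves are spread evenly around a circle, internal nodes take the mean angle of their children, and radius grows with depth.

        import math

        def walk(tree, node):
            yield node
            for child in tree.get(node, []):
                yield from walk(tree, child)

        def radial_layout(tree, root, radius_step=1.0):
            """tree: {node: [children]}; returns {node: (x, y)} positions."""
            leaves = [n for n in walk(tree, root) if not tree.get(n)]
            angle = {leaf: 2.0 * math.pi * i / max(len(leaves), 1)
                     for i, leaf in enumerate(leaves)}
            pos = {}

            def place(node, depth):
                children = tree.get(node, [])
                for child in children:
                    place(child, depth + 1)
                if children:                       # internal node: average its children
                    angle[node] = sum(angle[c] for c in children) / len(children)
                r = depth * radius_step
                pos[node] = (r * math.cos(angle[node]), r * math.sin(angle[node]))

            place(root, 0)
            return pos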

  19. Efficient Video Stitching Based on Fast Structure Deformation.

    PubMed

    Li, Jing; Xu, Wei; Zhang, Jianguo; Zhang, Maojun; Wang, Zhengming; Li, Xuelong

    2015-12-01

    In computer vision, video stitching is a very challenging problem. In this paper, we propose an efficient and effective wide-view video stitching method based on fast structure deformation that simultaneously achieves quality stitching and computational efficiency. For a group of synchronized frames, firstly, an effective double-seam selection scheme is designed to search for two distinct but structurally corresponding seams in the two original images. The seam location of the previous frame is also considered to preserve interframe consistency. Secondly, along the double seams, 1-D feature detection and matching is performed to capture the structural relationship between the two adjacent views. Thirdly, after feature matching, we propose an efficient algorithm to linearly propagate the deformation vectors to eliminate structure misalignment. Finally, image intensity misalignment is corrected by rapid gradient fusion based on a successive over-relaxation iteration (SORI) solver. A principled solution to the initialization of the SORI significantly reduces the number of iterations required. We have compared our method with seven state-of-the-art image and video stitching algorithms as well as traditional ones. Experimental results show that our method outperforms the compared methods in terms of overall stitching quality and computational efficiency.

  20. Online medical journal article layout analysis

    NASA Astrophysics Data System (ADS)

    Zou, Jie; Le, Daniel; Thoma, George R.

    2007-01-01

    We describe a physical and logical layout analysis algorithm, which is applied to segment and label online medical journal articles (regular HTML and PDF-Converted-HTML files). For these articles, the geometric layout of the Web page is the most important cue for physical layout analysis. The key to physical layout analysis is then to render the HTML file in a Web browser, so that the visual information in zones (composed of one or a set of HTML DOM nodes), especially their relative position, can be utilized. The recursive X-Y cut algorithm is adopted to construct a hierarchical zone tree structure. In logical layout analysis, both geometric and linguistic features are used. The HTML documents are modeled by a Hidden Markov Model with 16 states, and the Viterbi algorithm is then used to find the optimal label sequence, concluding the logical layout analysis.
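
    The recursive X-Y cut used for the physical analysis can be sketched on a binary page image as below (the whitespace threshold is illustrative, and the real system works on rendered HTML zones rather than pixels): the page is split at the widest horizontal or vertical whitespace gap and the two halves are processed recursively until no such gap remains.

        import numpy as np

        def widest_gap(profile, min_gap):
            """(start, end) of the widest all-zero run of length >= min_gap, else None."""
            best, run_start = None, None
            for i, v in enumerate(list(profile) + [1]):    # sentinel closes a trailing run
                if v == 0 and run_start is None:
                    run_start = i
                elif v != 0 and run_start is not None:
                    if i - run_start >= min_gap and (best is None or i - run_start > best[1] - best[0]):
                        best = (run_start, i)
                    run_start = None
            return best

        def xy_cut(page, top=0, left=0, min_gap=10, zones=None):
            """page: binary image (1 = ink); returns (top, left, height, width) zones."""
            if zones is None:
                zones = []
            if page.size == 0 or page.sum() == 0:
                return zones                                # nothing but whitespace
            h = widest_gap(page.sum(axis=1), min_gap)       # horizontal cut (rows)
            v = widest_gap(page.sum(axis=0), min_gap)       # vertical cut (columns)
            if h is not None:
                xy_cut(page[:h[0]], top, left, min_gap, zones)
                xy_cut(page[h[1]:], top + h[1], left, min_gap, zones)
            elif v is not None:
                xy_cut(page[:, :v[0]], top, left, min_gap, zones)
                xy_cut(page[:, v[1]:], top, left + v[1], min_gap, zones)
            else:
                zones.append((top, left, page.shape[0], page.shape[1]))
            return zones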

  1. A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.

    PubMed

    Ramsson, Eric S

    2016-01-01

    Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.

  2. Layout techniques for integrated circuits

    SciTech Connect

    Tsay, C.Y.

    1986-01-01

    Several techniques are presented for solving circuit-layout problems. In particular, a channel-placement algorithm is first introduced to reduce the channel density (d) so that a channel router can complete the routing requirements in fewer tracks. A 4-layer channel-routing model is then formulated so that a general channel routing problem (CRP) with cyclic conflicts and long critical paths can be completed with d/2. Finally, the 4-layer, 2-dimensional switchbox routing problem needed to enhance the channel routing in general circuit layout is investigated from the graph-theoretical viewpoint. The channel-placement technique consists of two phases. Using the principle of decomposition, the initial placement phase effectively reduces the complexity of the problem and, therefore, improves the efficiency of the second phase, which is called the iterative improvement placement. The main feature of this phase is its hill-climbing ability to avoid being trapped at local minima. The combination of these two phases leads to an efficient technique for standard cell placement. To utilize multi-layer technology, a new 4-layer channel routing model is introduced to minimize the channel width of more-generalized CRP's. The 2-dimensional switchbox routing problem is transformed to an equivalent graph-theoretical problem.

  3. Layout modification for library cell Alt-PSM composability

    NASA Astrophysics Data System (ADS)

    Cao, Ke; Hu, Jiang; Cheng, Mosong

    2004-05-01

    In sub-wavelength lithography, light field Alt-PSM (Alternating Phase Shifting Mask) is an essential technology for poly layer printability. In a standard cell based design, the problem of obtaining Alt-PSM compliance for an individual cell layout has been solved well [3]. However, placing Alt-PSM compliant cells together can not guarantee Alt-PSM compliance of the entire chip/block layout due to phase interactions among adjacent cells. A simple solution to this Alt-PSM composability problem is to wrap blank area around each cell, which is very inefficient on chip area usage. In this paper, we formulate the composability problem as a graph model and propose a polynomial time optimal algorithm to achieve Alt-PSM composability with the least impact on cell layout.

  4. Impact of data layouts on the efficiency of GPU-accelerated IDW interpolation.

    PubMed

    Mei, Gang; Tian, Hong

    2016-01-01

    This paper focuses on evaluating the impact of different data layouts on the computational efficiency of the GPU-accelerated Inverse Distance Weighting (IDW) interpolation algorithm. First, we redesign and improve our previous GPU implementation, which exploited CUDA dynamic parallelism (CDP). Then we implement three GPU versions, i.e., the naive version, the tiled version, and the improved CDP version, based upon five data layouts: the Structure of Arrays (SoA), the Array of Structures (AoS), the Array of aligned Structures (AoaS), the Structure of Arrays of aligned Structures (SoAoS), and the Hybrid layout. We also carry out several groups of experimental tests to evaluate the impact. Experimental results show that the layouts AoS and AoaS achieve better performance than SoA for both the naive and tiled versions, while SoA is the best choice for the improved CDP version. We also observe that for the two combined data layouts (SoAoS and Hybrid) there are no notable performance gains compared to the other three basic layouts. We recommend that in practical applications the layout AoaS is the best choice, since the tiled version is the fastest of the three versions. The source code of all implementations is publicly available.
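
    The difference between the basic layouts can be pictured in a few lines (NumPy stands in for the CUDA kernels of the paper; the point count and query are arbitrary): the IDW arithmetic is the same, only the way the point coordinates are stored changes.

        import numpy as np

        n = 1000
        # Array of Structures: one interleaved record per data point.
        aos = np.zeros(n, dtype=[("x", np.float32), ("y", np.float32), ("z", np.float32)])
        # Structure of Arrays: one contiguous array per field.
        soa = {"x": np.zeros(n, np.float32),
               "y": np.zeros(n, np.float32),
               "z": np.zeros(n, np.float32)}

        def idw(qx, qy, xs, ys, zs, power=2.0):
            """Inverse Distance Weighting of query point (qx, qy) from known points."""
            d2 = (xs - qx) ** 2 + (ys - qy) ** 2
            w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2.0)
            return float(np.sum(w * zs) / np.sum(w))

        value_from_aos = idw(0.5, 0.5, aos["x"], aos["y"], aos["z"])
        value_from_soa = idw(0.5, 0.5, soa["x"], soa["y"], soa["z"])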

  5. Fast recognition of musical sounds based on timbre.

    PubMed

    Agus, Trevor R; Suied, Clara; Thorpe, Simon J; Pressnitzer, Daniel

    2012-05-01

    Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice-versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.

  6. Fast CEUS image segmentation based on self organizing maps

    NASA Astrophysics Data System (ADS)

    Paire, Julie; Sauvage, Vincent; Albouy-Kissi, Adelaïde; Ladam Marcus, Viviane; Marcus, Claude; Hoeffel, Christine

    2014-03-01

    Contrast-enhanced ultrasound (CEUS) has recently become an important technology for lesion detection and characterization. CEUS is used to investigate perfusion kinetics in tissue over time, which relates to tissue vascularization. In this paper, we present an interactive segmentation method based on neural networks that enables segmentation of malignant tissue in CEUS sequences. We use Self-Organizing Maps (SOM), an unsupervised neural network, to project high-dimensional data to a low-dimensional space called a map of neurons. The algorithm gathers the observations into clusters while respecting the topology of the observation space, which means that a notion of neighborhood between classes is defined: observations that are adjacent in variable space belong to the same class or to related classes after classification. Thanks to this neighborhood-preservation property, combined with suitable feature extraction, the map provides a user-friendly segmentation tool that assists the expert in tumor segmentation with fast and easy intervention. We implement the SOM on a graphics processing unit (GPU) to accelerate processing, which allows more iterations and lets the learning process converge more precisely, yielding better learning quality and thus better classification. Our approach allows us to identify and delineate lesions accurately. Our results show that this method markedly improves the recognition of liver lesions and opens the way for future precise quantification of contrast enhancement.
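
    A bare-bones SOM training loop is sketched below to make the neighborhood-preserving update concrete (map size, learning rate and radius are illustrative, and the GPU acceleration and CEUS-specific feature extraction are omitted): each sample pulls its best-matching unit and that unit's grid neighbors towards itself.

        import numpy as np

        def train_som(data, rows=8, cols=8, epochs=20, lr=0.5, radius=3.0):
            """data: (n_samples, n_features) array; returns a (rows, cols, n_features) map."""
            rng = np.random.default_rng(0)
            w = rng.random((rows, cols, data.shape[1]))
            grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
            for t in range(epochs):
                decay = np.exp(-t / epochs)
                for x in data:
                    d = np.linalg.norm(w - x, axis=-1)
                    bmu = np.unravel_index(np.argmin(d), d.shape)        # best-matching unit
                    dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
                    h = np.exp(-dist2 / (2.0 * (radius * decay) ** 2))   # neighborhood weight
                    w += (lr * decay) * h[..., None] * (x - w)
            return w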

  7. [Fast discrimination of edible vegetable oil based on Raman spectroscopy].

    PubMed

    Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng

    2012-07-01

    A novel method to fast discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils with known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the unsaturated degree of vegetable oil were selected as feature vectors; then the centers of all classes were calculated. For an edible vegetable oil with unknown class, the same pretreatment and feature extraction methods were used. The Euclidian distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the class distance was much larger with the new feature extraction method compared with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
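
    The minimum-distance classification described above amounts to a nearest-centroid rule, sketched here with NumPy (the two-peak feature extraction from the pretreated Raman spectra is assumed to have been done already):

        import numpy as np

        def fit_centroids(features, labels):
            """features: (n_samples, 2) peak features; labels: class of each training sample."""
            return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

        def classify(sample, centroids):
            """Assign the class whose center is nearest in Euclidean distance."""
            return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))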

  8. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.

  9. Offshore wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Elkinton, Christopher Neil

    Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of

  10. Control room layout, 1941, drawn by Waddell and Hardesty, New ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Control room layout, 1941, drawn by Waddell and Hardesty, New York, New York. Drawing in collection of Caretaker Site Office, Philadelphia Naval Business Center, Philadelphia, Pennsylvania. - Naval Base Philadelphia-Philadelphia Naval Shipyard, Lift Bridge, Mouth of Reserve Basin, League Island, Philadelphia, Philadelphia County, PA

  11. Layout and Design. Module 2. Commercial Art. Instructor's Guide.

    ERIC Educational Resources Information Center

    Benke, Tom; And Others

    This module is the second of five in the Commercial Art series. The curriculum guide is designed for competency-based teaching and testing. Within this module on layout and design are eight instructional units. A cross-reference table reveals how the instructional components of the module relate to Missouri competencies. Each unit includes some or…

  12. Optimized layout generator for microgyroscope

    NASA Astrophysics Data System (ADS)

    Tay, Francis E.; Li, Shifeng; Logeeswaran, V. J.; Ng, David C.

    2000-10-01

    This paper presents an optimized out-of-plane microgyroscope layout generator using AutoCAD R14 and MS Excel as a first attempt at automating the design of resonant micro-inertial sensors. The out-of-plane microgyroscope with a two-degree-of-freedom lumped parameter model was chosen as the synthesis topology. An analytical model of open-loop operation has been derived for the gyroscope performance characteristics. Functional performance parameters such as sensitivity are ensured to be satisfied while simultaneously optimizing a design objective such as minimum area. A single algorithm optimizes the microgyroscope dimensions while simultaneously maximizing or minimizing the objective functions: maximum sensitivity and minimum area. The multi-criteria objective function and optimization methodology were implemented using the Generalized Reduced Gradient algorithm. For data conversion, a DXF to GDS converter was used. The optimized theoretical design performance parameters show good agreement with finite element analysis.

  13. Economics of wind farm layout

    SciTech Connect

    Germain, A.C.; Bain, D.A.

    1997-12-31

    The life cycle cost of energy (COE) is the primary determinant of the economic viability of a wind energy generation facility. The cost of wind turbines and associated hardware is counterbalanced by the energy which can be generated. This paper focuses on the turbine layout design process, considering the cost and energy capture implications of potential spacing options from the viewpoint of a practicing project designer. It is argued that lateral spacings in the range of 1.5 to 5 diameters are all potentially optimal, but only when matched to wind resource characteristics and machine design limits. The effect of wakes on energy capture is quantified while the effect on turbine life and maintenance cost is discussed qualitatively. Careful optimization can lower COE and project designers are encouraged to integrate the concepts in project designs.

  14. Automatic Layout Design for Power Module

    SciTech Connect

    Ning, Puqi; Wang, Fei; Ngo, Khai

    2013-01-01

    The layout of power modules is one of the key points in power module design, especially at high power densities, where couplings are increased. In this paper, along with a design example, an automatic design process using a genetic algorithm is presented. Some practical considerations and implementations in the optimization of module layout design are introduced.
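
    The sketch below illustrates, under assumed data, the kind of genetic-algorithm search the abstract refers to: candidate layouts assign dies to substrate slots and evolve toward shorter interconnects. The netlist, slot grid, and cost function are placeholders; the paper's coupling-aware objectives are not modeled.

```python
# A toy genetic algorithm for module layout: each individual assigns dies to
# candidate slots on the substrate, and fitness is total Manhattan interconnect
# length over an assumed netlist. Coupling-aware costs from the paper are omitted.
import random

slots = [(x, y) for x in range(3) for y in range(3)]   # 9 candidate positions
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]        # connected die pairs (assumed)
n_dies = 5

def wirelength(assign):                                # assign[i] = slot index of die i
    total = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = slots[assign[a]], slots[assign[b]]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def crossover(p, q):                                   # keep a prefix of p, fill from q
    cut = random.randrange(1, n_dies)
    child = p[:cut] + [s for s in q if s not in p[:cut]]
    return child[:n_dies]

def mutate(ind):                                       # swap two die positions
    i, j = random.sample(range(n_dies), 2)
    ind[i], ind[j] = ind[j], ind[i]

random.seed(0)
population = [random.sample(range(len(slots)), n_dies) for _ in range(40)]
for _ in range(200):
    population.sort(key=wirelength)
    parents, children = population[:10], []
    while len(children) < 30:
        child = crossover(*random.sample(parents, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    population = parents + children

best = min(population, key=wirelength)
print("best slot assignment:", best, "wirelength:", wirelength(best))
```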

  15. Offshore wind farm electrical cable layout optimization

    NASA Astrophysics Data System (ADS)

    Pillai, A. C.; Chick, J.; Johanning, L.; Khorasanchi, M.; de Laleu, V.

    2015-12-01

    This article explores an automated approach for the efficient placement of substations and the design of an inter-array electrical collection network for an offshore wind farm through the minimization of the cost. To accomplish this, the problem is represented as a number of sub-problems that are solved in series using a combination of heuristic algorithms. The overall problem is first solved by clustering the turbines to generate valid substation positions. From this, a navigational mesh pathfinding algorithm based on Delaunay triangulation is applied to identify valid cable paths, which are then used in a mixed-integer linear programming problem to solve for a constrained capacitated minimum spanning tree considering all realistic constraints. The final tree that is produced represents the solution to the inter-array cable problem. This method is applied to a planned wind farm to illustrate the suitability of the approach and the resulting layout that is generated.
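
    A simplified sketch of that pipeline is shown below: the turbine centroid stands in for the clustering-derived substation position, Delaunay edges provide candidate cable segments, and a plain minimum spanning tree approximates the inter-array network. The capacity limits, obstacle-aware pathfinding, and the mixed-integer formulation of the paper are deliberately omitted.

```python
# A simplified inter-array layout: the turbine centroid acts as the substation,
# Delaunay edges give candidate cable segments, and an ordinary minimum spanning
# tree approximates the network. Cable capacities and the MILP stage are ignored.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
turbines = rng.uniform(0, 5000, size=(30, 2))            # turbine coordinates (m), assumed
substation = turbines.mean(axis=0)                       # single-cluster centroid
nodes = np.vstack([substation, turbines])                # node 0 = substation

G = nx.Graph()
for simplex in Delaunay(nodes).simplices:                # sparse candidate edges
    for i in range(3):
        a, b = int(simplex[i]), int(simplex[(i + 1) % 3])
        G.add_edge(a, b, weight=float(np.linalg.norm(nodes[a] - nodes[b])))

mst = nx.minimum_spanning_tree(G)                        # approximate collection network
print("total cable length (m):", sum(d["weight"] for *_, d in mst.edges(data=True)))
```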

  16. Progressive classification scheme for document layout recognition

    NASA Astrophysics Data System (ADS)

    Minguillon, Julian; Pujol, Jaume; Zeger, Kenneth

    1999-06-01

    In this paper, we present a progressive classification scheme for a document layout recognition system using three stages. The first stage, preprocessing, extracts statistical information that may be used for background detection and removal. The second stage, a tree-based classifier, uses a variable block size and a set of probabilistic rules to classify segmented blocks independently. The third stage, postprocessing, uses the label map generated in the second stage with a set of context rules to label unclassified blocks, also trying to resolve some of the misclassification errors that may have been generated during the previous stage. The progressive scheme used in the second and third stages allows the user to stop the classification process at any block size, depending on their requirements. Experiments show that a progressive scheme combined with a set of postprocessing rules increases the percentage of correctly classified blocks and reduces the number of block computations.

  17. Printed circuit board layout by microcomputer

    NASA Astrophysics Data System (ADS)

    Krausman, E. W.

    1983-12-01

    Printed circuit board artwork is usually prepared manually because of the unavailability of computer-aided-design tools. This thesis presents the design of a microcomputer based printed circuit board layout system that is easy to use and cheap. Automatic routing and component placement routines will significantly speed up the process. The design satisfies the following requirements: Microcomputer implementation, portable, algorithm independent, interactive, and user friendly. When it is fully implemented a user will be able to select components and a board outline from an automated catalog, enter a schematic diagram, position the components on the board, and completely route the board from a single graphics terminal. Currently, the user interface and the outer level command processor have been implemented in Pascal. Future versions will be written in C for better portability.

  18. A Randomized Field Trial of the Fast ForWord Language Computer-Based Training Program

    ERIC Educational Resources Information Center

    Borman, Geoffrey D.; Benson, James G.; Overman, Laura

    2009-01-01

    This article describes an independent assessment of the Fast ForWord Language computer-based training program developed by Scientific Learning Corporation. Previous laboratory research involving children with language-based learning impairments showed strong effects on their abilities to recognize brief and fast sequences of nonspeech and speech…

  19. Fast and accurate line scanner based on white light interferometry

    NASA Astrophysics Data System (ADS)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry where the incident light is

  20. Nonuniform fast Fourier transform-based fast back-projection algorithm for stepped frequency continuous wave ground penetrating radar imaging

    NASA Astrophysics Data System (ADS)

    Qu, Lele; Yin, Yuqing

    2016-10-01

    Stepped frequency continuous wave ground penetrating radar (SFCW-GPR) systems are becoming increasingly popular in the GPR community due to their wider dynamic range and higher immunity to radio interference. The traditional back-projection (BP) algorithm is preferable for SFCW-GPR imaging in layered-medium scenarios for its convenience and robustness. However, the existing BP imaging algorithms are usually very computationally intensive, which limits their practical application to SFCW-GPR imaging. To solve the above problem, a fast SFCW-GPR BP imaging algorithm based on the nonuniform fast Fourier transform (NUFFT) technique is proposed in this paper. By reformulating the traditional BP imaging algorithm as evaluations of NUFFTs, the computational efficiency of the NUFFT is exploited to reduce the computational complexity of the image reconstruction. Both simulation and experimental results have verified the effectiveness and the improvement in computational efficiency of the proposed imaging method.
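
    For reference, the sketch below spells out the direct back-projection sum that such methods accelerate: each pixel coherently sums the stepped-frequency data phase-shifted by the two-way travel time to every antenna position. The antenna geometry, frequency plan, and data array are placeholder assumptions, and the NUFFT reformulation itself is not reproduced.

```python
# Direct back-projection for SFCW-GPR data (the O(pixels x antennas x frequencies)
# baseline that NUFFT-based methods accelerate). Geometry, frequencies, and the
# zero-filled data array are placeholders; a homogeneous half-space is assumed.
import numpy as np

c = 3e8                                             # propagation speed (m/s), free space
freqs = np.linspace(0.5e9, 3.0e9, 101)              # stepped frequencies (Hz)
ant_x = np.linspace(-0.5, 0.5, 21)                  # antenna positions along a line (m)
data = np.zeros((len(ant_x), len(freqs)), complex)  # S(antenna, frequency) from the radar

xs = np.linspace(-0.5, 0.5, 64)                     # image grid (m)
zs = np.linspace(0.05, 1.0, 64)
image = np.zeros((len(zs), len(xs)), complex)

for ia, xa in enumerate(ant_x):
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            tau = 2.0 * np.hypot(x - xa, z) / c     # two-way delay antenna -> pixel
            image[iz, ix] += np.sum(data[ia] * np.exp(2j * np.pi * freqs * tau))

reflectivity = np.abs(image)                        # focused image magnitude
```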

  1. Mining-Induced Coal Permeability Change Under Different Mining Layouts

    NASA Astrophysics Data System (ADS)

    Zhang, Zetian; Zhang, Ru; Xie, Heping; Gao, Mingzhong; Xie, Jing

    2016-09-01

    To comprehensively understand the mining-induced coal permeability change, a series of laboratory unloading experiments are conducted based on a simplifying assumption of the actual mining-induced stress evolution processes of three typical longwall mining layouts in China, i.e., non-pillar mining (NM), top-coal caving mining (TCM) and protective coal-seam mining (PCM). A theoretical expression of the mining-induced permeability change ratio (MPCR) is derived and validated by laboratory experiments and in situ observations. The mining-induced coal permeability variation under the three typical mining layouts is quantitatively analyzed using the MPCR based on the test results. The experimental results show that the mining-induced stress evolution processes of different mining layouts do have an influence on the mechanical behavior and evolution of MPCR of coal. The coal mass in the PCM simulation has the lowest stress concentration but the highest peak MPCR (approximately 4000 %), whereas the opposite trends are observed for the coal mass under NM. The results of the coal mass under TCM fall between those for PCM and NM. The evolution of the MPCR of coal under different layouts can be divided into three sections, i.e., stable increasing section, accelerated increasing section and reducing section, but the evolution processes are slightly different for the different mining layouts. A coal bed gas intensive extraction region is recommended based on the MPCR distribution of coal seams obtained by simplifying assumptions and the laboratory testing results. The presented results are also compared with existing conventional triaxial compression test results to fully comprehend the effect of actual mining-induced stress evolution on coal property tests.

  2. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    PubMed

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; processing the gyro data collected aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to respectively characterize two sets of simulation data. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the length of the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. The simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment testify that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series.

  3. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    PubMed Central

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; processing the gyro data collected aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to respectively characterize two sets of simulation data. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the length of the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. The simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment testify that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series. PMID:27941600
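
    The sketch below illustrates the underlying idea of a dynamic Allan variance: the ordinary Allan variance is evaluated inside a window that slides along the gyro record, exposing non-stationary behavior. The fast and discontinuous-data refinements of the paper are not reproduced, and the sample rate, window sizes, and synthetic gyro record are assumptions.

```python
# Allan variance evaluated in sliding windows (a plain, non-fast dynamic Allan
# variance). Window/step sizes and the synthetic record are assumed.
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance of a rate series for a cluster of m samples."""
    n_clusters = len(rate) // m
    means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def dynamic_allan_variance(rate, window, step, cluster_sizes):
    """Rows: window start positions; columns: cluster sizes."""
    starts = range(0, len(rate) - window + 1, step)
    return np.array([[allan_variance(rate[s:s + window], m) for m in cluster_sizes]
                     for s in starts])

gyro = np.random.default_rng(0).normal(0.0, 0.01, 200_000)   # placeholder gyro record
davar = dynamic_allan_variance(gyro, window=20_000, step=5_000,
                               cluster_sizes=[1, 10, 100, 1000])
print(davar.shape)   # (number of windows, number of cluster sizes)
```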

  4. Multi-level graph layout on the GPU.

    PubMed

    Frishman, Yaniv; Tal, Ayellet

    2007-01-01

    This paper presents a new algorithm for force directed graph layout on the GPU. The algorithm, whose goal is to compute layouts accurately and quickly, has two contributions. The first contribution is proposing a general multi-level scheme, which is based on spectral partitioning. The second contribution is computing the layout on the GPU. Since the GPU requires a data parallel programming model, the challenge is devising a mapping of a naturally unstructured graph into a well-partitioned structured one. This is done by computing a balanced partitioning of a general graph. This algorithm provides a general multi-level scheme, which has the potential to be used not only for computation on the GPU, but also on emerging multi-core architectures. The algorithm manages to compute high quality layouts of large graphs in a fraction of the time required by existing algorithms of similar quality. An application for visualization of the topologies of ISP (Internet Service Provider) networks is presented.
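
    As context for the abstract above, the sketch below shows a single-level, CPU-only force-directed iteration of the kind the paper maps onto the GPU: all node pairs repel, edge endpoints attract, and a cooling step limits movement. The multi-level spectral partitioning and the GPU data layout are not reproduced.

```python
# Single-level force-directed layout (Fruchterman-Reingold style) in plain numpy;
# the paper's multi-level spectral partitioning and GPU kernels are not reproduced.
import numpy as np

def force_directed_layout(n, edges, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n, 2))
    k = 1.0 / np.sqrt(n)                                    # ideal edge length
    for step in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]           # pairwise displacements
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        disp = ((k * k / dist ** 2)[:, :, None] * delta).sum(axis=1)   # repulsion
        for a, b in edges:                                  # attraction along edges
            d = pos[a] - pos[b]
            pull = (np.linalg.norm(d) / k) * d
            disp[a] -= pull
            disp[b] += pull
        temp = 0.1 * (1.0 - step / iters)                   # cooling schedule
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, temp)     # limit each move to temp
        pos -= pos.mean(axis=0)                             # keep layout centered
    return pos

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]            # toy graph
print(force_directed_layout(5, edges))
```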

  5. Optimised layout and roadway support planning with integrated intelligent software

    SciTech Connect

    Kouniali, S.; Josien, J.P.; Piguet, J.P.

    1996-12-01

    Experience with knowledge-based systems for layout planning and roadway support dimensioning has been on hand in European coal mining since 1985. The systems SOUT (support choice and dimensioning, 1989), SOUT 2, PLANANK (planning of bolt support), Exos (layout planning diagnosis, 1994) and Sout 3 (1995) have been developed in close cooperation by CdF, INERIS, EMN (France) and RAG, DMT, TH Aachen (Germany); ISLSP (Integrated Software for Layout and Support Planning) development is in progress (completion scheduled for July 1996). This new software technology, in combination with conventional programming systems, numerical models and existing databases, has turned out to be well suited to setting up an intelligent decision aid for layout and roadway support planning. The system enhances the reliability of planning and optimises the safety-to-cost ratio for (1) deformation forecasts for roadways in seam and surrounding rocks, with consideration of the general position of the roadway in the rock mass (zones of increased pressure, position of operating and mined panels); (2) support dimensioning; (3) yielding arches, rigid arches, porch sets, rigid rings, yielding rings and bolting/shotcreting for drifts; (4) yielding arches, rigid arches and porch sets for roadways in seam; and (5) bolt support for gateroads (assessment of exclusion criteria and calculation of the bolting pattern) and bolting of face-end zones (feasibility and safety assessment; stability guarantee).

  6. Luminaire layout: Design and implementation

    NASA Technical Reports Server (NTRS)

    Both, A. J.

    1994-01-01

    The information contained in this report was presented during the discussion regarding guidelines for PAR uniformity in greenhouses. The data show a lighting uniformity analysis in a research greenhouse for rose production on the Cornell University campus. The luminaire layout was designed using the computer program Lumen-Micro. After implementation of the design, accurate measurements were taken in the greenhouse and the uniformity analyses for both the design and the implementation were compared. A study of several supplemental lighting installations resulted in the following recommendations: include only the actual growing area in the lighting uniformity analysis; for growing areas up to 20 square meters, take four measurements per square meter; for growing areas above 20 square meters, take one measurement per square meter; use one of the uniformity criteria and frequency graphs to compare lighting uniformity amongst designs; and design for a uniformity criterion of at least 0.75 with a fraction within +/- 15% of the average PAR value close to one.

  7. Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography

    PubMed Central

    Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.

    2008-01-01

    Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm to infer geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbations induced by fiber crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703

  8. High power, fast, microwave components based on beam generated plasmas

    NASA Astrophysics Data System (ADS)

    Manheimer, W. M.; Fernsler, R. F.; Gitlin, M. S.

    1998-10-01

    It is shown that the agile mirror plasma, under development as a device to simply and cheaply give electronic steering to microwave beams, also has application as a fast, electronically controlled, high power reflector, or phase shifter. In a radar system, this can lead to such applications as pulse to pulse polarization agility and electronic control of antenna gain, as well as to innovative approaches to high power millimeter wave circulators. The basic theory of the enhanced glow plasma is also developed.

  9. A novel fast full inversion based breast ultrasound elastography technique.

    PubMed

    Karimi, Hirad; Fenster, Aaron; Samani, Abbas

    2013-04-07

    Cancer detection and classification have been the focus of many imaging and therapeutic research studies. Elastography is a non-invasive technique to visualize suspicious soft tissue areas where tissue stiffness is used as the image contrast mechanism. In this study, a breast ultrasound elastography system including software and hardware is proposed. Unlike current elastography systems that image the tissue strain and present it as an approximation to relative tissue stiffness, this system is capable of imaging the breast's absolute Young's modulus in a fast fashion. To improve the quality of elastography images, a novel system consisting of two load cells has been attached to the ultrasound probe. The load cells measure the breast surface forces to be used for calculating the tissue stress distribution throughout the breast. To facilitate fast imaging, this stress calculation is conducted by an accelerated finite element method. Acquired tissue displacements and surface force data are used as input to the proposed Young's modulus reconstruction technique. Numerical and tissue-mimicking phantom studies were conducted for validating the proposed system. These studies indicated that fast imaging of the breast tissue's absolute Young's modulus using the proposed ultrasound elastography system is feasible. The tissue-mimicking phantom study indicated that the system is capable of providing reliable absolute Young's modulus values for both normal tissue and tumour, as the maximum Young's modulus reconstruction error was less than 6%. This demonstrates that the proposed system has good potential to be used for clinical breast cancer assessment.

  10. A novel fast full inversion based breast ultrasound elastography technique

    NASA Astrophysics Data System (ADS)

    Karimi, Hirad; Fenster, Aaron; Samani, Abbas

    2013-04-01

    Cancer detection and classification have been the focus of many imaging and therapeutic research studies. Elastography is a non-invasive technique to visualize suspicious soft tissue areas where tissue stiffness is used as the image contrast mechanism. In this study, a breast ultrasound elastography system including software and hardware is proposed. Unlike current elastography systems that image the tissue strain and present it as an approximation to relative tissue stiffness, this system is capable of imaging the breast's absolute Young's modulus in a fast fashion. To improve the quality of elastography images, a novel system consisting of two load cells has been attached to the ultrasound probe. The load cells measure the breast surface forces to be used for calculating the tissue stress distribution throughout the breast. To facilitate fast imaging, this stress calculation is conducted by an accelerated finite element method. Acquired tissue displacements and surface force data are used as input to the proposed Young's modulus reconstruction technique. Numerical and tissue-mimicking phantom studies were conducted for validating the proposed system. These studies indicated that fast imaging of the breast tissue's absolute Young's modulus using the proposed ultrasound elastography system is feasible. The tissue-mimicking phantom study indicated that the system is capable of providing reliable absolute Young's modulus values for both normal tissue and tumour, as the maximum Young's modulus reconstruction error was less than 6%. This demonstrates that the proposed system has good potential to be used for clinical breast cancer assessment.

  11. Non-Manhattan layout extraction algorithm

    NASA Astrophysics Data System (ADS)

    Satkhozhina, Aziza; Ahmadullin, Ildus; Allebach, Jan P.; Lin, Qian; Liu, Jerry; Tretter, Daniel; O'Brien-Strain, Eamonn; Hunter, Andrew

    2013-03-01

    Automated publishing requires large databases containing document page layout templates. The number of layout templates that need to be created and stored grows exponentially with the complexity of the document layouts. A better approach for automated publishing is to reuse layout templates of existing documents for the generation of new documents. In this paper, we present an algorithm for template extraction from a document page image. We use the cost-optimized segmentation algorithm (COS) to segment the image, and Voronoi decomposition to cluster the text regions. Then, we create a block image where each block represents a homogeneous region of the document page. We construct a geometrical tree that describes the hierarchical structure of the document page. We also implement a font recognition algorithm to analyze the font of each text region. We present a detailed description of the algorithm and our preliminary results.

  12. Automatic metro map layout using multicriteria optimization.

    PubMed

    Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G

    2011-01-01

    This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value, and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when finding improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically-drawn metro maps can help users to find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically-drawn maps over the alternatives.
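
    The loop below sketches the weighted-sum hill climbing described above with two toy metrics (edge-length uniformity and a preference for axis-aligned edges) standing in for the paper's full metric set; cluster moves and labeling are omitted, and all coordinates and weights are assumptions.

```python
# Weighted-sum hill climbing over station positions with two toy metrics;
# coordinates, weights, and metrics are assumptions, and cluster moves are omitted.
import math
import random

stations = {"A": (0.0, 0.0), "B": (1.3, 0.2), "C": (2.1, 1.9), "D": (3.0, 2.2)}
lines = [("A", "B"), ("B", "C"), ("C", "D")]
weights = (1.0, 0.5)                                  # metric weights in the sum

def fitness(pos):
    lengths = [math.dist(pos[a], pos[b]) for a, b in lines]
    mean = sum(lengths) / len(lengths)
    uniformity = sum((l - mean) ** 2 for l in lengths)          # penalize uneven edges
    axis_penalty = sum(min(abs(pos[a][0] - pos[b][0]), abs(pos[a][1] - pos[b][1]))
                       for a, b in lines)                       # zero when axis-aligned
    return weights[0] * uniformity + weights[1] * axis_penalty

random.seed(1)
pos, best = dict(stations), fitness(stations)
for _ in range(5000):                                 # hill climbing: keep improving moves
    name = random.choice(list(pos))
    old = pos[name]
    pos[name] = (old[0] + random.uniform(-0.1, 0.1), old[1] + random.uniform(-0.1, 0.1))
    new = fitness(pos)
    if new < best:
        best = new
    else:
        pos[name] = old                               # revert a non-improving move
print(best, pos)
```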

  13. Automatic layout of structured hierarchical reports.

    PubMed

    Bakke, Eirik; Karger, David R; Miller, Robert C

    2013-12-01

    Domain-specific database applications tend to contain a sizable number of table-, form-, and report-style views that must each be designed and maintained by a software developer. A significant part of this job is the necessary tweaking of low-level presentation details such as label placements, text field dimensions, list or table styles, and so on. In this paper, we present a horizontally constrained layout management algorithm that automates the display of structured hierarchical data using the traditional visual idioms of hand-designed database UIs: tables, multi-column forms, and outline-style indented lists. We compare our system with pure outline and nested table layouts with respect to space efficiency and readability, the latter with an online user study on 27 subjects. Our layouts are 3.9 and 1.6 times more compact on average than outline layouts and horizontally unconstrained table layouts, respectively, and are as readable as table layouts even for large datasets.

  14. Vision-based fast navigation of micro aerial vehicles

    NASA Astrophysics Data System (ADS)

    Loianno, Giuseppe; Kumar, Vijay

    2016-05-01

    We address the key challenges of autonomous fast flight for Micro Aerial Vehicles (MAVs) in 3-D, cluttered environments. For complete autonomy, the system must identify the vehicle's state at high rates using absolute or relative asynchronous on-board sensor measurements, use these state estimates for feedback control, and plan trajectories to the destination. State estimation requires fusing information from different, possibly asynchronous sensors operating at different rates. In this work, we present techniques in the areas of planning, control and visual-inertial state estimation for fast navigation of MAVs. We demonstrate how to solve, on-board and on a small computational unit, the pose estimation, control and planning problems for MAVs, using a minimal sensor suite for autonomous navigation composed of a single camera and an IMU. Additionally, we show that a consumer electronic device such as a smartphone can alternatively be employed for both sensing and computation. Experimental results validate the proposed techniques. Any consumer, provided with a smartphone, can autonomously drive a quadrotor platform at high speed, without GPS, and concurrently build 3-D maps, using a suitably designed app.

  15. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Considerable effort has been devoted to 3D imaging methods and systems in order to meet requirements for speed and high accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras share the same spatial resolution, letting us use the depth maps taken by the TOF camera to derive an initial disparity. With the depth map constraining the stereo matching of each image pair, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.

  16. OPC verification and hotspot management for yield enhancement through layout analysis

    NASA Astrophysics Data System (ADS)

    Yoo, Gyun; Kim, Jungchan; Lee, Taehyeong; Jung, Areum; Yang, Hyunjo; Yim, Donggyu; Park, Sungki; Maruyama, Kotaro; Yamamoto, Masahiro; Vikram, Abhishek; Park, Sangho

    2011-03-01

    As design rules shrink, various techniques such as RET and DFM have been continuously developed and applied in the lithography field. We have struggled not only to obtain a sufficient process window with those techniques but also to feed hot spots back to the OPC process for yield improvement in mass production. The OPC verification procedure, which iterates from OPC to wafer verification until the CD targets are met and hot spots are cleared, is becoming more important for ensuring robust, accurate patterning and tight hot spot management. Generally, wafer verification results, which demonstrate how well OPC corrections are made, need to be fed back to the OPC engineer in an effective and accurate way. First of all, however, it is not possible to cover all transistors in a full chip with the limited OPC monitoring points that have been used for wafer verification. Secondly, the hot spots extracted by the OPC simulator are not always reliable enough to represent the defect information for a full chip. Finally, it takes considerable turnaround time (TAT) and labor to do this with CD-SEM measurement. These difficulties in wafer verification can be improved by design-based analysis. The optimal OPC monitoring points are created by classifying all transistors in the full-chip layout, and the hotspot set is selected by a pattern matching process using NanoScope™, a fast design-based analysis tool, seeded with a very small number of hotspots extracted by the OPC simulator from the full-chip layout. Then, each set is used for wafer verification using the design-based inspection tool NGR2150™. In this paper, a new verification methodology based on design-based analysis is introduced as an alternative method for effective control of OPC accuracy and hot spot management.

  17. Geotechnical and environmental considerations in highway layouts: an integrated GIS assessment approach

    NASA Astrophysics Data System (ADS)

    Sadek, Salah; Kaysi, Isam; Bedran, Mounia

    Highway route layout design typically relies on aerial photographs, topographic maps and geologic maps. In this paper, a GIS platform, which incorporates the main digital data needed for evaluating route layouts, is used in a computer-based approach for highway layout assessment. Possible layouts are evaluated based on two sets of criteria. First, traditional evaluation criteria focussing on geometric design factors and impact on man-made features are considered. Next, the developed assessment framework builds on the GIS platform to generate specific environmental and geotechnical criteria for route layout evaluation. The developed approach integrates highway design, slope stability, and traffic noise modeling packages and specifically written codes with the GIS packages ARC/INFO and ArcView. A prototypical application of the assessment framework for a proposed highway south of the city of Beirut, Lebanon is presented. The results demonstrate the potential of the developed approach in incorporating new evaluation criteria at the route layout design stage and in automating the route layout assessment procedure.

  18. Applications to car bodies - Generalized layout design of three-dimensional shells

    NASA Technical Reports Server (NTRS)

    Fukushima, Junichi; Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    We shall describe applications of the homogenization method, formulated in Part 1, to design layout of car bodies represented by three-dimensional shell structures based on a multi-loading optimization.

  19. [CUDA-based fast dose calculation in radiotherapy].

    PubMed

    Wang, Xianliang; Liu, Cao; Hou, Qing

    2011-10-01

    Dose calculation plays a key role in treatment planning for radiotherapy. Algorithms for dose calculation require high accuracy and computational efficiency. The finite size pencil beam (FSPB) algorithm is a method commonly adopted in treatment planning systems for radiotherapy. However, improvement of its computational efficiency is still desirable for purposes such as real-time treatment planning. In this paper, we present an implementation of the FSPB in which the most time-consuming parts of the algorithm are parallelized and ported to a graphics processing unit (GPU). Compared with the FSPB running completely on a central processing unit (CPU), the GPU-implemented FSPB can speed up the dose calculation by 25-35 times on a low-priced GPU (GeForce GT320) and by 55-100 times on a Tesla C1060, indicating that the GPU-implemented FSPB can provide fast enough dose calculations for real-time treatment planning.

  20. From FAST to E-FAST: an overview of the evolution of ultrasound-based traumatic injury assessment.

    PubMed

    Montoya, J; Stawicki, S P; Evans, D C; Bahner, D P; Sparks, S; Sharpe, R P; Cipolla, J

    2016-04-01

    Ultrasound is a ubiquitous and versatile diagnostic tool. In the setting of acute injury, ultrasound enhances the basic trauma evaluation, influences bedside decision-making, and helps determine whether or not an unstable patient requires emergent procedural intervention. Consequently, continued education of surgeons and other acute care practitioners in performing focused emergency ultrasound is of great importance. This article provides a synopsis of focused assessment with sonography for trauma (FAST) and the extended FAST (E-FAST) that incorporates basic thoracic injury assessment. The authors also review key pitfalls, limitations, controversies, and advances related to FAST, E-FAST, and ultrasound education.

  1. Aerodynamic and Aerothermodynamic Layout of the Hypersonic Flight Experiment Shefex

    NASA Astrophysics Data System (ADS)

    Eggers, Th.

    2005-02-01

    The purpose of the SHarp Edge Flight EXperiment SHEFEX is the investigation of possible new shapes for future launcher or reentry vehicles [1]. The main focus is the improvement of common space vehicle shapes by application of facetted surfaces and sharp edges. The experiment will enable the time-accurate investigation of the flow effects and the resulting structural response during the hypersonic flight from 90 km down to an altitude of 20 km. The project, being performed under the responsibility of the German Aerospace Center (DLR), is scheduled to fly on top of a two-stage solid propellant sounding rocket in the first half of 2005. The paper contains a survey of the aerodynamic and aerothermodynamic layout of the experimental vehicle. The results are inputs for the definition of the structural layout, the TPS and the flight instrumentation, as well as for the preparation of the flight test performed by the Mobile Rocket Base of DLR.

  2. G-Space: a linear time graph layout

    NASA Astrophysics Data System (ADS)

    Wylie, Brian; Baumes, Jeffrey; Shead, Timothy M.

    2008-01-01

    We describe G-Space, a straightforward linear time layout algorithm that draws undirected graphs based purely on their topological features. The algorithm is divided into two phases. The first phase is an embedding of the graph into a 2-D plane using the graph-theoretical distances as coordinates. These coordinates are computed with the same process used by HDE (High-Dimensional Embedding) algorithms. In our case we do a Low-Dimensional Embedding (LDE), and directly map the graph distances into a two dimensional geometric space. The second phase is the resolution of the many-to-one mappings that frequently occur within the low dimensional embedding. The resulting layout appears to have advantages over existing methods: it can be computed rapidly, and it can be used to answer topological questions quickly and intuitively.
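
    A minimal sketch of that low-dimensional embedding is shown below: breadth-first distances from two pivot vertices serve directly as (x, y) coordinates, and nodes that collapse onto the same point are spread apart slightly. The pivot choice and the paper's full resolution phase for many-to-one mappings are simplified assumptions.

```python
# Low-dimensional embedding layout: hop distances to two pivot vertices become the
# (x, y) coordinates; colliding nodes are fanned out. Pivot choice is an assumption.
import collections
import networkx as nx

def g_space_layout(G, pivot_a, pivot_b):
    da = nx.single_source_shortest_path_length(G, pivot_a)     # BFS distances to pivot a
    db = nx.single_source_shortest_path_length(G, pivot_b)     # BFS distances to pivot b
    buckets = collections.defaultdict(list)
    for v in G:
        buckets[(da[v], db[v])].append(v)
    pos = {}
    for (x, y), nodes in buckets.items():                      # resolve many-to-one mappings
        for i, v in enumerate(nodes):
            pos[v] = (x + 0.2 * i, y)
    return pos

G = nx.les_miserables_graph()
pos = g_space_layout(G, "Valjean", "Javert")
print(len(pos), "nodes placed")
```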

  3. Human Factors Evaluations of Two-Dimensional Spacecraft Conceptual Layouts

    NASA Technical Reports Server (NTRS)

    Kennedy, Kriss J.; Toups, Larry D.; Rudisill, Marianne

    2010-01-01

    Much of the human factors work done in support of the NASA Constellation lunar program has been with low-fidelity mockups. These volumetric replicas of the future lunar spacecraft allow researchers to insert test subjects from the engineering and astronaut population and evaluate the vehicle design as the test subjects perform simulations of various operational tasks. However, lunar outpost designs must be evaluated without the use of mockups, creating a need for evaluation tools that can be applied to two-dimensional conceptual spacecraft layouts, such as floor plans. A tool based on the Cooper-Harper scale was developed and applied to one lunar scenario, enabling engineers to select between two competing floor plan layouts. Keywords: Constellation, human factors, tools, processes, habitat, outpost, Net Habitable Volume, Cooper-Harper.

  4. Fast gain and phase recovery of semiconductor optical amplifiers based on submonolayer quantum dots

    SciTech Connect

    Herzog, Bastian; Owschimikow, Nina; Kaptan, Yücel; Kolarczik, Mirco; Switaiski, Thomas; Woggon, Ulrike; Schulze, Jan-Hindrik; Rosales, Ricardo; Strittmatter, André; Bimberg, Dieter; Pohl, Udo W.

    2015-11-16

    Submonolayer quantum dots as the active medium in opto-electronic devices promise to combine the high density of states of quantum wells with the fast recovery dynamics of self-assembled quantum dots. We investigate the gain and phase recovery dynamics of a semiconductor optical amplifier based on InAs submonolayer quantum dots in the regime of linear operation by one- and two-color heterodyne pump-probe spectroscopy. We find recovery dynamics as fast as those of quantum dot-in-a-well structures, reaching 2 ps at moderate injection currents. The effective quantum well embedding the submonolayer quantum dots acts as a fast and efficient carrier reservoir.

  5. AmbiguityVis: Visualization of Ambiguity in Graph Layouts.

    PubMed

    Wang, Yong; Shen, Qiaomu; Archambault, Daniel; Zhou, Zhiguang; Zhu, Min; Yang, Sixiao; Qu, Huamin

    2016-01-01

    Node-link diagrams provide an intuitive way to explore networks and have inspired a large number of automated graph layout strategies that optimize aesthetic criteria. However, any particular drawing approach cannot fully satisfy all these criteria simultaneously, producing drawings with visual ambiguities that can impede the understanding of network structure. To bring attention to these potentially problematic areas present in the drawing, this paper presents a technique that highlights common types of visual ambiguities: ambiguous spatial relationships between nodes and edges, visual overlap between community structures, and ambiguity in edge bundling and metanodes. Metrics, including newly proposed metrics for abnormal edge lengths, visual overlap in community structures and node/edge aggregation, are proposed to quantify areas of ambiguity in the drawing. These metrics and others are then displayed using a heatmap-based visualization that provides visual feedback to developers of graph drawing and visualization approaches, allowing them to quickly identify misleading areas. The novel metrics and the heatmap-based visualization allow a user to explore ambiguities in graph layouts from multiple perspectives in order to make reasonable graph layout choices. The effectiveness of the technique is demonstrated through case studies and expert reviews.

  6. Design and simulation of silicon photonic schematics and layouts

    NASA Astrophysics Data System (ADS)

    Chrostowski, Lukas; Lu, Zeqin; Flueckiger, Jonas; Wang, Xu; Klein, Jackson; Liu, Amy; Jhoja, Jaspreet; Pond, James

    2016-05-01

    Electronic circuit designers commonly start their design process with a schematic, namely an abstract representation of the physical circuit. In integrated photonics on the other hand, it is common for the design to begin at the physical component level, and create a layout by connecting components with interconnects. In this paper, we discuss how to create a schematic from the physical layout via netlist extraction, which enables circuit simulations. Post-layout extraction can also be used to predict how fabrication variability and non-uniformity will impact circuit performance. This is based on the component position information, compact models that are parameterized for dimensional variations, and manufacturing variability models such as a simulated wafer thickness map. This final step is critical in understanding how real-world silicon photonic circuits will behave. We present an example based on treating the ring resonator as a circuit. A silicon photonics design kit, as described here, is available for download at http://github.com/lukasc-ubc/SiEPIC_EBeam_PDK.

  7. Fast Numerically Based Modeling for Ground Penetrating Radar

    NASA Astrophysics Data System (ADS)

    Sassen, D. S.; Everett, M. E.

    2007-05-01

    There is a need for computationally fast GPR numerical modeling. This includes circumstances where real-time performance is needed, for example discrimination of landmines or UXOs, and circumstances that require a high number of successive forward problems, for example inversion or imaging. Traditional numerical techniques such as finite difference or finite element are too slow for these applications, but they provide results for general scenarios such as scattering from very complicated shapes with high contrast. Neural networks may fit in the niche between analytical techniques and traditional numerical techniques. Our concept is training a neural network to associate the model inputs (the electromagnetic properties of the background and targets, and the size and shape of the targets) with the output generated by a 3-D finite difference model. Successive examples from various electromagnetic properties and targets are presented to the neural network until the neural network has adapted itself through optimization. The trained neural network is then used as the forward model: new input parameters are presented and the neural network generates the appropriate output. The results from the neural network are then compared to results from finite difference models to see how well the neural network is performing and at what point it breaks down. Areas of poor fit can be addressed through further training. The neural network GPR model can be adapted by presenting additional finite difference results to the neural network, and can also be adapted to a specific field area using actual field data examples. Because of this adaptation ability, the neural network GPR model can be optimized for specific environments and applications.
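
    The sketch below illustrates the surrogate idea in miniature: a small neural network is fit to pairs of input parameters and forward-model outputs, then reused as a fast approximation. The "slow" forward model here is a toy analytic response standing in for a 3-D finite difference run, and the parameterization is an assumption.

```python
# Neural-network surrogate for a slow forward model: train on (parameters, trace)
# pairs, then reuse the network as a fast approximation. The analytic toy response
# below stands in for a 3-D finite difference simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_forward_model(params):
    """Stand-in for an FDTD run: returns a 64-sample synthetic GPR trace."""
    eps, depth, radius = params
    t = np.linspace(0.0, 1.0, 64)
    delay = depth * np.sqrt(eps) / 3.0                  # deeper / slower medium -> later echo
    return radius * np.exp(-((t - delay) / 0.05) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform([2.0, 0.1, 0.05], [9.0, 1.0, 0.3], size=(500, 3))   # eps, depth, radius
Y = np.array([slow_forward_model(p) for p in X])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, Y)                                     # adapt the network to the examples

# The trained network now answers new forward problems almost instantly.
print(surrogate.predict([[5.0, 0.5, 0.1]]).shape)       # (1, 64)
```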

  8. Hierarchical approaches to VLSI circuit layout

    SciTech Connect

    Sarrafzadeh, M.

    1987-01-01

    This thesis studies two hierarchical approaches to the circuit-layout problem: the top-down approach and the bottom-up approach. The first part is devoted to the traditional top-down approach, and particularly, to an important subproblem thereof called the Channel Routing Problem (CRP). The complexity of CRP in three different layout modes - the reserved mode, the knock-knee mode, and the restricted-overlap mode - is studied. Besides the conventional square grid, two new grids - the alternate grid and the 45° grid - are considered, and their respective versatility is assessed. In the second part, a novel bottom-up technique for solving the layout problem is proposed. The strategy is to recursively interconnect a set of modules, in conformity with the design rules. The basic step consists of merging a pair of strongly-connected modules. This technique is elaborated on and the fundamental problems of this approach are discussed.

  9. Unified layout analysis and text localization framework

    NASA Astrophysics Data System (ADS)

    Vasilopoulos, Nikos; Kavallieratou, Ergina

    2017-01-01

    A technique appropriate for extracting textual information from documents with complex layouts, such as newspapers and journals, is presented. It is a combination of a foreground analysis and a text localization method. The first is used to segment the page into text and nontext blocks, whereas the second is used to detect text that may be embedded inside images, charts, diagrams, tables, etc. Detailed experiments on two public databases showed that mixing layout analysis and text localization techniques can lead to improved page segmentation and text extraction results.

  10. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG Analysis system is described in this paper. The design, based on a popular software based QRS detection algorithm, calculates the threshold value for the next peak detection cycle, from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting the beats correctly when tested with a subset of five 30 minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work and uses 76% resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy as compared to our previous design and takes almost half the analysis time in comparison to software based approach.
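
    A software sketch of the thresholding rule that the FPGA design implements is given below: the detection threshold for the next cycle is derived from the median of the last eight detected peak amplitudes. The band-pass filtering front end, the exact scaling factor, and the refractory handling are simplified assumptions.

```python
# Median-of-eight adaptive thresholding for QRS detection, as a software sketch of
# the FPGA rule; the filtering front end and the 0.5 scaling factor are assumptions.
import math
from collections import deque
from statistics import median

def detect_qrs(signal, fs, init_threshold=0.5, refractory_s=0.25):
    peaks, recent = [], deque(maxlen=8)            # last eight detected peak amplitudes
    threshold = init_threshold
    refractory = int(refractory_s * fs)
    i = 1
    while i < len(signal) - 1:
        s = signal[i]
        if s > threshold and s >= signal[i - 1] and s >= signal[i + 1]:
            peaks.append(i)
            recent.append(s)
            threshold = 0.5 * median(recent)       # adapt threshold for the next cycle
            i += refractory                        # skip the refractory period
        else:
            i += 1
    return peaks

fs = 360                                           # MIT-BIH sample rate (Hz)
sig = [math.exp(-(((t % fs) - 40) ** 2) / 20.0) for t in range(10 * fs)]   # toy spiky trace
print(len(detect_qrs(sig, fs)), "beats detected")
```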

  11. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(−1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT

  12. Effects of image intensifier halo on perceived layout

    NASA Astrophysics Data System (ADS)

    Zacher, James E.; Brandwood, Tracey; Thomas, Paul; Vinnikov, Margarita; Xu, Gancun; Jennings, Sion; Macuda, Todd; Palmisano, Stephan A.; Craig, Greg; Wilcox, Laurie; Allison, Robert S.

    2007-04-01

    Night vision devices (NVDs) or night-vision goggles (NVGs) based on image intensifiers improve nighttime visibility and extend night operations for military and increasingly civil aviation. However, NVG imagery is not equivalent to daytime vision and impaired depth and motion perception has been noted. One potential cause of impaired perceptions of space and environmental layout is NVG halo, where bright light sources appear to be surrounded by a disc-like halo. In this study we measured the characteristics of NVG halo psychophysically and objectively and then evaluated the influence of halo on perceived environmental layout in a simulation experiment. Halos are generated in the device and are not directly related to the spatial layout of the scene. We found that, when visible, halo image (i.e. angular) size was only weakly dependent on both source intensity and distance although halo intensity did vary with effective source intensity. The size of halo images surrounding lights sources are independent of the source distance and thus do not obey the normal laws of perspective. In simulation experiments we investigated the effect of NVG halo on judgements of observer attitude with respect to the ground during simulated flight. We discuss the results in terms of NVG design and of the ability of human operators to compensate for perceptual distortions.

  13. Perceptual organization in user-generated graph layouts.

    PubMed

    van Ham, Frank; Rogowitz, Bernice E

    2008-01-01

    Many graph layout algorithms optimize visual characteristics to achieve useful representations. Implicitly, their goal is to create visual representations that are more intuitive to human observers. In this paper, we asked users to explicitly manipulate nodes in a network diagram to create layouts that they felt best captured the relationships in the data. This allowed us to measure organizational behavior directly, allowing us to evaluate the perceptual importance of particular visual features, such as edge crossings and edge-lengths uniformity. We also manipulated the interior structure of the node relationships by designing data sets that contained clusters, that is, sets of nodes that are strongly interconnected. By varying the degree to which these clusters were "masked" by extraneous edges we were able to measure observers' sensitivity to the existence of clusters and how they revealed them in the network diagram. Based on these measurements we found that observers are able to recover cluster structure, that the distance between clusters is inversely related to the strength of the clustering, and that users exhibit the tendency to use edges to visually delineate perceptual groups. These results demonstrate the role of perceptual organization in representing graph data and provide concrete recommendations for graph layout algorithms.

  14. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC, with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  15. BioLayout(Java): versatile network visualisation of structural and functional relationships.

    PubMed

    Goldovsky, Leon; Cases, Ildefonso; Enright, Anton J; Ouzounis, Christos A

    2005-01-01

    Visualisation of biological networks is becoming a common task for the analysis of high-throughput data. These networks correspond to a wide variety of biological relationships, such as sequence similarity, metabolic pathways, gene regulatory cascades and protein interactions. We present a general approach for the representation and analysis of networks of variable type, size and complexity. The application is based on the original BioLayout program (C-language implementation of the Fruchterman-Rheingold layout algorithm), entirely re-written in Java to guarantee portability across platforms. BioLayout(Java) provides broader functionality, various analysis techniques, extensions for better visualisation and a new user interface. Examples of analysis of biological networks using BioLayout(Java) are presented.

  16. Fast Fragmentation of Networks Using Module-Based Attacks

    PubMed Central

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure firstly identifies topological communities through which the network can be represented using a well established heuristic algorithm of community finding. Then only the nodes that participate of inter-community links are removed in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks to vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network in fragments which are twenty times smaller in size than the fragments left by betweenness-based attack. PMID:26569610

  17. Fast Fragmentation of Networks Using Module-Based Attacks.

    PubMed

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities through which the network can be represented, using a well-established heuristic community-finding algorithm. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller than the fragments left by the betweenness-based attack.

  18. Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs

    PubMed Central

    Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie

    2017-01-01

    Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time-varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured using a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than detection based on the slow quasi-static component, although both the slow and fast components allow the object to be localized. PMID:28119582
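    The separation of slow quasi-static and fast vibrational components in the paper relies on a mechanical model of the whisker; the generic sketch below instead splits a synthetic whisker-base signal with complementary low-/high-pass Butterworth filters, with the sampling rate, cutoff, and threshold rule chosen purely for illustration.

```python
# Generic illustration (not the paper's mechanical model): split a measured
# whisker-base signal into a slow quasi-static part and a fast vibrational
# part with complementary low-/high-pass Butterworth filters.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0            # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
signal = 0.5 * np.sin(2 * np.pi * 2 * t) + 0.05 * np.random.randn(t.size)

cutoff = 20.0          # Hz, illustrative boundary between slow and fast
b_lo, a_lo = butter(4, cutoff / (fs / 2), btype="low")
b_hi, a_hi = butter(4, cutoff / (fs / 2), btype="high")

slow = filtfilt(b_lo, a_lo, signal)   # quasi-static component
fast = filtfilt(b_hi, a_hi, signal)   # vibrational component

# A contact event could then be flagged when |fast| exceeds a threshold set
# from a free-whisking-in-air baseline (here simply the first 200 samples).
threshold = 3 * np.std(fast[:200])
contacts = np.abs(fast) > threshold
print("samples above threshold:", int(contacts.sum()))
```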

  19. Fast rule-based bioactivity prediction using associative classification mining

    PubMed Central

    2012-01-01

    Relating chemical features to bioactivities is critical in molecular design and is used extensively in the lead discovery and optimization process. A variety of techniques from statistics, data mining and machine learning have been applied to this process. In this study, we utilize a collection of methods, called associative classification mining (ACM), which are popular in the data mining community, but so far have not been applied widely in cheminformatics. More specifically, classification based on predictive association rules (CPAR), classification based on multiple association rules (CMAR) and classification based on association rules (CBA) are employed on three datasets using various descriptor sets. Experimental evaluations on anti-tuberculosis (antiTB), mutagenicity and hERG (the human Ether-a-go-go-Related Gene) blocker datasets show that these three methods are computationally scalable and appropriate for high speed mining. Additionally, they provide comparable accuracy and efficiency to the commonly used Bayesian and support vector machines (SVM) methods, and produce highly interpretable models. PMID:23176548

  20. Fast vision-based catheter 3D reconstruction.

    PubMed

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D

    2016-07-21

    Continuum robots offer better maneuverability and inherent compliance and are well suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  1. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  2. You Be the Judge: Newspaper Advertising Layout.

    ERIC Educational Resources Information Center

    Koeninger, Jimmy G.

    The learning package is designed to provide the marketing educator with a culminating activity for an instructional unit focusing on advertising layout principles and procedures. It is to be used in conjunction with 35mm slides of newspaper advertisements, which the student views and rates in comparison with the ratings of a panel of experts. A…

  3. Layout and Design in "Real Life"

    ERIC Educational Resources Information Center

    Bremer, Janet; Stocker, Donald

    2004-01-01

    Educators are expected to combine their expertise and allow students to explore different areas through collaboration, in which teachers from different disciplines create an environment where each uses their expert skills. The collaboration of a computer teacher with an art teacher resulted in the creation of Layout and…

  4. Ultra-fast cell counters based on microtubular waveguides

    PubMed Central

    Bausch, Cornelius S.; Heyn, Christian; Hansen, Wolfgang; Wolf, Insa M. A.; Diercks, Björn-Philipp; Guse, Andreas H.; Blick, Robert H.

    2017-01-01

    We present a radio-frequency impedance-based biosensor embedded inside a semiconductor microtube for the in-flow detection of single cells. An impedance-matched tank circuit and a tight wrapping of the electrodes around the sensing region, which creates a close, leakage-current-free contact between cells and electrodes, yield a high signal-to-noise ratio. We experimentally show a twofold improvement in sensitivity of our three-dimensional electrode structure over conventional planar electrodes and support these findings by finite element simulations. Finally, we report on the differentiation of polystyrene beads, primary mouse T lymphocytes, and Jurkat T lymphocytes using our device. PMID:28134293

  5. Ultra-fast cell counters based on microtubular waveguides

    NASA Astrophysics Data System (ADS)

    Bausch, Cornelius S.; Heyn, Christian; Hansen, Wolfgang; Wolf, Insa M. A.; Diercks, Björn-Philipp; Guse, Andreas H.; Blick, Robert H.

    2017-01-01

    We present a radio-frequency impedance-based biosensor embedded inside a semiconductor microtube for the in-flow detection of single cells. An impedance-matched tank circuit and a tight wrapping of the electrodes around the sensing region, which creates a close, leakage-current-free contact between cells and electrodes, yield a high signal-to-noise ratio. We experimentally show a twofold improvement in sensitivity of our three-dimensional electrode structure over conventional planar electrodes and support these findings by finite element simulations. Finally, we report on the differentiation of polystyrene beads, primary mouse T lymphocytes, and Jurkat T lymphocytes using our device.

  6. Layout of Ancient Maya Cities

    NASA Astrophysics Data System (ADS)

    Aylesworth, Grant R.

    Although there is little doubt that the ancient Maya of Mesoamerica laid their cities out based, in part, on astronomical considerations, the proliferation of "cosmograms" in contemporary scholarly discourse has complicated matters for the acceptance of rigorous archaeoastronomical research.

  7. Fast Object Motion Estimation Based on Dynamic Stixels

    PubMed Central

    Morales, Néstor; Morell, Antonio; Toledo, Jonay; Acosta, Leopoldo

    2016-01-01

    The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction. PMID:27483265
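    The first-level stixel matching described above can be illustrated as a minimum-cost bipartite assignment; the sketch below uses scipy's linear_sum_assignment with a made-up cost combining column distance and depth difference, which is only a stand-in for the paper's actual cost function.

```python
# Sketch of frame-to-frame stixel matching as a bipartite assignment
# problem; the cost terms (column distance + weighted depth difference)
# are illustrative, not the paper's exact cost function.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Each stixel: (image column, depth). Previous and current frame.
prev_stixels = np.array([[10, 5.0], [40, 7.5], [80, 12.0]])
curr_stixels = np.array([[12, 5.1], [43, 7.3], [78, 11.8], [120, 20.0]])

# Cost matrix: weighted sum of column and depth differences.
cost = (np.abs(prev_stixels[:, None, 0] - curr_stixels[None, :, 0])
        + 10.0 * np.abs(prev_stixels[:, None, 1] - curr_stixels[None, :, 1]))

rows, cols = linear_sum_assignment(cost)  # minimum-cost matching
for r, c in zip(rows, cols):
    print(f"previous stixel {r} -> current stixel {c} (cost {cost[r, c]:.1f})")
```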

  8. Fast Solvers for Transient Hydraulic Tomography based on Laplace transform

    NASA Astrophysics Data System (ADS)

    Bakhos, T.; Saibaba, A.; Kitanidis, P. K.

    2013-12-01

    Transient Hydraulic Tomography (THT) is a method to estimate hydraulic parameters, such as hydraulic conductivity and specific storage, from measurements of hydraulic heads or pressures obtained in a series of interference tests in a geologic formation such as an aquifer (i.e., pumping at one location and depth while measuring the response at several others). These measurements can be used to reconstruct the spatial variation of hydraulic parameters by solving a nonlinear inverse problem, which we tackle using the geostatistical approach. A central challenge associated with the application of the geostatistical approach to THT is the computational cost of constructing the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. This essentially requires repeated solutions of the 'forward problem' and the 'adjoint problem' for the determination of derivatives, both of which are time-dependent parabolic partial differential equations. To solve the 'forward problem', we use a Laplace-transform-based exponential time integrator combined with a Krylov subspace method for solving shifted systems. This approach allows us to independently evaluate the transient problem at different time instants at (almost) the cost of solving one steady-state groundwater equation. A similar approach can be used to accelerate the solution of the 'adjoint problem' as well. As we demonstrate, this approach dramatically lowers the computational cost associated with evaluating the Jacobian and, as a result, the reconstruction of the parameters. The performance of our algorithm is demonstrated on some challenging synthetic examples; in particular, we apply it to large-scale inverse problems arising from transient hydraulic tomography.

  9. Fast magneto-optic switch based on nanosecond pulses

    NASA Astrophysics Data System (ADS)

    Weng, Zi-Hua; Ruan, Jian-Jian; Lin, Shao-Han; Chen, Zhi-Min

    2011-09-01

    The paper studies an all-fiber high-speed magneto-optic switch, which includes an optical route, a nanosecond pulse generator, and a magnetic field module, in order to reduce the switching time of the optical switch in the all-optical network. A compact nanosecond pulse generator can be designed based on the special characteristics of the avalanche transistor. The output current pulse of the nanosecond pulse generator is less than 5 ns, while the pulse amplitude is more than 100 V and the pulse width is about 10 to 20 ns, which is able to drive a high-speed magnetic field. A solenoid is used as the magnetic field module, and a bismuth-substituted rare-earth iron garnet single crystal is chosen as the Faraday rotator. By quickly changing the direction of the current in the solenoid, the magnetization of the magneto-optic material is reversed, and the optical beam can be rapidly switched. The experimental results indicate that the switching time of the device is about 100 to 400 ns, which can partially meet the demands of the rapidly developing all-optical network.

  10. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As one of several data mining techniques, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a number of regions, called a "grid", and calculates the LOF value of each grid cell. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant reduction in computation time with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, the grid-LOF can be considered an acceptable approximation for the original LOF. Moreover, it can also be used effectively for real-time outlier detection.
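    A simplified reading of the grid idea is sketched below: points are binned into grid cells and LOF is evaluated on the cell centroids rather than on every point. The cell size, neighbor count, and mapping of cell labels back to points are assumptions for illustration, not the paper's exact procedure.

```python
# Simplified sketch of the grid idea: bin points into grid cells, then run
# LOF on the cell centroids instead of on every point.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (1000, 2)), rng.uniform(5, 8, (10, 2))])

cell = 0.5                                   # grid cell width (assumed)
keys = np.floor(X / cell).astype(int)
cells = {}
for k, x in zip(map(tuple, keys), X):
    cells.setdefault(k, []).append(x)

centroids = np.array([np.mean(v, axis=0) for v in cells.values()])

lof = LocalOutlierFactor(n_neighbors=10)
labels = lof.fit_predict(centroids)          # -1 marks outlying cells

outlier_cells = [k for k, lab in zip(cells, labels) if lab == -1]
print("outlying grid cells:", outlier_cells)
```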

  11. [Fast discrimination of varieties of sugar based on spectroscopy technology].

    PubMed

    Lin, Ping; Chen, Yong-Ming; He, Yong

    2009-02-01

    Visible and near-infrared reflectance spectroscopy (NIRS) was applied to the discrimination of sugar varieties. NIRS is a pollution-free, rapid, quantitative and qualitative analysis method, with characteristics such as high speed, non-destructiveness, high precision and reliable detection data. Four kinds of sugar were obtained from the local market and each variety was divided into 40 samples. One hundred twenty samples were used as the training set and the remainder (40 samples in total) formed the prediction set. Samples were scanned by a spectroradiometer within a wavelength region of 325-1 075 nm. Three pre-processing methods were applied to the spectra prior to building the PLS regression model. Multivariate analysis using partial least squares (PLS) was applied to extract characteristics of the pattern. Through full cross-validation, 11 principal components representing important spectral information were confirmed. The correlation coefficient (R), residual variance (Rv) and standard error of calibration (SEC) were 0.999 916, 0.009 85 and 0.014 538, respectively. These 11 principal components were then taken as the input of a BP neural network. This model was used to predict the varieties of 40 unknown samples. Through training and prediction, a recognition rate of 100% was achieved by the BP neural network. The model proved reliable and practicable. Thus, it is concluded that PLS analysis combined with a BP neural network is an available alternative for pattern recognition based on spectroscopy technology.
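    The PLS-plus-neural-network pipeline can be sketched generically as below on synthetic spectra, with scikit-learn's MLPClassifier standing in for the paper's BP network; the synthetic data, number of latent components, and network size are illustrative assumptions only.

```python
# Generic sketch of the PLS + neural-network pipeline on synthetic spectra;
# sklearn's MLPClassifier stands in for the paper's BP network.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_per_class, n_wavelengths = 40, 751        # e.g. 325-1075 nm at 1 nm steps
X, y = [], []
for cls in range(4):                         # four sugar varieties
    base = np.sin(np.linspace(0, 3 + cls, n_wavelengths))
    X.append(base + 0.05 * rng.standard_normal((n_per_class, n_wavelengths)))
    y += [cls] * n_per_class
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=0)

pls = PLSRegression(n_components=11).fit(X_tr, y_tr)   # 11 latent components
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(pls.transform(X_tr), y_tr)

print("recognition rate:", mlp.score(pls.transform(X_te), y_te))
```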

  12. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and showing that the elastic relaxation step used in the model can be applied much less frequently and still create good results.

  13. Wiring knock-knee layouts: A global approach

    NASA Astrophysics Data System (ADS)

    Sarrafzadeh, Majid; Wagner, Dorothea; Wagner, Frank; Weihe, Karsten

    1994-05-01

    We present a global approach to solving the three-layer wirability problem for knock-knee layouts. In general, the problem is NP-complete; until now, polynomial-time three-layer wiring algorithms have been known only for very restricted classes of layouts. In this paper, we show that for a large class of layouts a three-layer wiring can be constructed by solving a path problem in a special class of graphs or a two-satisfiability problem, and thus such layouts may be wired in time linear in the size of the layout area. Moreover, it is shown that a minimum stretching of the layout into a layout belonging to this class can be found by solving a clique cover problem in an interval graph. This problem is solvable in time linear in the size of the layout area as well. Altogether, the method also yields a good heuristic for the three-layer wirability problem for knock-knee layouts.

  14. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.

  15. Automatic layout of integrated-optic time-of-flight circuits

    NASA Astrophysics Data System (ADS)

    Fogg, Ruth D.

    1996-11-01

    This work describes the architecture and algorithms used in the computer-aided design tool developed for the automatic layout of integrated-optic, time-of-flight circuit designs. As in VLSI circuit layout, total wire length and chip area minimization are the goals in the layout of time-of-flight circuits. However, there are two major differences between the layout of time-of-flight circuits and VLSI circuits. First, the interconnection lengths of time-of-flight designs are exactly specified in order to achieve the necessary delays for signal synchronization. Secondly, the switching elements are 120 times longer than they are wide. This highly astigmatic aspect ratio places severe constraints on how and where the switches are placed. The assumed continued development of corner-turning mirrors allows the use of a parallel, row-based device placement architecture and a rectangular, fixed-grid track system for the connecting paths. The layout process proceeds in two steps. The first step involves the use of a partial circuit graph representation to place the elements in rows, oriented in the direction of the signal flow. After iterative improvement of the placement, the second step proceeds with the routing of the connecting paths. The main problem in the automatic layout of time-of-flight circuits is achieving the correct path lengths without overlapping previously routed paths. This problem is solved by taking advantage of a certain degree of variability present in each path, allowing the use of simple heuristics to circumvent previously routed paths.

  16. Nanorod-Based Fast-Response Pressure-Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy; VanderWal, Randall

    2007-01-01

    A proposed program of research and development would be devoted to the exploitation of nanomaterials in pressure-sensitive paints (PSPs), which are used on wind-tunnel models for mapping surface pressures associated with flow fields. Heretofore, some success has been achieved in measuring steady-state pressures by use of PSPs, but success in measuring temporally varying pressures has been elusive because of the inherent slowness of the optical responses of these materials. A PSP contains a dye that luminesces in a suitable wavelength range in response to photoexcitation in a shorter wavelength range. The luminescence is quenched by oxygen at a rate proportional to the partial pressure of oxygen and thus proportional to the pressure of air. As a result, the intensity of luminescence varies inversely with the pressure of air. The major problem in developing a PSP that could be easily applied to a wind-tunnel model and could be useful for measuring rapidly varying pressure is to provide very high gas diffusivity for rapid, easy transport of oxygen to and from active dye molecules. Most PSPs include polymer-based binders, which limit the penetration of oxygen to dye molecules, thereby reducing responses to pressure fluctuations. The proposed incorporation of nanomaterials (more specifically, nanorods) would result in paints having nanostructured surfaces that, relative to conventional PSP surfaces, would afford easier and more nearly complete access of oxygen molecules to dye molecules. One measure of greater access is effective surface area: for a typical proposed PSP applied to a given solid surface, the nanometer-scale structural features would result in an exposed surface area more than 100 times that of a conventional PSP, and the mass of proposed PSP needed to cover the surface would be less than a tenth of the mass of the conventional PSP. One aspect of the proposed development would be to synthesize nanorods of Si/SiO2, in both tangle-mat and regular-array

  17. A fast and flexible library-based thick-mask near-field calculation method

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Gao, Jie; Chen, Xuanbo; Dong, Lisong; Li, Yanqiu

    2015-03-01

    Aerial image calculation is the basis of current lithography simulation. As the critical dimension (CD) of integrated circuits continuously shrinks, the thick-mask near-field calculation has an increasing influence on the accuracy and efficiency of the entire aerial image calculation process. This paper develops a flexible library-based approach to significantly improve the efficiency of the thick-mask near-field calculation compared to the rigorous modeling method, while leading to much higher accuracy than the Kirchhoff approximation method. Specifically, a set of typical features on the full chip are selected to serve as the training data, whose near-fields are pre-calculated and saved in the library. Given an arbitrary test mask, we first decompose it into convex corners, concave corners and edges, and then match each patch to the training layouts based on nonparametric kernel regression. Subsequently, we use the matched near-fields in the library to replace the mask patches, and rapidly synthesize the near-field for the entire test mask. Finally, a data-fitting method is proposed to improve the accuracy of the synthesized near-field based on least-squares estimation (LSE). We use a pair of two-dimensional mask patterns to test our method. Simulations show that the proposed method can significantly speed up the current FDTD method, and effectively improve the accuracy of the Kirchhoff approximation method.
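    The patch-matching step can be illustrated with a generic Nadaraya-Watson kernel regression over a library of pre-computed near-fields, as sketched below; the patch feature vectors, bandwidth, and field sizes are placeholders, not the paper's actual parameterization.

```python
# Sketch of nonparametric (Nadaraya-Watson) kernel regression for matching
# a test mask patch to pre-computed library near-fields; features and
# bandwidth are purely illustrative.
import numpy as np

def kernel_regression(test_feature, library_features, library_fields, h=1.0):
    """Blend stored library near-fields with Gaussian kernel weights."""
    d2 = np.sum((library_features - test_feature) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    w /= w.sum()
    # Weighted combination of the stored near-field samples.
    return np.tensordot(w, library_fields, axes=1)

# Library: feature vectors (e.g. corner type, local pitch) and their
# pre-computed near-field samples (random placeholders here).
rng = np.random.default_rng(0)
library_features = rng.normal(size=(50, 4))
library_fields = rng.normal(size=(50, 32, 32))   # 32x32 field samples

test_feature = rng.normal(size=4)
synthesized = kernel_regression(test_feature, library_features, library_fields)
print(synthesized.shape)   # (32, 32)
```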

  18. 48 CFR 36.517 - Layout of work.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Layout of work. 36.517... CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Contract Clauses 36.517 Layout of work. The contracting officer shall insert the clause at 52.236-17, Layout of Work, in solicitations and contracts...

  19. Built environmental factors and adults' travel behaviors: Role of street layout and local destinations.

    PubMed

    Koohsari, Mohammad Javad; Owen, Neville; Cole, Rachel; Mavoa, Suzanne; Oka, Koichiro; Hanibuchi, Tomoya; Sugiyama, Takemi

    2017-03-01

    Street layout is consistently associated with adults' travel behaviors; however, the factors influencing this association are unclear. We examined associations of street layout with two travel behaviors, walking for transport (WT) and car use, and the extent to which these relationships may be accounted for by the availability of local destinations. A 24-h travel diary was completed in 2009 by 16,345 adult participants of the South-East Queensland Household Travel Survey, Australia. Three travel-behavior outcomes were derived: any home-based WT; over 30 min of home-based WT; and over 60 min of car use. For street layout, a space syntax measure of street integration was calculated for each Statistical Area 1 (SA1, the smallest geographic unit in Australia). An objective measure of availability of destinations - Walk Score - was also derived for each SA1. Logistic regression examined associations of street layout with travel behaviors. Mediation analyses examined to what extent the availability of destinations explained the associations. Street integration was significantly associated with travel behaviors. Each one-decile increment in street integration was associated with an 18% (95% CI: 1.15, 1.21) higher odds of any home-based WT; a 10% (95% CI: 1.06, 1.15) higher odds of over 30 min of home-based WT; and a 5% (95% CI: 0.94, 0.96) lower odds of using a car for over 60 min. Local destinations partially mediated the effects of street layout on travel behaviors. A well-connected street layout contributes to active travel partially through the availability of more local destinations. Urban design strategies need to address street layout and destinations to promote active travel among residents.
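    The reported odds ratios come from logistic regression models of the survey data; the sketch below shows the general form of such a model on synthetic data with statsmodels, where the simulated coefficients and variables are assumptions for illustration and do not reproduce the study's estimates.

```python
# Sketch of the kind of logistic model behind the reported odds ratios,
# using synthetic data (not the Queensland travel survey).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
street_integration = rng.integers(1, 11, n)          # decile, 1-10
destinations = rng.normal(50, 15, n)                 # Walk Score-like measure

# Assumed data-generating process, for illustration only.
logit = -3 + 0.17 * street_integration + 0.01 * destinations
walk_transport = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([street_integration, destinations]))
model = sm.Logit(walk_transport, X).fit(disp=False)

# exp(coefficient) = odds ratio per one-unit (one-decile) increment.
print(np.exp(model.params[1:]))
```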

  20. Child and Parent Voices on a Community-Based Prevention Program (FAST)

    ERIC Educational Resources Information Center

    Fearnow-Kenney, Melodie; Hill, Patricia; Gore, Nicole

    2016-01-01

    Families and Schools Together (FAST) is a collaborative program involving schools, families, and community-based partners in efforts to prevent substance use, juvenile delinquency, school failure, child abuse and neglect, mental health problems, and violence. Although evaluated extensively, there remains a dearth of qualitative data on child and…

  1. Common and Specific Factors Approaches to Home-Based Treatment: I-FAST and MST

    ERIC Educational Resources Information Center

    Lee, Mo Yee; Greene, Gilbert J.; Fraser, J. Scott; Edwards, Shivani G.; Grove, David; Solovey, Andrew D.; Scott, Pamela

    2013-01-01

    Objectives: This study examined the treatment outcomes of integrated families and systems treatment (I-FAST), a moderated common factors approach, in reference to multisystemic therapy (MST), an established specific factor approach, for treating at risk children and adolescents and their families in an intensive community-based setting. Method:…

  2. Basic concepts underlying fast-neutron-based contraband interrogation technology. A systems viewpoint

    SciTech Connect

    Fink, C.L.; Guenther, P.T.; Smith, D.L.

    1992-12-01

    All accelerator-based fast-neutron contraband interrogation systems have many closely interrelated subsystems, whose performance parameters will be critically interdependent. For optimal overall performance, a systems analysis design approach is required. This paper provides a general overview of the interrelationships and the tradeoffs to be considered for optimization of nonaccelerator subsystems.

  3. Fast neutron mutants database and web displays at SoyBase

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SoyBase, the USDA-ARS soybean genetics and genomics database, has been expanded to include data for the fast neutron mutants produced by Bolon, Vance, et al. In addition to the expected text and sequence homology searches and visualization of the indels in the context of the genome sequence viewer, ...

  4. Fast switchable grating based on orthogonal photo alignments of ferroelectric liquid crystals

    NASA Astrophysics Data System (ADS)

    Srivastava, A. K.; Hu, Wei; Chigrinov, V. G.; Kiselev, A. D.; Lu, Yan-Qing

    2012-07-01

    We demonstrate a fast switchable grating based on ferroelectric liquid crystals and orthogonal planar alignment by means of photo alignments. Both 1D and 2D gratings have been constructed. The proposed diffracting element provides fast response time of around 20 μs, contrast of 7000:1 and high diffraction efficiency, at the electric field of 6 V/μm. The saturated electro-optical (EO) states up to very high frequency (≈5 kHz) are the real advantage of the proposed switchable grating, which opens several opportunities to improve the quality of existing devices and to find new applications.

  5. Tunable temporal gap based on simultaneous fast and slow light in electro-optic photonic crystals.

    PubMed

    Li, Guangzhen; Chen, Yuping; Jiang, Haowei; Liu, Yi'an; Liu, Xiao; Chen, Xianfeng

    2015-07-13

    We demonstrated a tunable temporal gap based on simultaneous fast and slow light in electro-optic photonic crystals. The light experiences an anomalous dispersion near the transmission center and a normal dispersion away from the center, where it can be accelerated and slowed down, respectively. We also obtained switching between fast and slow light by adjusting the external electric field. The largest observed temporal gap is 541 ps, which is crucial for practical event operation inside the gap. The results offer a new solution for temporal cloaking.

  6. Fast polarization-state tracking scheme based on radius-directed linear Kalman filter.

    PubMed

    Yang, Yanfu; Cao, Guoliang; Zhong, Kangping; Zhou, Xian; Yao, Yong; Lau, Alan Pak Tao; Lu, Chao

    2015-07-27

    We propose and experimentally demonstrate a fast polarization tracking scheme based on radius-directed linear Kalman filter. It has the advantages of fast convergence and is inherently insensitive to phase noise and frequency offset effects. The scheme is experimentally compared to conventional polarization tracking methods on the polarization rotation angular frequency. The results show that better tracking capability with more than one order of magnitude improvement is obtained in the cases of polarization multiplexed QPSK and 16QAM signals. The influences of the filter tuning parameters on tracking performance are also investigated in detail.
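    The radius-directed linear Kalman filter in the paper operates on the received optical field; as a generic illustration of the predict/update structure only, the sketch below tracks a slowly rotating two-dimensional state with a standard linear Kalman filter, with the rotation model and all noise settings assumed for the example.

```python
# Generic linear Kalman filter predict/update loop tracking a rotating 2-D
# state; illustrates the structure of such a tracker, not the paper's
# radius-directed formulation for polarization demultiplexing.
import numpy as np

def kalman_track(measurements, omega, dt, q=1e-4, r=1e-2):
    # State transition: rotation by omega*dt per step.
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    F = np.array([[c, -s], [s, c]])
    H = np.eye(2)                       # the state is observed directly
    Q, R = q * np.eye(2), r * np.eye(2)

    x, P = np.array([1.0, 0.0]), np.eye(2)
    estimates = []
    for z in measurements:
        # Predict
        x, P = F @ x, F @ P @ F.T + Q
        # Update
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Noisy measurements of a unit vector rotating at an assumed angular rate.
rng = np.random.default_rng(0)
dt, omega, n = 1e-3, 2 * np.pi * 50, 1000
true = np.array([[np.cos(omega * dt * k), np.sin(omega * dt * k)] for k in range(n)])
meas = true + 0.1 * rng.standard_normal(true.shape)
est = kalman_track(meas, omega, dt)
print("final estimate:", est[-1], "true:", true[-1])
```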

  7. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The LLNL ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  8. Photonic-chip-based tunable slow and fast light via stimulated Brillouin scattering.

    PubMed

    Pant, Ravi; Byrnes, Adam; Poulton, Christopher G; Li, Enbang; Choi, Duk-Yong; Madden, Steve; Luther-Davies, Barry; Eggleton, Benjamin J

    2012-03-01

    We report the first (to our knowledge) demonstration of photonic chip based tunable slow and fast light via stimulated Brillouin scattering. Slow, fast, and negative group velocities were observed in a 7 cm long chalcogenide (As(2)S(3)) rib waveguide with a group index change ranging from ~-44 to +130, which results in a maximum delay of ~23 ns at a relatively low gain of ~23 dB. Demonstration of large tunable delays in a chip scale device opens up applications such as frequency sensing and true-time delay for a phased array antenna, where integration and delays ~10 ns are highly desirable.

  9. Page layout analysis and classification for complex scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2011-09-01

    A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and the Run Length Encoding (RLE) technique is employed. Local and global energy maps in high-frequency bands of the wavelet domain are generated and used as initial text maps. Further analysis using RLE yields a final text map. The second module is developed to detect image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections is employed to identify photo candidate regions. A final photo map is then obtained by applying a probabilistic model based on Markov random field (MRF) maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.
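    The first stage (wavelet-based text candidates) can be illustrated with PyWavelets as below: a single-level 2D decomposition yields a high-frequency energy map that is thresholded into an initial text map. The wavelet choice, threshold rule, and synthetic page are assumptions, and the RLE refinement and photo/line modules are omitted.

```python
# Illustration of the first step only: a high-frequency energy map from a
# single-level 2D wavelet decomposition, thresholded into an initial text
# map (PyWavelets); later refinement stages are not shown.
import numpy as np
import pywt

def initial_text_map(gray_image, wavelet="haar", k=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), wavelet)
    energy = cH ** 2 + cV ** 2 + cD ** 2           # high-frequency energy map
    threshold = energy.mean() + k * energy.std()   # simple global threshold
    return energy > threshold                      # candidate text regions

# Synthetic "document": flat background with a high-contrast stripe of text.
page = np.full((128, 128), 200.0)
page[40:50, 10:120] = np.where(np.arange(110) % 4 < 2, 20.0, 200.0)
print(initial_text_map(page).sum(), "candidate high-frequency coefficients")
```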

  10. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator

    NASA Astrophysics Data System (ADS)

    Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of the magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.

  11. 9. Photographic copy of engineering drawing showing the mechanical layout ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photographic copy of engineering drawing showing the mechanical layout of Test Stand 'C' Cv Cell, vacuum line, and scrubber-condenser as erected in 1977-78. JPL drawing by VTN Consolidated, Inc. Engineers, Architects, Planners, 2301 Campus Drive, Irvine, California 92664: 'JPL-ETS E-18 (C-Stand Modifications) Control Elevations & Schematics,' sheet M-5 (JPL sheet number E18/44-0), 1 September 1977. - Jet Propulsion Laboratory Edwards Facility, Test Stand C, Edwards Air Force Base, Boron, Kern County, CA

  12. An interactive wire-wrap board layout program

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A.

    1987-01-01

    An interactive computer-graphics-based tool for specifying the placement of electronic parts on a wire-wrap circuit board is presented. Input is a data file (currently produced by a commercial logic design system) which describes the parts used and their interconnections. Output includes printed reports describing the parts and wire paths, parts counts, placement lists, board drawing, and a tape to send to the wire-wrap vendor. The program should reduce the engineer's layout time by a factor of 3 to 5 as compared to manual methods.

  13. Energy efficient LED layout optimization for near-uniform illumination

    NASA Astrophysics Data System (ADS)

    Ali, Ramy E.; Elgala, Hany

    2016-09-01

    In this paper, we consider the problem of designing an energy-efficient light-emitting diode (LED) layout while satisfying illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
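    A simplified version of the constrained linear program can be sketched with scipy's linprog, as below: total LED intensity is minimized subject to a minimum illuminance at floor sample points under an inverse-square Lambertian point-source model. The geometry, required illuminance, and photometric model are illustrative assumptions rather than the paper's virtual-LED formulation.

```python
# Simplified sketch of the constrained linear program: minimize total LED
# output subject to a minimum illuminance at floor sample points, using an
# inverse-square cos^3 point-source model (illustrative only).
import numpy as np
from scipy.optimize import linprog

h = 2.5                                                      # mounting height (m)
led_xy = np.array([[1, 1], [1, 3], [3, 1], [3, 3]], float)   # candidate LEDs
grid = np.array([[x, y] for x in np.linspace(0.5, 3.5, 7)
                        for y in np.linspace(0.5, 3.5, 7)])

# A[i, j]: illuminance at grid point i per unit intensity of LED j.
d2 = ((grid[:, None, :] - led_xy[None, :, :]) ** 2).sum(-1) + h ** 2
A = (h / np.sqrt(d2)) ** 3 / d2                   # cos^3(theta) / r^2

E_min = 300.0                                     # required lux (assumed)
res = linprog(c=np.ones(len(led_xy)),             # minimize total intensity
              A_ub=-A, b_ub=-E_min * np.ones(len(grid)),
              bounds=[(0, None)] * len(led_xy))
print(res.x)                                      # optimal LED intensities
```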

  14. 10. Photographic copy of engineering drawing showing the plumbing layout ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Photographic copy of engineering drawing showing the plumbing layout of Test Stand 'C' Cv Cell, vacuum line, and scrubber-condenser as erected in 1977-78. JPL drawing by VTN Consolidated, Inc. Engineers, Architects, Planners, 2301 Campus Drive, Irvine, California 92664: 'JPL-ETS E-18 (C-Stand Modifications) Flow Diagram,' sheet M-2 (JPL sheet number E18/41-0), September 1, 1977. - Jet Propulsion Laboratory Edwards Facility, Test Stand C, Edwards Air Force Base, Boron, Kern County, CA

  15. Case-based reasoning(CBR) model for ultra-fast cooling in plate mill

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Wang, Zhaodong; Wang, Guodong

    2014-11-01

    The new generation thermo-mechanical control process (TMCP) based on ultra-fast cooling is being widely adopted in plate mills to produce high-performance steel at low cost. The ultra-fast cooling system is complex because the temperature control error generated by the heat transfer mathematical model and the process parameters must be optimized. In order to simplify the system and improve the temperature control precision in the ultra-fast cooling process, several existing case-based reasoning (CBR) models are reviewed. Combined with the ultra-fast cooling process, a developed R5 CBR model is proposed, which mainly improves the case representation, similarity relation and retrieval modules. A certainty factor is defined in the semantic memory unit of a plate case, which provides not only internal data reliability but also product performance reliability. The similarity relation is improved by a defined power-index similarity membership function. The retrieval process is simplified and retrieval efficiency is clearly improved by the windmill retrieval algorithm. The proposed CBR model is used for predicting the case of the cooling strategy, and its capability is superior to that of the traditional process model. In order to perform comprehensive investigations on the ultra-fast cooling process, different steel plates are considered in the experiments. The validation experiments and industrial production runs of the proposed CBR model are carried out, which demonstrate that the finish cooling temperature (FCT) error is controlled within ±25°C and the quality rate of the product is more than 97%. The proposed CBR model can simplify the ultra-fast cooling system and deliver quality performance for steel products.

  16. Center for Shape Optimization and Material Layout

    DTIC Science & Technology

    1992-01-01

    ...that eventually participate in the optimal layout for non-self-adjoint problems. Currently, these microstructures are worked out numerically [6]... the fourth-order problem arising in the theory of plates. 1.2 The Fourth Order Problems: Direct Approach in the Optimal Design of Plates. The state of... constraint set. In fact, the constraint set is not only nonlinear, it is also non-smooth, and even non-convex. Worst of all, we do not even have an analytic...

  17. Fast computer simulation of reconstructed image from rainbow hologram based on GPU

    NASA Astrophysics Data System (ADS)

    Shuming, Jiao; Yoshikawa, Hiroshi

    2015-10-01

    A fast computer simulation solution for rainbow hologram reconstruction based on GPU is proposed. In the commonly used segment Fourier transform method for rainbow hologram reconstruction, the computation of 2D Fourier transform on each hologram segment is very time consuming. GPU-based parallel computing can be applied to improve the computing speed. Compared with CPU computing, simulation results indicate that our proposed GPU computing can effectively reduce the computation time by as much as eight times.

  18. Automated layout and phase assignment techniques for dark-field alternating PSM

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Wang, Huijuan; Zelikovsky, Alexander

    1998-12-01

    We describe new, efficient algorithms for layout modification and phase assignment for dark-field alternating-type phase-shifting masks in the single-exposure regime. We make the following contributions. First, we give optimal and fast algorithms to minimize the number of phase conflicts that must be removed to ensure 2-colorability of the conflict graph. These methods can potentially reduce runtime and/or improve solution quality, compared to previous approaches of Moniwa et al. and Ooi et al. Second, we suggest a new iterative 2-coloring and compaction approach that simultaneously optimizes layout and phase assignment. The approach iteratively performs the following steps: (1) compact the layout and find the conflict graph; (2) find the minimum set of edges whose deletion makes the conflict graph bipartite; and (3) add a new compaction constraint for each edge in this minimum set, such that the corresponding pair of features will no longer conflict. Third, we describe additional approaches to co-optimization of layout and phase assignment for alternating PSM. Preliminary computational experience appears promising.
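    A greedy illustration of the conflict-graph coloring step is sketched below: a BFS 2-coloring of the conflict graph collects edges whose endpoints receive the same phase, which are candidates for layout modification. The paper's contribution is to minimize this set optimally; the sketch makes no such guarantee.

```python
# Greedy illustration only: BFS 2-coloring of a phase-conflict graph,
# collecting edges whose endpoints receive the same phase (candidates for
# layout modification).
from collections import deque
import networkx as nx

def two_color_conflicts(G):
    phase = {}
    conflicts = set()
    for start in G.nodes():
        if start in phase:
            continue
        phase[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in G.neighbors(u):
                if v not in phase:
                    phase[v] = 1 - phase[u]
                    queue.append(v)
                elif phase[v] == phase[u]:
                    conflicts.add(tuple(sorted((u, v))))
    return phase, conflicts

# An odd cycle forces at least one conflict edge to be resolved by layout changes.
G = nx.cycle_graph(5)
phase, conflicts = two_color_conflicts(G)
print(phase, conflicts)
```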

  19. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    SciTech Connect

    Gong, Zhenhuan; Boyuka, David; Zou, X; Liu, Gary; Podhorszki, Norbert; Klasky, Scott A; Ma, Xiaosong; Samatova, Nagiza F

    2013-01-01

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.

  20. Hyper-NA imaging of 45nm node random CH layouts using inverse lithography

    NASA Astrophysics Data System (ADS)

    Hendrickx, E.; Tritchkov, A.; Sakajiri, K.; Granik, Y.; Kempsell, M.; Vandenberghe, G.

    2008-03-01

    The imaging of Contact Hole (CH) layouts is one of the most challenging tasks in hyper-NA lithography. Contact Hole layouts can be printed using different illumination conditions, but an illumination condition that provides good imaging at dense pitches (such as Quasar or Quadrupole illumination), will usually suffer from poor image contrast and Depth of Focus (DOF) towards the more isolated pitches. Assist Features (AF) can be used to improve the imaging of more isolated contact holes, but for a random CH layout, an AF placement rule would have to be developed for every CH configuration in the design. This makes optimal AF placement an almost impossible task for random layouts when using rule-based AF placement. We have used an inverse lithography technique by Mentor Graphics, to treat a random contact hole layout (drawn at minimal pitch 115nm) for imaging at NA 1.35. The combination of the dense 115nm pitch and available NA of 1.35 makes the use of Quasar illumination necessary, and the treatment of the clip with inverse lithography automatically generated optimal (model-based) AF for all geometries in the design. Because the inverse lithography solution consists of smooth shapes rather than rectangles, mask manufacturability becomes a concern. The algorithm allows simplification of the smooth shapes into rectangles and greatly improves mask write time. Wafer prints of clips treated with inverse lithography at NA 1.35 confirm the benefit of the assist features.

  1. A Universal Fast Algorithm for Sensitivity-Based Structural Damage Detection

    PubMed Central

    Yang, Q. W.; Liu, J. K.; Li, C. H.; Liang, C. F.

    2013-01-01

    Structural damage detection using measured response data has emerged as a new research area in civil, mechanical, and aerospace engineering communities in recent years. In this paper, a universal fast algorithm is presented for sensitivity-based structural damage detection, which can quickly improve the calculation accuracy of the existing sensitivity-based technique without any high-order sensitivity analysis or multi-iterations. The key formula of the universal fast algorithm is derived from the stiffness and flexibility matrix spectral decomposition theory. With the introduction of the key formula, the proposed method is able to quickly achieve more accurate results than that obtained by the original sensitivity-based methods, regardless of whether the damage is small or large. Three examples are used to demonstrate the feasibility and superiority of the proposed method. It has been shown that the universal fast algorithm is simple to implement and quickly gains higher accuracy over the existing sensitivity-based damage detection methods. PMID:24453815

  2. LAYOUT AND SIZING OF ESF ALCOVES AND REFUGE CHAMBERS

    SciTech Connect

    John Beesley and Romeo S. Jurani

    1995-08-25

    The purpose of this analysis is to establish size requirements and approximate locations of Exploratory Studies Facility (ESF) test and operations alcoves, including refuge chambers during construction of the Topopah Spring (TS) loop. Preliminary conceptual layouts for non-deferred test alcoves will be developed to examine construction feasibility based on current test plans and available equipment. The final location and configuration layout for alcoves will be developed when in-situ rock conditions can be visually determined. This will be after the TBM has excavated beyond the alcove location and the rock has been exposed. The analysis will examine the need for construction of walkways and electrical alcoves in the ramps and main drift. Niches that may be required to accommodate conveyor booster drives and alignments are not included in this analysis. The analysis will develop design criteria for refuge chambers to meet MSHA requirements and will examine the strategic location of refuge chambers based on their potential use in various ESF fire scenarios. This document supersedes DI:BABE00000-01717-0200-00003 Rev 01, ''TS North Ramp Alcove and Stubout Location Analysis'' in its entirety (Reference 5-6).

  3. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
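    The paper's efficiency comes from adjoint gradients of a RANS model; the toy sketch below only illustrates gradient-based turbine siting on a simple Jensen-type wake model with scipy, using finite-difference gradients, so the wake parameters, wind direction, site bounds, and objective are all assumptions and none of the adjoint machinery is reproduced.

```python
# Toy gradient-based layout optimization with a simple Jensen-type wake
# model and scipy (NOT the paper's adjoint RANS approach). Wind is assumed
# to blow along +x; scipy approximates gradients by finite differences,
# whereas the adjoint method obtains them at a cost independent of the
# number of turbine coordinates.
import numpy as np
from scipy.optimize import minimize

D, k, U0 = 80.0, 0.05, 8.0        # rotor diameter, wake decay, inflow (assumed)

def farm_power(flat_xy):
    xy = flat_xy.reshape(-1, 2)
    power = 0.0
    for i, (xi, yi) in enumerate(xy):
        deficit_sq = 0.0
        for j, (xj, yj) in enumerate(xy):
            dx, dy = xi - xj, yi - yj
            # Turbine i lies in the wake cone of an upstream turbine j.
            if j != i and dx > 0 and abs(dy) < D / 2 + k * dx:
                deficit_sq += ((2.0 / 3.0) * (D / (D + 2 * k * dx)) ** 2) ** 2
        u = U0 * (1.0 - np.sqrt(deficit_sq))
        power += max(u, 0.0) ** 3                 # power ~ wind speed cubed
    return -power                                 # minimize negative power

x0 = np.array([0, 0, 200, 50, 400, -50], float)   # three initial positions
bounds = [(0, 2000), (-400, 400)] * 3             # simple site boundary
res = minimize(farm_power, x0, method="L-BFGS-B", bounds=bounds)
print(res.x.reshape(-1, 2))
```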

  4. CerioFAST{trademark}: An acute toxicity test based on Ceriodaphnia dubia feeding behavior

    SciTech Connect

    Bitton, G.; Rhodes, K.; Koopman, B.

    1996-02-01

    The authors have developed a rapid acute toxicity test (CerioFAST{trademark}) based on suppression of the feeding activity of Ceriodaphnia dubia in the presence of toxicants. The bioassay consists of a 1-h exposure period to a given toxicant. Yeast cells, stained with a fluorescent dye, are added 20 min before the end of the exposure period. Response to a toxic sample is indicated by the absence of fluorescence in the gut of the daphnids. CerioFAST was compared to the standard 48-h C. dubia acute bioassay, using heavy metals and organic compounds. CerioFAST EC50s of Cd, Cu, Pb, Ag, Zn, and carbofuran were in the 0.01--0.1-mg/L range, whereas EC50s of hexachloroethane, pentachlorophenol, trichlorophenol, and lindane were in the 1--10-mg/L range. CerioFAST EC50s of the heavy metals and organics were well correlated with EC50s obtained with the 48-h C. dubia bioassay.

  5. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.

  6. Assessing cognitive processes with diffusion model analyses: a tutorial based on fast-dm-30

    PubMed Central

    Voss, Andreas; Voss, Jochen; Lerche, Veronika

    2015-01-01

    Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become more and more popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for the parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modeling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss and Voss, 2007) for diffusion model data analysis. The most important improvement of the fast-dm version is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in applicability for different data sets. PMID:25870575
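    The underlying two-threshold diffusion process can be simulated directly, as in the sketch below (Euler-Maruyama); fast-dm's parameter estimation is not reproduced here, and the drift, threshold separation, and starting-point values are arbitrary illustration choices.

```python
# Minimal simulation of the two-threshold diffusion process the model
# assumes (Euler-Maruyama); fast-dm estimates parameters from such RT
# distributions, which is not reproduced here.
import numpy as np

def simulate_trial(v=0.5, a=1.0, z=0.5, dt=1e-3, s=1.0, rng=None):
    """v: drift rate, a: threshold separation, z: relative starting point."""
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, int(x >= a)        # (response time, 1 = upper boundary)

rng = np.random.default_rng(0)
trials = [simulate_trial(rng=rng) for _ in range(2000)]
rts = np.array([t for t, _ in trials])
upper = np.mean([r for _, r in trials])
print(f"mean RT: {rts.mean():.3f} s, upper-boundary proportion: {upper:.2f}")
```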

  7. Study of wave-particle interaction between fast Magnetosonic and energetic electrons based on numerical simulation

    NASA Astrophysics Data System (ADS)

    Fu, S.

    2015-12-01

    There are many energetic electrons in Earth's radiation belts. When geomagnetic activity intensifies, the energy flux of energetic electrons in the outer radiation belt can increase by more than a factor of ten, so understanding how energetic electrons are generated and what determines their lifetimes is very important for space weather research. The acceleration of radiation belt electrons depends mainly on wave-particle interactions: whistler-mode chorus is the main driver of local acceleration and can both accelerate energetic electrons and cause their loss; geomagnetic ULF pulsations drive inward radial diffusion that energizes the electrons; and recent observations show that fast magnetosonic waves may also accelerate energetic electrons. For this reason, we study the wave-particle interaction between fast magnetosonic waves and energetic electrons with numerical simulation, in which the key element is that the combination of the highly warped storm-time geomagnetic field and the fast magnetosonic wave field is used as the electromagnetic environment for the test particles. The energy, pitch-angle, and cross diffusion coefficients are calculated in this simulation to study how the electrons gain energy from the fast magnetosonic waves. The diffusion coefficients obtained in a dipole magnetic field and in a non-dipole storm-time magnetic field are compared, and the dynamics of electrons at selected initial energies are shown.

  8. Polylactide-based polyurethane shape memory nanocomposites (Fe3O4/PLAUs) with fast magnetic responsiveness

    NASA Astrophysics Data System (ADS)

    Gu, Shu-Ying; Jin, Sheng-Peng; Gao, Xie-Feng; Mu, Jian

    2016-05-01

    Polylactide-based polyurethane shape memory nanocomposites (Fe3O4/PLAUs) with fast magnetic responsiveness are presented. For the purpose of fast response and homogeneous dispersion of the magnetic nanoparticles, oleic acid was used to improve the dispersibility of Fe3O4 nanoparticles in the polymer matrix. A homogeneous distribution of Fe3O4 nanoparticles in the polymer matrix was obtained for nanocomposites with low Fe3O4 loading content. Slight agglomeration was observed for nanocomposites with 6 wt% and 9 wt% loading content, leading to a small decline in the mechanical properties. PLAU and its nanocomposites have a glass transition around 52 °C, which can be used as the triggering temperature. PLAU and its nanocomposites have shape fixity ratios above 99%, shape recovery ratios above 82% for the first cycle and shape recovery ratios above 91% for the second cycle. PLAU and its nanocomposites also exhibit fast water-bath and magnetic responsiveness. The magnetic recovery time decreases with increasing Fe3O4 loading content due to the improved heating performance at higher filler weight percentages. The nanocomposites respond quickly in an alternating magnetic field and have potential application in biomedical areas such as intravascular stents.

  9. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    NASA Astrophysics Data System (ADS)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the capacity that can be deployed in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52% of those of TCAM, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.

  10. Fast implementation of sparse iterative covariance-based estimation for source localization.

    PubMed

    Zhang, Qilin; Abeida, Habti; Xue, Ming; Rowe, William; Li, Jian

    2012-02-01

    Fast implementations of the sparse iterative covariance-based estimation (SPICE) algorithm are presented for source localization with a uniform linear array (ULA). SPICE is a robust, user parameter-free, high-resolution, iterative, and globally convergent estimation algorithm for array processing. SPICE offers superior resolution and lower sidelobe levels for source localization compared to the conventional delay-and-sum beamforming method; however, a traditional SPICE implementation has a higher computational complexity (which is exacerbated in higher dimensional data). It is shown that the computational complexity of the SPICE algorithm can be mitigated by exploiting the Toeplitz structure of the array output covariance matrix using Gohberg-Semencul factorization. The SPICE algorithm is also extended to the acoustic vector-sensor ULA scenario with a specific nonuniform white noise assumption, and the fast implementation is developed based on the block Toeplitz properties of the array output covariance matrix. Finally, numerical simulations illustrate the computational gains of the proposed methods.
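
    The Toeplitz shortcut the abstract refers to can be illustrated with a small sketch (not the authors' implementation): for a ULA the covariance matrix is Toeplitz, so systems R x = b can be solved with a Levinson-type recursion instead of a general dense solve. The covariance sequence below is hypothetical; the paper's actual fast implementation uses the Gohberg-Semencul factorization inside the SPICE iterations.

        import numpy as np
        from scipy.linalg import toeplitz, solve_toeplitz

        # Sketch: solving R x = b while exploiting the Toeplitz structure of R.
        M = 64
        rng = np.random.default_rng(0)
        first_col = 0.9 ** np.arange(M)           # hypothetical covariance sequence
        first_col[0] += 1e-2                      # diagonal loading keeps R well conditioned
        b = rng.standard_normal(M)

        x_fast = solve_toeplitz(first_col, b)              # structured solve, O(M^2)
        x_dense = np.linalg.solve(toeplitz(first_col), b)  # dense solve, O(M^3)
        assert np.allclose(x_fast, x_dense)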

  11. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    NASA Astrophysics Data System (ADS)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a cube-per-order index policy-based heuristic, the standard tabu search (TS), greedy TS, and dynamic neighbourhood based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.
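
    A schematic sketch of the standard tabu search loop (not the authors' specific variants) may help picture the approach: items are assigned to cells, neighbours are generated by swapping two assignments, recently used swaps are tabu for a fixed tenure, and the best admissible neighbour is accepted. The cost callable is a placeholder for an objective encoding the product-dependent horizontal/vertical travel costs and adjacency constraints.

        import random

        # Schematic tabu search over a permutation-style assignment of items to cells.
        def tabu_search(cost, assignment, iters=500, tenure=20, samples=50, seed=0):
            rng = random.Random(seed)
            current, best = list(assignment), list(assignment)
            tabu = {}                                   # move -> iteration at which it expires
            for it in range(iters):
                candidates = []
                for _ in range(samples):                # sample a neighbourhood of swaps
                    i, j = rng.sample(range(len(current)), 2)
                    move = (min(i, j), max(i, j))
                    neighbour = list(current)
                    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                    candidates.append((cost(neighbour), move, neighbour))
                candidates.sort(key=lambda c: c[0])
                for c, move, neighbour in candidates:
                    if tabu.get(move, -1) < it or c < cost(best):   # aspiration criterion
                        current = neighbour
                        tabu[move] = it + tenure
                        if c < cost(best):
                            best = list(neighbour)
                        break
            return best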

  12. Performance study of the fast timing Cherenkov detector based on a microchannel plate PMT

    NASA Astrophysics Data System (ADS)

    Finogeev, D. A.; Grigoriev, V. A.; Kaplin, V. A.; Karavichev, O. V.; Karavicheva, T. L.; Konevskikh, A. S.; Kurepin, A. B.; Kurepin, A. N.; Loginov, V. A.; Mayevskaya, A. I.; Melikyan, Yu A.; Morozov, I. V.; Serebryakov, D. V.; Shabanov, A. I.; Slupecki, M.; Tikhonov, A. A.; Trzaska, W. H.

    2017-01-01

    A prototype of a fast timing Cherenkov detector, applicable in high-energy collider experiments, has been developed based on the modified Planacon XP85012 MCP-PMT and fused silica radiators. We present the reasons for and a description of the MCP-PMT modification, the timing and amplitude characteristics of the prototype, including a summary of the detector's response to particle hits at oblique angles, and the MCP-PMT performance at high illumination rates.

  13. Accelerated susceptibility-based positive contrast imaging of MR compatible metallic devices based on modified fast spin echo sequences

    NASA Astrophysics Data System (ADS)

    Shi, Caiyun; Xie, Guoxi; Zhang, Yongqin; Zhang, Xiaoyong; Chen, Min; Su, Shi; Dong, Ying; Liu, Xin; Ji, Jim

    2017-04-01

    This study aims to develop an accelerated susceptibility-based positive contrast MR imaging method for visualizing MR compatible metallic devices. A modified fast spin echo sequence is used to accelerate data acquisition. Each readout gradient in the modified fast spin echo is slightly shifted by a short interval T_shift. Phase changes accumulated within T_shift are then used to calculate the susceptibility map by using a kernel deconvolution algorithm with regularized ℓ1 minimization. To evaluate the proposed fast spin echo method, three phantom experiments were conducted and compared to a spin echo based technique and the gold standard CT for visualizing biopsy needles and brachytherapy seeds. Compared to the spin echo based technique, the data sampling speed of the proposed method was faster by 2-4 times while still being able to accurately visualize and identify the location of the biopsy needle and brachytherapy seeds. These results were confirmed by CT images of the same devices. Results also demonstrated that the proposed fast spin echo method can achieve good visualization of the brachytherapy seeds in positive contrast and in different orientations. It is also capable of correctly differentiating brachytherapy seeds from other similar structures on conventional magnitude images.

  14. Repository Surface Design Site Layout Analysis

    SciTech Connect

    Montalvo, H.R.

    1998-02-27

    The purpose of this analysis is to establish the arrangement of the Repository surface facilities and features near the North Portal. The analysis updates and expands the North Portal area site layout concept presented in the ACD (Reference 5.5), including changes to reflect the resizing of the Waste Handling Building (WHB), Waste Treatment Building (WTB), Carrier Preparation Building (CPB), and site parking areas; the addition of the Carrier Washdown Buildings (CWBs); the elimination of the Cask Maintenance Facility (CMF); and the development of a concept for site grading and flood control. The analysis also establishes the layout of the surface features (e.g., roads and utilities) that connect all the repository surface areas (North Portal Operations Area, South Portal Development Operations Area, Emplacement Shaft Surface Operations Area, and Development Shaft Surface Operations Area) and locates an area for a potential lag storage facility. Details of South Portal and shaft layouts will be covered in separate design analyses. The objective of this analysis is to provide a suitable level of design for the Viability Assessment (VA). The analysis was revised to incorporate additional material developed since the issuance of Revision 01. This material includes safeguards and security input, utility system input (size and location of fire water tanks and pump houses, potable water and sanitary sewage rates, size of wastewater evaporation pond, size and location of the utility building, size of the bulk fuel storage tank, and size and location of other exterior process equipment), main electrical substation information, redundancy of water supply and storage for the fire support system, and additional information on the storm water retention pond.

  15. Repository surface design site layout analysis

    SciTech Connect

    Montalvo, H.R.

    1998-02-27

    The purpose of this analysis is to establish the arrangement of the Yucca Mountain Repository surface facilities and features near the North Portal. The analysis updates and expands the North Portal area site layout concept presented in the ACD, including changes to reflect the resizing of the Waste Handling Building (WHB), Waste Treatment Building (WTB), Carrier Preparation Building (CPB), and site parking areas; the addition of the Carrier Washdown Buildings (CWBs); the elimination of the Cask Maintenance Facility (CMF); and the development of a concept for site grading and flood control. The analysis also establishes the layout of the surface features (e.g., roads and utilities) that connect all the repository surface areas (North Portal Operations Area, South Portal Development Operations Area, Emplacement Shaft Surface Operations Area, and Development Shaft Surface Operations Area) and locates an area for a potential lag storage facility. Details of South Portal and shaft layouts will be covered in separate design analyses. The objective of this analysis is to provide a suitable level of design for the Viability Assessment (VA). The analysis was revised to incorporate additional material developed since the issuance of Revision 01. This material includes safeguards and security input, utility system input (size and location of fire water tanks and pump houses, potable water and sanitary sewage rates, size of wastewater evaporation pond, size and location of the utility building, size of the bulk fuel storage tank, and size and location of other exterior process equipment), main electrical substation information, redundancy of water supply and storage for the fire support system, and additional information on the storm water retention pond.

  16. Fast and highly specific DNA-based multiplex detection on a solid support.

    PubMed

    Barišić, Ivan; Kamleithner, Verena; Schönthaler, Silvia; Wiesinger-Mayr, Herbert

    2015-01-01

    Highly specific and fast multiplex detection methods are essential to conduct reasonable DNA-based diagnostics and are especially important for characterising infectious diseases. More than 1000 genetic targets such as antibiotic resistance genes, virulence factors and phylogenetic markers have to be identified as fast as possible to facilitate the correct treatment of a patient. In the present work, we developed a novel ligation-based DNA probe concept that was combined with microarray technology and used it for the detection of bacterial pathogens. The novel linear chain (LNC) probes identified all tested species correctly within 1 h based on their 16S rRNA gene in a 25-plex reaction. Genomic DNA was used directly as template in the ligation reaction, identifying as few as 10^7 cells without any pre-amplification. The high specificity was further demonstrated by characterising a single nucleotide polymorphism (SNP), with no false positive fluorescence signals from the untargeted SNP variants. In comparison to conventional microarray probes, the sensitivity of the novel LNC3 probes was higher by a factor of 10 or more. In summary, we present a fast, simple, highly specific and sensitive multiplex detection method adaptable for a wide range of applications.

  17. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving the detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a measurement of moving pixel intensity, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
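
    The kind of moment feature the abstract builds on can be sketched in a few lines (this is not the MFEA itself): the raw moment m_pq sums x^p y^q weighted by pixel intensity, so the zeroth and first moments give the total intensity and its centroid, and tracking how these change between frames gives a cheap proxy for motion. The bright-blob patch below is a hypothetical example.

        import numpy as np

        # Sketch: raw image moments of a patch and the intensity centroid they define.
        def raw_moment(patch, p, q):
            y, x = np.mgrid[:patch.shape[0], :patch.shape[1]]
            return float(np.sum((x ** p) * (y ** q) * patch))

        def centroid(patch):
            m00 = raw_moment(patch, 0, 0)
            return raw_moment(patch, 1, 0) / m00, raw_moment(patch, 0, 1) / m00

        patch = np.zeros((32, 32))
        patch[10:20, 5:15] = 1.0        # hypothetical bright region
        cx, cy = centroid(patch)        # shifts of (cx, cy) between frames hint at motion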

  18. Pharmacy layout: What are consumers' perceptions?.

    PubMed

    Emmett, Dennis; Paul, David P; Chandra, Ashish; Barrett, Hilton

    2006-01-01

    The physical layout of a retail pharmacy can play a significant role in the development of customers' perceptions, which can have a positive (or negative) impact on its sales potential. Compared to most general merchandise stores, pharmacies are more concerned about safety and security issues due to the nature of their products. This paper discusses these aspects as well as the physical and professional environments of retail pharmacies that influence customers' perceptions, and how these vary across chain, independent, and hospital pharmacies.

  19. The layout of a bacterial genome.

    PubMed

    Képès, François; Jester, Brian C; Lepage, Thibaut; Rafiei, Nafiseh; Rosu, Bianca; Junier, Ivan

    2012-07-16

    Recently the mismatch between our newly acquired capacity to synthetize DNA at genome scale, and our low capacity to design ab initio a functional genome has become conspicuous. This essay gathers a variety of constraints that globally shape natural genomes, with a focus on eubacteria. These constraints originate from chromosome replication (leading/lagging strand asymmetry; gene dosage gradient from origin to terminus; collisions with the transcription complexes), from biased codon usage, from noise control in gene expression, and from genome layout for co-functional genes. On the basis of this analysis, lessons are drawn for full genome design.

  20. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects relies mainly on ground-based radar; because of the limited capability of existing radar facilities, a large number of ground-based radars need to be built in the next few years to meet current space surveillance demands. How to optimize the deployment of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for optimizing the deployment of a ground-based radar surveillance network is to run detection simulations of all possible stations against cataloged data, carry out a comprehensive comparative analysis of the various simulation results with a combinatorial method, and then select an optimal result as the station layout scheme. This method is time consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be handled with the traditional method, and no better way to solve this problem has been available until now. In this paper, the target detection procedure was simplified. First, the space coverage of ground-based radar was simplified and a space coverage projection model of radar facilities at different orbit altitudes was built; then a simplified model of objects crossing the radar coverage was established according to the characteristics of space object orbital motion. After these two simplification steps, the computational complexity of target detection was greatly reduced, and simulation results confirmed the correctness of the simplified models. In addition, the detection areas of the ground-based radar network can be easily computed with the

  1. MetaSensing's FastGBSAR: ground based radar for deformation monitoring

    NASA Astrophysics Data System (ADS)

    Rödelsperger, Sabine; Meta, Adriano

    2014-10-01

    The continuous monitoring of ground deformation and structural movement has become an important task in engineering. MetaSensing introduces a novel sensor system, the Fast Ground Based Synthetic Aperture Radar (FastGBSAR), based on innovative technologies that have already been successfully applied to airborne SAR applications. The FastGBSAR allows the remote sensing of deformations of a slope or infrastructure from up to a distance of 4 km. The FastGBSAR can be setup in two different configurations: in Real Aperture Radar (RAR) mode it is capable of accurately measuring displacements along a linear range profile, ideal for monitoring vibrations of structures like bridges and towers (displacement accuracy up to 0.01 mm). Modal parameters can be determined within half an hour. Alternatively, in Synthetic Aperture Radar (SAR) configuration it produces two-dimensional displacement images with an acquisition time of less than 5 seconds, ideal for monitoring areal structures like dams, landslides and open pit mines (displacement accuracy up to 0.1 mm). The MetaSensing FastGBSAR is the first ground based SAR instrument on the market able to produce two-dimensional deformation maps with this high acquisition rate. By that, deformation time series with a high temporal and spatial resolution can be generated, giving detailed information useful to determine the deformation mechanisms involved and eventually to predict an incoming failure. The system is fully portable and can be quickly installed on bedrock or a basement. The data acquisition and processing can be fully automated leading to a low effort in instrument operation and maintenance. Due to the short acquisition time of FastGBSAR, the coherence between two acquisitions is very high and the phase unwrapping is simplified enormously. This yields a high density of resolution cells with good quality and high reliability of the acquired deformations. The deformation maps can directly be used as input into an Early

  2. Safety assessment in plant layout design using indexing approach: implementing inherent safety perspective. Part 1 - guideword applicability and method description.

    PubMed

    Tugnoli, Alessandro; Khan, Faisal; Amyotte, Paul; Cozzani, Valerio

    2008-12-15

    Layout planning plays a key role in the inherent safety performance of process plants since this design feature controls the possibility of accidental chain-events and the magnitude of possible consequences. A lack of suitable methods to promote the effective implementation of inherent safety in layout design calls for the development of new techniques and methods. In the present paper, a safety assessment approach suitable for layout design in the critical early phase is proposed. The concept of inherent safety is implemented within this safety assessment; the approach is based on an integrated assessment of inherent safety guideword applicability within the constraints typically present in layout design. Application of these guidewords is evaluated along with unit hazards and control devices to quantitatively map the safety performance of different layout options. Moreover, the economic aspects related to safety and inherent safety are evaluated by the method. Specific sub-indices are developed within the integrated safety assessment system to analyze and quantify the hazard related to domino effects. The proposed approach is quick in application, auditable and shares a common framework applicable in other phases of the design lifecycle (e.g. process design). The present work is divided in two parts: Part 1 (current paper) presents the application of inherent safety guidelines in layout design and the index method for safety assessment; Part 2 (accompanying paper) describes the domino hazard sub-index and demonstrates the proposed approach with a case study, thus evidencing the introduction of inherent safety features in layout design.

  3. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, the performance is calculated using the ray-tracing tool SolCal and the costs of the heliostats are calculated using a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts; for each configuration a cost-performance ratio is calculated, and based on that the best geometry and field layout is selected in each optimization step. In order to find the best configuration, this step is repeated until no significant improvement in the results is observed.
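
    The iteration the abstract describes can be pictured with a schematic genetic-algorithm sketch (not the SIJ tool): each candidate is a parameter vector standing in for heliostat geometry and field-layout parameters, the performance and cost callables stand in for the SolCal ray-tracing result and the detailed cost function, and selection keeps the lowest cost-performance ratio.

        import random

        # Schematic GA minimising cost(p) / performance(p) over a parameter vector p.
        def evolve(performance, cost, dim=6, pop_size=30, generations=100, seed=0):
            rng = random.Random(seed)
            score = lambda p: cost(p) / max(performance(p), 1e-9)
            pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=score)
                parents = pop[: pop_size // 2]          # keep the better half
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = rng.sample(parents, 2)
                    cut = rng.randrange(1, dim)
                    child = a[:cut] + b[cut:]           # one-point crossover
                    k = rng.randrange(dim)
                    child[k] += rng.gauss(0.0, 0.05)    # small mutation
                    children.append(child)
                pop = parents + children
            return min(pop, key=score)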

  4. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    PubMed

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  5. Turbo recognition: a statistical approach to layout analysis

    NASA Astrophysics Data System (ADS)

    Tokuyasu, Taku A.; Chou, Philip A.

    2000-12-01

    Turbo recognition (TR) is a communication theory approach to the analysis of rectangular layouts, in the spirit of Document Image Decoding. The TR algorithm, inspired by turbo decoding, is based on a generative model of image production, in which two grammars are used simultaneously to describe structure in orthogonal (horizontal and vertical) directions. This enables TR to strictly embody non-local constraints that cannot be taken into account by local statistical methods. This basis in finite state grammars also allows TR to be quickly retargeted to new domains. We illustrate some of the capabilities of TR with two examples involving realistic images. While TR, like turbo decoding, is not guaranteed to recover the statistically optimal solution, we present an experiment that demonstrates its ability to produce optimal or near-optimal results on a simple yet nontrivial example, the recovery of a filled rectangle in the midst of noise. Unlike methods such as stochastic context free grammars and exhaustive search, which are often intractable beyond small images, turbo recognition scales linearly with image size, suggesting TR as an efficient yet near-optimal approach to statistical layout analysis.

  6. Stereoscopic layout of a perspective flight guidance display

    NASA Astrophysics Data System (ADS)

    Hammer, Matthias; Muecke, Stephan K. M.; Mayer, Udo

    1997-05-01

    Analyses of aviation accidents ascribe about 75% of all incidents to human (pilot) behavior. A strong effort is being made to improve ergonomic cockpit layout, because of dramatic changes in the airspace structure, the increase in air traffic, and larger aircraft. One part of an interdisciplinary research project investigates the potential of stereoscopic flight-guidance displays in order to improve pilots' situation awareness. This experimental approach, which aims to research and apply ergonomic design recommendations for stereoscopic flight displays, is based upon a new type of perspective flight-guidance display. The examination of existing research regarding stereoscopic flight displays reveals a lack of basic knowledge, as well as a need for further systematic research into cockpit application. Thus the project contains experiments on different levels of abstraction, ranging from classic parameter experiments to flight simulator tests. Both current knowledge and recent discoveries are applied to superimposed 2-D flight parameters and to real and synthetic 3-D elements, such as a perspective landscape, other airplanes or flight prediction. The stereoscopic layout takes into consideration specific informational needs within different flight phases and is evaluated by means of pilot performance and pilot strain. Selected symbols of the flight guidance display and actual results are presented as examples of the research approach.

  7. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    NASA Astrophysics Data System (ADS)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factors of improvement of the analytical versus the numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU were 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
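
    The analytical route can be illustrated with a minimal sketch (not the GPU implementation): the FT of one axis-aligned rectangle has a closed form (a product of sinc functions with a phase factor for its position), and the layout spectrum is the sum of such terms over all rectangles into which the polygons decompose. The feature dimensions below are hypothetical and np.sinc denotes the normalised sinc sin(pi x)/(pi x).

        import numpy as np

        # Sketch: closed-form FT of an axis-aligned rectangle of size (w, h) centred at (x0, y0).
        def rect_ft(fx, fy, w, h, x0, y0):
            return (w * h * np.sinc(w * fx) * np.sinc(h * fy)
                    * np.exp(-2j * np.pi * (fx * x0 + fy * y0)))

        fx, fy = np.meshgrid(np.linspace(-5.0, 5.0, 256), np.linspace(-5.0, 5.0, 256))
        spectrum = rect_ft(fx, fy, w=1.0, h=0.5, x0=0.2, y0=-0.1)   # one hypothetical feature
        intensity = np.abs(spectrum) ** 2   # its contribution to the integrated light distribution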

  8. A fast preamplifier concept for SiPM-based time-of-flight PET detectors

    NASA Astrophysics Data System (ADS)

    Huizenga, J.; Seifert, S.; Schreuder, F.; van Dam, H. T.; Dendooven, P.; Löhner, H.; Vinke, R.; Schaart, D. R.

    2012-12-01

    Silicon photomultipliers (SiPMs) offer high gain and fast response to light, making them interesting for fast timing applications such as time-of-flight (TOF) PET. To fully exploit the potential of these photosensors, dedicated preamplifiers that do not deteriorate the rise time and signal-to-noise ratio are crucial. Challenges include the high sensor capacitance, typically >300 pF for a 3 mm×3 mm SiPM sensor, as well as oscillation issues. Here we present a preamplifier concept based on low noise, high speed transistors, designed for optimum timing performance. The input stage consists of a transimpedance common-base amplifier with a very low input impedance even at high frequencies, which assures good linearity and prevents the high detector capacitance from affecting the amplifier bandwidth. The amplifier has a fast timing output as well as a 'slow' energy output optimized for determining the total charge content of the pulse. The rise time of the amplifier is about 300 ps. The measured coincidence resolving time (CRT) for 511 keV photon pairs using the amplifiers in combination with 3 mm×3 mm SiPMs (Hamamatsu MPPC-S10362-33-050C) coupled to 3 mm×3 mm×5 mm LaBr3:Ce and LYSO:Ce crystals equals 95 ps FWHM and 138 ps FWHM, respectively.

  9. Test of a prototype neutron spectrometer based on diamond detectors in a fast reactor

    SciTech Connect

    Osipenko, M.; Ripani, M.; Ricco, G.; Caiffi, B.; Pompili, F.; Pillon, M.; Angelone, M.; Verona-Rinati, G.; Cardarelli, R.; Argiro, S.

    2015-07-01

    A prototype neutron spectrometer based on diamond detectors has been developed. The prototype consists of a ^6Li neutron converter sandwiched between two CVD diamond crystals. The radiation hardness of the diamond crystals makes the detector suitable for applications in low power research reactors, while its low sensitivity to gamma rays and low leakage current permit good energy resolution. A fast coincidence between the two crystals is used to reject background. The detector was read out using two different electronic chains connected to it by a few meters of cable: the first chain was based on conventional charge-sensitive amplifiers, the other used a custom fast charge amplifier developed for this purpose. The prototype has been tested at various neutron sources and showed its practicability. In particular, the detector was calibrated in a TRIGA thermal reactor (LENA laboratory, University of Pavia) with neutron fluxes of 10^8 n/cm^2·s and at the 3 MeV D-D monochromatic neutron source FNG (ENEA, Rome) with neutron fluxes of 10^6 n/cm^2·s. The neutron spectrum measurement was performed at the TAPIRO fast research reactor (ENEA, Casaccia) with fluxes of 10^9 n/cm^2·s. The obtained spectra were compared to Monte Carlo simulations, modeling the detector response with MCNP and Geant4.

  10. Fast entropy-based CABAC rate estimation for mode decision in HEVC.

    PubMed

    Chen, Wei-Gang; Wang, Xun

    2016-01-01

    High efficiency video coding (HEVC) seeks the best code tree configuration, the best prediction unit division and the best prediction mode by evaluating the rate-distortion functional in a recursive way and using a "try all and select the best" strategy. Further, HEVC only supports context adaptive binary arithmetic coding (CABAC), which has the disadvantage of being highly sequential and having strong data dependencies, as its entropy coder. Thus, the development of a fast rate estimation algorithm for CABAC-based coding has great practical significance for mode decision in HEVC. There are three elementary steps in the CABAC encoding process: binarization, context modeling, and binary arithmetic coding. Typical approaches to fast CABAC rate estimation simplify or eliminate the last two steps, but leave the binarization step unchanged. To maximize the reduction of computational complexity, we propose a fast entropy-based CABAC rate estimator in this paper. It eliminates not only the modeling and coding steps, but also the binarization step. Experimental results demonstrate that the proposed estimator is able to reduce the computational complexity of the mode decision in HEVC by 9-23% with negligible PSNR loss and BD-rate increment, and therefore exhibits applicability to practical HEVC encoder implementations.
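
    The core idea of an entropy-based rate estimate can be shown with a minimal sketch (not the proposed estimator): instead of running binarization, context modeling and arithmetic coding, the bit cost of a set of symbols is approximated by their self-information -log2(p) under a probability model, here a simple empirical one.

        import math
        from collections import Counter

        # Sketch: estimate the bit cost of a symbol stream from its empirical entropy.
        def entropy_rate_estimate(symbols):
            counts = Counter(symbols)
            total = len(symbols)
            return sum(-c * math.log2(c / total) for c in counts.values())

        bits = entropy_rate_estimate([0, 0, 1, 2, 0, 1, 0, 3])   # hypothetical symbol stream
        # In a mode-decision loop, such an estimate replaces the CABAC rate term R in
        # the rate-distortion cost J = D + lambda * R.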

  11. Document reconstruction by layout analysis of snippets

    NASA Astrophysics Data System (ADS)

    Kleber, Florian; Diem, Markus; Sablatnig, Robert

    2010-02-01

    Document analysis is performed to analyze entire forms (e.g. intelligent form analysis, table detection) or to describe the layout/structure of a document. Skew detection of scanned documents is also performed to support OCR algorithms that are sensitive to skew. In this paper, document analysis is applied to snippets of torn documents to calculate features for their reconstruction. Documents can be destroyed either intentionally, to make the printed content unavailable (e.g. tax fraud investigation, business crime), or through time-induced degeneration of ancient documents (e.g. bad storage conditions). Current reconstruction methods for manually torn documents deal with the shape, inpainting and texture synthesis techniques. In this paper, the possibility of using document analysis techniques on snippets to support the matching algorithm with additional features is shown. This comprises a rotational analysis, a color analysis and a line detection. As future work it is planned to extend the feature set with the paper type (blank, checked, lined), the type of writing (handwritten vs. machine printed) and the text layout of a snippet (text size, line spacing). Preliminary results show that these pre-processing steps can be performed reliably on a real dataset consisting of 690 snippets.

  12. Assessment of controls layout of Indian tractors.

    PubMed

    Kumar, Adarsh; Bhaskar, Gaikwad; Singh, J K

    2009-01-01

    Tractors in low-income countries are used both for farm and non-farm activities. Most of the tractors manufactured in India are products of collaboration with other countries. The design of tractors manufactured in India has not changed much in the past five decades, especially from an ergonomics point of view, because of economic considerations. This paper describes a tractor control layout assessment with respect to the Indian population and compares the location of controls with workspace envelopes and the IS12343 standard for commonly used tractors on Indian farms. Controls like the steering wheel, foot clutch, foot brake and foot accelerator are located in the areas defined by the IS12343 standard in some tractors, but they are not placed in the workspace envelopes of the Indian population. This results in a mismatch between the workspace envelope and the location of controls as defined by the standard. The controls need a complete change in their layout to fall within the workspace envelopes, as this cannot be achieved by providing seat movement in the horizontal and vertical directions in the present tractor design.

  13. Dynamical programming based turbulence velocimetry for fast visible imaging of tokamak plasma

    NASA Astrophysics Data System (ADS)

    Banerjee, Santanu; Zushi, H.; Nishino, N.; Mishra, K.; Onchi, T.; Kuzmin, A.; Nagashima, Y.; Hanada, K.; Nakamura, K.; Idei, H.; Hasegawa, M.; Fujisawa, A.

    2015-03-01

    An orthogonal dynamic programming (ODP) based particle image velocimetry (PIV) technique is developed to measure the time resolved flow field of the fluctuating structures at the plasma edge and scrape off layer (SOL) of tokamaks. This non-intrusive technique can provide two dimensional velocity fields at high spatial and temporal resolution from a fast framing image sequence and hence can provide better insights into plasma flow as compared to conventional probe measurements. Applicability of the technique is tested with simulated image pairs. Finally, it is applied to tangential fast visible images of QUEST plasma to estimate the SOL flow in inboard poloidal null-natural divertor configuration. This technique is also applied to investigate the intricate features of the core of the run-away dominated phase following the injection of a large amount of neutrals in the target Ohmic plasma. Development of the ODP-PIV code and its applicability on actual plasma images is reported.

  14. A fast multispectral light synthesiser based on LEDs and a diffraction grating

    PubMed Central

    Belušič, Gregor; Ilić, Marko; Meglič, Andrej; Pirih, Primož

    2016-01-01

    Optical experiments often require fast-switching light sources with adjustable bandwidths and intensities. We constructed a wavelength combiner based on a reflective planar diffraction grating and light emitting diodes with emission peaks from 350 to 630 nm that were positioned at the angles corresponding to the first diffraction order of the reversed beam. The combined output beam was launched into a fibre. The spacing between 22 equally wide spectral bands was about 15 nm. The time resolution of the pulse-width modulation drivers was 1 ms. The source was validated with a fast intracellular measurement of the spectral sensitivity of blowfly photoreceptors. In hyperspectral imaging of Xenopus skin circulation, the wavelength resolution was adequate to resolve haemoglobin absorption spectra. The device contains no moving parts, has low stray light and is intrinsically capable of multi-band output. Possible applications include visual physiology, biomedical optics, microscopy and spectroscopy. PMID:27558155

  15. Development of fast neutron radiography system based on portable neutron generator

    NASA Astrophysics Data System (ADS)

    Yi, Chia Jia; Nilsuwankosit, Sunchai

    2016-01-01

    Due to the high installation cost, safety concerns and immobility of research reactors, a neutron radiography system based on a portable neutron generator is proposed. Since the neutrons generated by a portable neutron generator are mostly fast neutrons, the system emphasizes the use of fast neutrons for the radiography. In order to suppress the influence of the X-rays produced by the neutron generator, a combination of a shielding material sandwiched between two identical imaging plates is used, and a binary XOR operation is then applied to combine the information from the two imaging plates. The raw images obtained confirm that the X-rays indeed have a large effect and that the XOR operation helps enhance the contribution of the neutrons.

  16. Preconditioning based on Calderon's formulae for periodic fast multipole methods for Helmholtz' equation

    NASA Astrophysics Data System (ADS)

    Niino, Kazuki; Nishimura, Naoshi

    2012-01-01

    Solution of periodic boundary value problems is of interest in various branches of science and engineering such as optics, electromagnetics and mechanics. In our previous studies we have developed a periodic fast multipole method (FMM) as a fast solver of wave problems in periodic domains. It has been found, however, that the convergence of the iterative solvers for linear equations slows down when the solutions show anomalies related to the periodicity of the problems. In this paper, we propose preconditioning schemes based on Calderon's formulae to accelerate convergence of iterative solvers in the periodic FMM for Helmholtz' equations. The proposed preconditioners can be implemented more easily than conventional ones. We present several numerical examples to test the performance of the proposed preconditioners. We show that the effectiveness of these preconditioners is definite even near anomalies.

  17. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
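
    A minimal sketch of the conventional FFT-based scheme that this paper improves on may help fix ideas (it is not the proposed method and omits the m parameter): the Bromwich integral is discretised with the trapezoidal rule and evaluated with one inverse FFT. The contour abscissa sigma must lie to the right of all singularities of F, and accuracy is best in the early part of the time window.

        import numpy as np

        # Sketch: trapezoidal-rule Bromwich inversion evaluated with a single FFT.
        def invlap_fft(F, N=1024, dt=0.005, sigma=1.0):
            dw = 2.0 * np.pi / (N * dt)              # frequency step tied to the time step
            w = dw * np.arange(N)
            Fk = F(sigma + 1j * w)
            Fk[0] *= 0.5                             # trapezoidal end-point weight
            t = dt * np.arange(N)
            f = (dw / np.pi) * np.exp(sigma * t) * np.real(N * np.fft.ifft(Fk))
            return t, f

        t, f = invlap_fft(lambda s: 1.0 / (s + 1.0))  # exact inverse is exp(-t)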

  18. A fast multispectral light synthesiser based on LEDs and a diffraction grating

    NASA Astrophysics Data System (ADS)

    Belušič, Gregor; Ilić, Marko; Meglič, Andrej; Pirih, Primož

    2016-08-01

    Optical experiments often require fast-switching light sources with adjustable bandwidths and intensities. We constructed a wavelength combiner based on a reflective planar diffraction grating and light emitting diodes with emission peaks from 350 to 630 nm that were positioned at the angles corresponding to the first diffraction order of the reversed beam. The combined output beam was launched into a fibre. The spacing between 22 equally wide spectral bands was about 15 nm. The time resolution of the pulse-width modulation drivers was 1 ms. The source was validated with a fast intracellular measurement of the spectral sensitivity of blowfly photoreceptors. In hyperspectral imaging of Xenopus skin circulation, the wavelength resolution was adequate to resolve haemoglobin absorption spectra. The device contains no moving parts, has low stray light and is intrinsically capable of multi-band output. Possible applications include visual physiology, biomedical optics, microscopy and spectroscopy.

  19. Response of a Si-diode-based device to fast neutrons.

    PubMed

    Spurný, Frantisek

    2005-02-01

    Semiconductor devices based on a Si-detector are frequently used for charged particle's detection; one application being in the investigation of cosmic radiation fields. From the spectra of energy deposition events in such devices, the total energy deposited by the radiation in silicon can be derived. This contribution presents the results of studies concerning the response of this type of detector to fast neutrons. First, the spectrum of energy deposition was established in fast neutron radiation fields with average energies from 0.5 to 50 MeV. It was found that these spectra vary significantly with the neutron energy. The comparison with the spectra registered in photon beams permitted an estimation of the part of energy deposited that could be attributed to neutrons. It was found that this part increases rapidly with neutron energy. The possibilities to use this type of detector for neutron detection and dosimetry for radiation protection are analysed and discussed.

  20. Development of fast neutron radiography system based on portable neutron generator

    SciTech Connect

    Yi, Chia Jia Nilsuwankosit, Sunchai

    2016-01-22

    Due to the high installation cost, safety concerns and immobility of research reactors, a neutron radiography system based on a portable neutron generator is proposed. Since the neutrons generated by a portable neutron generator are mostly fast neutrons, the system emphasizes the use of fast neutrons for the radiography. In order to suppress the influence of the X-rays produced by the neutron generator, a combination of a shielding material sandwiched between two identical imaging plates is used, and a binary XOR operation is then applied to combine the information from the two imaging plates. The raw images obtained confirm that the X-rays indeed have a large effect and that the XOR operation helps enhance the contribution of the neutrons.

  1. A VLSI Architecture with Multiple Fast Store-Based Block Parallel Processing for Output Probability and Likelihood Score Computations in HMM-Based Isolated Word Recognition

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuhiro; Shimazaki, Ryo; Yamamoto, Masatoshi; Takagi, Kazuyoshi; Takagi, Naofumi

    This paper presents a memory-efficient VLSI architecture for output probability computations (OPCs) of continuous hidden Markov models (HMMs) and likelihood score computations (LSCs). These computations are the most time consuming part of HMM-based isolated word recognition systems. We demonstrate multiple fast store-based block parallel processing (MultipleFastStoreBPP) for OPCs and LSCs and present a VLSI architecture that supports it. Compared with conventional fast store-based block parallel processing (FastStoreBPP) and stream-based block parallel processing (StreamBPP) architectures, the proposed architecture requires fewer registers and less processing time. The processing elements (PEs) used in the FastStoreBPP and StreamBPP architectures are identical to those used in the MultipleFastStoreBPP architecture. From a VLSI architectural viewpoint, a comparison shows that the proposed architecture is an improvement over the others, through efficient use of PEs and registers for storing input feature vectors.

  2. Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula

    NASA Astrophysics Data System (ADS)

    Shen, Fabin; Wang, Anbo

    2006-02-01

    The numerical calculation of the Rayleigh-Sommerfeld diffraction integral is investigated. The implementation of a fast-Fourier-transform (FFT) based direct integration (FFT-DI) method is presented, and Simpson's rule is used to improve the calculation accuracy. The sampling interval, the size of the computation window, and their influence on numerical accuracy and on computational complexity are discussed for the FFT-DI and the FFT-based angular spectrum (FFT-AS) methods. The performance of the FFT-DI method is verified by numerical simulation and compared with that of the FFT-AS method.
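
    The FFT-DI idea can be pictured with a minimal sketch (not the authors' code, and without the Simpson-rule weights they add for accuracy): the diffracted field is the linear convolution of the source field with the first Rayleigh-Sommerfeld impulse response, computed here with scipy's FFT-based convolution. The aperture, wavelength, propagation distance and sampling interval below are hypothetical.

        import numpy as np
        from scipy.signal import fftconvolve

        # Sketch: direct integration of the first Rayleigh-Sommerfeld integral as an
        # FFT-based linear convolution of the source field with the RS impulse response.
        def rs_fft_di(u0, wavelength, z, dx):
            k = 2.0 * np.pi / wavelength
            ny, nx = u0.shape
            x = dx * np.arange(-(nx - 1), nx)        # all source-to-observation offsets
            y = dx * np.arange(-(ny - 1), ny)
            X, Y = np.meshgrid(x, y)
            r = np.sqrt(X ** 2 + Y ** 2 + z ** 2)
            g = (z / (2.0 * np.pi)) * np.exp(1j * k * r) / r ** 2 * (1.0 / r - 1j * k)
            return fftconvolve(u0, g, mode='same') * dx * dx

        aperture = np.zeros((256, 256), dtype=complex)
        aperture[118:138, 118:138] = 1.0             # hypothetical square aperture
        field = rs_fft_di(aperture, wavelength=0.5e-6, z=5e-3, dx=2e-6)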

  3. Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Yin, Xindao; Shi, Luyao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis; Toumoulin, Christine

    2013-08-01

    In abdomen computed tomography (CT), repeated radiation exposures are often inevitable for cancer patients who receive surgery or radiotherapy guided by CT images. Low-dose scans should thus be considered in order to avoid the harm of accumulative x-ray radiation. This work is aimed at improving abdomen tumor CT images from low-dose scans by using a fast dictionary learning (DL) based processing. Stemming from sparse representation theory, the proposed patch-based DL approach allows effective suppression of both mottled noise and streak artifacts. The experiments carried out on clinical data show that the proposed method brings encouraging improvements in abdomen low-dose CT images with tumors.

  4. Framework for identifying recommended rules and DFM scoring model to improve manufacturability of sub-20nm layout design

    NASA Astrophysics Data System (ADS)

    Pathak, Piyush; Madhavan, Sriram; Malik, Shobhit; Wang, Lynn T.; Capodieci, Luigi

    2012-03-01

    This paper addresses a framework for building critical recommended rules and a methodology for devising scoring models using simulation or silicon data. Recommended rules need to be applied to critical layout configurations (edge- or polygon-based geometric relations), which can cause yield issues depending on layout context and process variability. Determining the critical recommended rules is the first step of this framework. Based on process specifications and design rule calculations, recommended rules are characterized by evaluating the manufacturability response to improvements in a layout-dependent parameter. This study is applied to critical 20nm recommended rules. In order to enable the scoring of layouts, this paper also discusses a CAD framework for supporting use-models that improve the DFM-compliance of a physical design.

  5. Operator Station Design System - A computer aided design approach to work station layout

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.

    1979-01-01

    The Operator Station Design System is resident in NASA's Johnson Space Center Spacecraft Design Division Performance Laboratory. It includes stand-alone minicomputer hardware and Panel Layout Automated Interactive Design and Crew Station Assessment of Reach software. The data base consists of the Shuttle Transportation System Orbiter Crew Compartment (in part), the Orbiter payload bay and remote manipulator (in part), and various anthropometric populations. The system is utilized to provide panel layouts, assess reach and vision, determine interference and fit problems early in the design phase, study design applications as a function of anthropometric and mission requirements, and to accomplish conceptual design to support advanced study efforts.

  6. A CFD-based wind solver for a fast response transport and dispersion model

    SciTech Connect

    Gowardhan, Akshay A; Brown, Michael J; Pardyjak, Eric R; Senocak, Inanc

    2010-01-01

    In many cities, ambient air quality is deteriorating leading to concerns about the health of city inhabitants. In urban areas with narrow streets surrounded by clusters of tall buildings, called street canyons, air pollution from traffic emissions and other sources is difficult to disperse and may accumulate resulting in high pollutant concentrations. For various situations, including the evacuation of populated areas in the event of an accidental or deliberate release of chemical, biological and radiological agents, it is important that models should be developed that produce urban flow fields quickly. For these reasons it has become important to predict the flow field in urban street canyons. Various computational techniques have been used to calculate these flow fields, but these techniques are often computationally intensive. Most fast response models currently in use are at a disadvantage in these cases as they are unable to correlate highly heterogeneous urban structures with the diagnostic parameterizations on which they are based. In this paper, a fast and reasonably accurate computational fluid dynamics (CFD) technique that solves the Navier-Stokes equations for complex urban areas has been developed called QUIC-CFD (Q-CFD). This technique represents an intermediate balance between fast (on the order of minutes for a several block problem) and reasonably accurate solutions. The paper details the solution procedure and validates this model for various simple and complex urban geometries.

  7. Accelerated materials design of fast oxygen ionic conductors based on first principles calculations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Mo, Yifei

    Over the past decades, significant research efforts have been dedicated to seeking fast oxygen ion conductor materials, which have important technological applications in electrochemical devices such as solid oxide fuel cells, oxygen separation membranes, and sensors. Recently, Na0.5Bi0.5TiO3 (NBT) was reported as a new family of fast oxygen ionic conductors. We present a first principles computational study that aims to understand the O diffusion mechanisms in the NBT material and to design this material with enhanced oxygen ionic conductivity. Using the NBT materials as an example, we demonstrate the computational capability to evaluate the phase stability, chemical stability, and ionic diffusion of ionic conductor materials. We reveal the effects of local atomistic configurations and dopants on oxygen diffusion and identify the intrinsic factors limiting the increase of the ionic conductivity of the NBT materials. Novel doping strategies were predicted and demonstrated by the first principles calculations. In particular, the K-doped NBT compound achieved good phase stability and an order of magnitude increase in oxygen ionic conductivity, up to 0.1 S cm^-1 at 900 K, compared to the experimental Mg-doped compositions. Our results provide new avenues for the future design of the NBT materials and demonstrate the accelerated design of new ionic conductor materials based on first principles techniques. This computational methodology and workflow can be applied to the materials design of any (e.g. Li+, Na+) fast ion-conducting materials.

  8. Fast Fourier transform based direct integration algorithm for the linear canonical transform

    NASA Astrophysics Data System (ADS)

    Wang, Dayong; Liu, Changgeng; Wang, Yunxin; Zhao, Jie

    2011-03-01

    The linear canonical transform (LCT) is a parameterized linear integral transform, which is the general case of many well-known transforms such as the Fourier transform (FT), the fractional Fourier transform (FRT) and the Fresnel transform (FST). These integral transforms are of great importance in wave propagation problems because they are the solutions of the wave equation under a variety of circumstances. In optics, the LCT can be used to model paraxial free space propagation and other quadratic phase systems such as lenses and graded-index media. A number of algorithms have been presented to compute the LCT quickly. When they are used to compute the LCT, the sampling period in the transform domain is dependent on that in the signal domain. This drawback limits their applicability in some cases such as color digital holography. In this paper, a Fast-Fourier-Transform-based Direct Integration algorithm (FFT-DI) for the LCT is presented. The FFT-DI is a fast computational method of the Direct Integration (DI) for the LCT. It removes the dependency of the sampling period in the transform domain on that in the signal domain. Simulations and experimental results are presented to validate this idea.
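
    For reference (this is the standard textbook definition, not a formula quoted from the paper), the LCT of a signal f(t) with parameter matrix (a, b; c, d), ad - bc = 1 and b != 0, can be written as

      \[
        \mathcal{L}^{(a,b,c,d)}\{f\}(u) \;=\; \frac{1}{\sqrt{i\,2\pi b}} \int_{-\infty}^{\infty} \exp\!\left[\frac{i}{2b}\bigl(a t^{2} - 2ut + d u^{2}\bigr)\right] f(t)\,dt,
      \]

    so that particular parameter choices reduce it, up to normalization conventions, to the FT, the FRT and the FST mentioned above.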

  9. Fast Fourier transform based direct integration algorithm for the linear canonical transform

    NASA Astrophysics Data System (ADS)

    Wang, Dayong; Liu, Changgeng; Wang, Yunxin; Zhao, Jie

    2010-07-01

    The linear canonical transform (LCT) is a parameterized linear integral transform, which is the general case of many well-known transforms such as the Fourier transform (FT), the fractional Fourier transform (FRT) and the Fresnel transform (FST). These integral transforms are of great importance in wave propagation problems because they are the solutions of the wave equation under a variety of circumstances. In optics, the LCT can be used to model paraxial free space propagation and other quadratic phase systems such as lenses and graded-index media. A number of algorithms have been presented to compute the LCT quickly. When they are used to compute the LCT, the sampling period in the transform domain is dependent on that in the signal domain. This drawback limits their applicability in some cases such as color digital holography. In this paper, a Fast-Fourier-Transform-based Direct Integration algorithm (FFT-DI) for the LCT is presented. The FFT-DI is a fast computational method of the Direct Integration (DI) for the LCT. It removes the dependency of the sampling period in the transform domain on that in the signal domain. Simulations and experimental results are presented to validate this idea.

  10. Fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics.

    PubMed

    Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J

    2015-05-15

    Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and a specific medium solution (i.e. Microtox(®)) or low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) while minimizing biomass interference. Dual wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed ferricyanide monitoring without interference from biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.

  11. Raw data based image processing algorithm for fast detection of surface breaking cracks

    NASA Astrophysics Data System (ADS)

    Sruthi Krishna K., P.; Puthiyaveetil, Nithin; Kidangan, Renil; Unnikrishnakurup, Sreedhar; Zeigler, Mathias; Myrach, Philipp; Balasubramaniam, Krishnan; Biju, P.

    2017-02-01

    The aim of this work is to illustrate the contribution of signal processing techniques to the field of Non-Destructive Evaluation. A component's life evaluation is inevitably related to the presence of flaws in it. The detection and characterization of cracks prior to damage is a technologically and economically significant task and is of great importance when it comes to safety-relevant measures. Laser thermography is among the most effective and advanced thermography methods for Non-Destructive Evaluation. Its capability for the detection of surface cracks and for the characterization of the geometry of artificial surface flaws in metallic samples is particularly encouraging. It is a non-contacting, fast, real-time detection method. The presence of a vertical surface-breaking crack disturbs the thermal footprint. The data processing method plays a vital role in the fast detection of surface and sub-surface cracks. Current laser thermographic inspection lacks a suitable data processing algorithm for fast crack detection, and the analysis of data is done as part of post-processing. In this work we introduce a raw-data-based image processing algorithm that yields precise and fast crack detection. The algorithm gives good results on both experimental and modeling data. By applying this algorithm we carried out a detailed investigation of the variation of thermal contrast with crack parameters such as depth and width. The algorithm was applied to surface temperature data from a 2D scanning model, and its credibility was also validated with experimental data.

  12. Automatic pattern localization across layout database and photolithography mask

    NASA Astrophysics Data System (ADS)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, the complete metrology job must be created quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.

  13. A maxent-stress model for graph layout.

    PubMed

    Gansner, Emden R; Hu, Yifan; North, Stephen

    2013-06-01

    In some applications of graph visualization, input edges have associated target lengths. Dealing with these lengths is a challenge, especially for large graphs. Stress models are often employed in this situation. However, the traditional full stress model is not scalable due to its reliance on an initial all-pairs shortest path calculation. A number of fast approximation algorithms have been proposed. While they work well for some graphs, the results are less satisfactory on graphs of intrinsically high dimension, because some nodes may be placed too close together, or even share the same position. We propose a solution, called the maxent-stress model, which applies the principle of maximum entropy to cope with the extra degrees of freedom. We describe a force-augmented stress majorization algorithm that solves the maxent-stress model. Numerical results show that the algorithm scales well, and provides acceptable layouts for large, nonrigid graphs. This also has potential applications to scalable algorithms for statistical multidimensional scaling (MDS) with variable distances.
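
    As a rough illustration (the weights, scaling and schedule for the parameter alpha are as described in the paper; the form below is only schematic), the maxent-stress objective augments the edge-length stress with an entropy-like repulsion over non-adjacent pairs:

      \[
        \min_{x}\; \sum_{(i,j)\in E} w_{ij}\bigl(\lVert x_i - x_j\rVert - d_{ij}\bigr)^{2} \;-\; \alpha \sum_{(i,j)\notin E} \ln \lVert x_i - x_j\rVert,
      \]

    where d_ij are the target edge lengths and w_ij the corresponding weights; the second term supplies the extra constraints that keep non-adjacent nodes from collapsing onto the same position.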

  14. Automatic layout of integrated-optics time-of-flight circuits

    NASA Astrophysics Data System (ADS)

    Kennett-Fogg, Ruth D.

    1995-04-01

    This work describes the architecture and algorithms used in the computer aided design tool developed for the automatic layout of integrated optic, time of flight circuit designs. This is similar to the layout of electronic VLSI circuits, where total wire length and chip area minimization are the major goals. Likewise, total wire length and chip area minimization are also the goals in the layout of time of flight circuits. However, there are two major differences between the layout of time of flight circuits and VLSI circuits. First, the interconnection lengths of time of flight designs are exactly specified in order to achieve the necessary delays for signal synchronization. Secondly, the switching elements are 120 times longer than they are wide. This highly astigmatic aspect ratio causes severe constraints on how and where the switches are placed. The assumed development of integrated corner turning mirrors allows the use of a parallel, row based device placement architecture and a rectangular, fixed grid track system for the connecting paths. The layout process proceeds in two steps. The first step involves the use of a partial circuit graph representation to place the elements in rows, oriented in the direction of the signal flow. After iterative improvement of the placement, the second step proceeds with the routing of the connecting paths. The main problem in the automatic layout of time of flight circuits is achieving the correct path lengths without overlapping previously routed paths. This problem is solved by taking advantage of a certain degree of variability present in each path, allowing the use of simple heuristics to circumvent previously routed paths.

  15. Coach design for the Korean high-speed train: a systematic approach to passenger seat design and layout.

    PubMed

    Jung, E S; Han, S H; Jung, M; Choe, J

    1998-12-01

    Proper ergonomic design of a passenger seat and coach layout for a high-speed train is an essential component that is directly related to passenger comfort. In this research, a systematic approach to the design of passenger seats was described, and a coach layout reflecting the trade-off between transportation capacity and passenger comfort was investigated for the Korean high-speed train. As a result, design recommendations and specifications for the passenger seat and its layout were suggested. The whole design process is composed of four stages. A survey and analysis of design requirements was first conducted, which formed the basis for designing the first- and second-class passenger seats. Prototypes were made and evaluated iteratively, and the seat arrangement and coach layout were finally obtained. The systematic approach and recommendations suggested in this study are expected to be applicable to seat design for public transportation and to help modify and redesign existing vehicular seats.

  16. Fast calculation with point-based method to make CGHs of the polygon model

    NASA Astrophysics Data System (ADS)

    Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji

    2014-02-01

    Holography is a three-dimensional imaging technology. Light waves from an object are recorded and reconstructed by using a hologram. Computer-generated holograms (CGHs), which are made by simulating light propagation on a computer, are able to represent virtual objects. However, an enormous amount of computation time is required to make CGHs. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using a fast Fourier transform (FFT). The calculation of complex objects composed of multiple polygons requires as many FFTs, so unfortunately the calculation time becomes enormous. In contrast, the point-based method can easily express complex objects, but an enormous calculation time is still required. Graphics processing units (GPUs) have been used to speed up the calculations of the point-based method, because a GPU is specialized for parallel computation and the CGH can be calculated independently for each pixel. However, expressing a planar object by the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm to express planar objects by the point-based method with a GPU. The proposed method accelerates the calculation by obtaining the distance between a pixel and a point light source from the adjacent point light source by a difference method. Under certain specified conditions, the difference between adjacent object points becomes constant, so the distance is obtained by additions only. Experimental results showed that the proposed method is more effective than the polygon-based method with FFT when the number of polygons composing an object is high.
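
    For orientation, a minimal (unaccelerated) numpy sketch of the point-based baseline is given below: each hologram pixel accumulates spherical waves from all object points. The wavelength, pixel pitch and object points are illustrative assumptions, and the paper's contribution is precisely to replace the per-point distance evaluation in this loop with incremental additions on the GPU.

      import numpy as np

      # Illustrative parameters (assumptions, not taken from the paper)
      wavelength = 532e-9                 # green laser, metres
      k = 2 * np.pi / wavelength          # wave number
      pitch = 8e-6                        # hologram pixel pitch, metres
      nx, ny = 512, 512                   # hologram resolution

      # A few object points (x, y, z) and their amplitudes
      points = np.array([[0.0, 0.0, 0.05],
                         [1e-3, -5e-4, 0.06]])
      amps = np.array([1.0, 0.8])

      # Hologram pixel coordinates
      xs = (np.arange(nx) - nx / 2) * pitch
      ys = (np.arange(ny) - ny / 2) * pitch
      X, Y = np.meshgrid(xs, ys)

      # Point-based method: superpose spherical waves from every object point
      field = np.zeros((ny, nx), dtype=complex)
      for (px, py, pz), a in zip(points, amps):
          r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)   # per-pixel distance
          field += a * np.exp(1j * k * r) / r                    # spherical wave

      hologram = np.real(field)  # e.g. take the real part as an amplitude CGH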

  17. Optimal Sensor Layouts in Underwater Locomotory Systems

    NASA Astrophysics Data System (ADS)

    Colvert, Brendan; Kanso, Eva

    2015-11-01

    Retrieving and understanding global flow characteristics from local sensory measurements is a challenging but extremely relevant problem in fields such as defense, robotics, and biomimetics. It is an inverse problem in that the goal is to translate local information into global flow properties. In this talk we present techniques for optimization of sensory layouts within the context of an idealized underwater locomotory system. Using techniques from fluid mechanics and control theory, we show that, under certain conditions, local measurements can inform the submerged body about its orientation relative to the ambient flow, and allow it to recognize local properties of shear flows. We conclude by commenting on the relevance of these findings to underwater navigation in engineered systems and live organisms.

  18. Fast Reactor Based on the Self-Sustained Regime of Nuclear Burning Wave

    NASA Astrophysics Data System (ADS)

    Fomin, S. P.; Mel'nik, Yu. P.; Pilipenko, V. V.; Shul'ga, N. F.

    An approach for describing the space-time evolution of the self-organizing nuclear burning wave regime in a critical fast neutron reactor has been developed in the effective multigroup approximation. It is based on solving the non-stationary neutron diffusion equation together with the fuel burn-up equations and the equations of nuclear kinetics for delayed neutron precursor nuclei. The calculations have been carried out in a plane one-dimensional model for a two-zone homogeneous reactor with metal U-Pu fuel, Na coolant, and Fe as the constructional material.
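
    Schematically, in one-group notation (the paper works in an effective multigroup approximation and couples these to burn-up equations for the nuclide densities entering the cross sections), the non-stationary diffusion problem with delayed neutron precursors has the familiar form

      \[
        \frac{1}{v}\frac{\partial \phi}{\partial t} \;=\; \nabla\!\cdot\!\bigl(D\,\nabla\phi\bigr) + \bigl[(1-\beta)\,\nu\Sigma_f - \Sigma_a\bigr]\phi + \sum_i \lambda_i C_i,
        \qquad
        \frac{\partial C_i}{\partial t} \;=\; \beta_i\,\nu\Sigma_f\,\phi - \lambda_i C_i,
      \]

    where \phi is the neutron flux, C_i are the delayed neutron precursor concentrations, and \beta_i, \lambda_i their yields and decay constants.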

  19. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate than the line integral model (LIM) and yields better reconstruction quality. However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersections into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.

  20. A Fast and Precise Indoor Localization Algorithm Based on an Online Sequential Extreme Learning Machine †

    PubMed Central

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427
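
    For context, the sequential update at the core of OS-ELM (a standard result, not specific to this paper) is a recursive least-squares step on the output weights: given a new data chunk with hidden-layer output matrix H_{k+1} and targets T_{k+1},

      \[
        P_{k+1} = P_k - P_k H_{k+1}^{\mathsf T}\bigl(I + H_{k+1} P_k H_{k+1}^{\mathsf T}\bigr)^{-1} H_{k+1} P_k,
        \qquad
        \beta_{k+1} = \beta_k + P_{k+1} H_{k+1}^{\mathsf T}\bigl(T_{k+1} - H_{k+1}\beta_k\bigr),
      \]

    initialized from a small batch via P_0 = (H_0^T H_0)^{-1} and \beta_0 = P_0 H_0^T T_0. It is this cheap incremental update that allows the fingerprint model to track environmental dynamics without a new site survey.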

  1. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-15

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.

  2. Improvement on post-OPC verification efficiency for contact/via coverage check by final CD biasing of metal lines and considering their location on the metal layout

    NASA Astrophysics Data System (ADS)

    Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong

    2011-04-01

    As IC design complexity keeps increasing, it is more and more difficult to ensure the pattern transfer after optical proximity correction (OPC), due to the continuous reduction of layout dimensions and the lithographic limitation of the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase shift masks, and OPC techniques have been developed. In the case of model-based OPC, to cross-confirm the contour image against the target layout, post-OPC verification solutions continue to be developed: methods for contour generation and matching to the target structure, and methods for filtering and sorting patterns to eliminate false errors and duplicate patterns. Detecting only real errors by excluding false errors is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also overall wafer process time. In the general case of post-OPC verification for the metal-contact/via coverage (CC) check, the verification solution outputs a huge number of errors due to borderless design, so it is difficult to review and correct all of them. This can lead the OPC engineer to miss real defects and, at the least, may delay time to market. In this paper, we studied a method for increasing the efficiency of post-OPC verification, especially for the CC check. For metal layers, the final CD after the etch process shows various CD biases depending on the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. Through the optimization of the biasing rule for different pitches and shapes of metal lines, we could get more accurate and efficient verification results and decrease the review time needed to find real errors. In this paper, the suggestion to increase the efficiency of the OPC verification process by using a simple biasing rule on the metal layout instead of an etch model

  3. Simulation Modeling of a Facility Layout in Operations Management Classes

    ERIC Educational Resources Information Center

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  4. Layout and cabling considerations for a large communications antenna array

    NASA Technical Reports Server (NTRS)

    Logan, R. T., Jr.

    1993-01-01

    Layout considerations for a large deep space communications antenna array are discussed. A fractal geometry for the antenna layout is described that provides optimal packing of antenna elements, efficient cable routing, and logical division of the array into identical sub-arrays.

  5. Layout Geometry in Encoding and Retrieval of Spatial Memory

    ERIC Educational Resources Information Center

    Mou, Weimin; Liu, Xianyun; McNamara, Timothy P.

    2009-01-01

    Two experiments investigated whether the spatial reference directions that are used to specify objects' locations in memory can be solely determined by layout geometry. Participants studied a layout of objects from a single viewpoint while their eye movements were recorded. Subsequently, participants used memory to make judgments of relative…

  6. CMOS VLSI Layout and Verification of a SIMD Computer

    NASA Technical Reports Server (NTRS)

    Zheng, Jianqing

    1996-01-01

    A CMOS VLSI layout and verification of a 3 x 3 processor parallel computer has been completed. The layout was done using the MAGIC tool and the verification using HSPICE. Suggestions for expanding the computer into a million processor network are presented. Many problems that might be encountered when implementing a massively parallel computer are discussed.

  7. Layout as Political Expression: Visual Literacy and the Peruvian Press.

    ERIC Educational Resources Information Center

    Barnhurst, Kevin G.

    Newspaper layout and design studies ignore politics, and most studies of newspaper politics ignore visual design. News layout is generally thought to be a set of neutral, efficient practices. This study suggests that the political position of Peruvian newspapers parallels their visual presentation of terrorism. The liberal "La Republica"…

  8. Agriculture Education. Elements of Farm and Building Layout.

    ERIC Educational Resources Information Center

    Stuttgart Public Schools, AR.

    This curriculum guide is designed for group instruction of secondary agricultural education students enrolled in one or two semester-long courses in elements of farm and building layout. The guide presents units of study in the following areas: (1) sketching and drawing equipment, (2) gothic lettering, (3) layout of a standard sheet, (4) job…

  9. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches out-perform it in terms of both solution quality and execution time.
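
    As a rough illustration of the simulated-annealing variant (a sketch only: the instance, move set, penalty weights and cooling schedule below are assumptions, not the paper's), circle centres can be annealed against a weighted sum of container radius, mass imbalance and an overlap penalty:

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative instance: radii and masses of the circular objects (assumptions)
      radii = np.array([1.0, 0.8, 0.6, 0.5, 0.4])
      masses = radii ** 2

      def cost(pos, w_imb=1.0, w_ovl=10.0):
          """Container radius + weighted mass imbalance + overlap penalty."""
          container = np.max(np.linalg.norm(pos, axis=1) + radii)
          imbalance = np.linalg.norm((masses[:, None] * pos).sum(axis=0)) / masses.sum()
          overlap = 0.0
          for i in range(len(radii)):
              for j in range(i + 1, len(radii)):
                  gap = radii[i] + radii[j] - np.linalg.norm(pos[i] - pos[j])
                  overlap += max(gap, 0.0) ** 2
          return container + w_imb * imbalance + w_ovl * overlap

      # Simulated annealing over the circle centres
      pos = rng.uniform(-2, 2, size=(len(radii), 2))
      best, best_cost = pos.copy(), cost(pos)
      T = 1.0
      for step in range(20000):
          cand = pos.copy()
          i = rng.integers(len(radii))
          cand[i] += rng.normal(scale=0.1 * (1 + T), size=2)   # perturb one circle
          dc = cost(cand) - cost(pos)
          if dc < 0 or rng.random() < np.exp(-dc / T):          # Metropolis acceptance
              pos = cand
              if cost(pos) < best_cost:
                  best, best_cost = pos.copy(), cost(pos)
          T *= 0.9997                                           # geometric cooling

      print("container radius:", np.max(np.linalg.norm(best, axis=1) + radii))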

  10. Simplify to survive: prescriptive layouts ensure profitable scaling to 32nm and beyond

    NASA Astrophysics Data System (ADS)

    Liebmann, Lars; Pileggi, Larry; Hibbeler, Jason; Rovner, Vyacheslav; Jhaveri, Tejas; Northrop, Greg

    2009-03-01

    The time-to-market driven need to maintain concurrent process-design co-development, even in spite of discontinuous patterning, process, and device innovation is reiterated. The escalating design rule complexity resulting from increasing layout sensitivities in physical and electrical yield and the resulting risk to profitable technology scaling is reviewed. Shortcomings in traditional Design for Manufacturability (DfM) solutions are identified and contrasted to the highly successful integrated design-technology co-optimization used for SRAM and other memory arrays. The feasibility of extending memory-style design-technology co-optimization, based on a highly simplified layout environment, to logic chips is demonstrated. Layout density benefits, modeled patterning and electrical yield improvements, as well as substantially improved layout simplicity are quantified in a conventional versus template-based design comparison on a 65nm IBM PowerPC 405 microprocessor core. The adaptability of this highly regularized template-based design solution to different yield concerns and design styles is shown in the extension of this work to 32nm with an increased focus on interconnect redundancy. In closing, the work not covered in this paper, focused on the process side of the integrated process-design co-optimization, is introduced.

  11. Profiling Fast Healthcare Interoperability Resources (FHIR) of Family Health History based on the Clinical Element Models

    PubMed Central

    Lee, Jaehoon; Hulse, Nathan C.; Wood, Grant M.; Oniki, Thomas A.; Huff, Stanley M.

    2016-01-01

    In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support exchanging full pedigree-based family health history (FHH) information across multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing FHH information, and derived essential data elements including attributes, constraints, and value sets. We analyzed gaps between the FHH CEM elements and existing FHIR resources. Based on the analysis, we developed a profile that consists of 1) FHIR resources for essential FHH data elements, 2) extensions for additional elements that were not covered by the resources, and 3) a structured definition to integrate patient and family member information in a FHIR message. We implemented the profile using an open-source FHIR framework and validated it using patient-entered FHH data that was captured through a locally developed FHH tool. PMID:28269871

  12. Profiling Fast Healthcare Interoperability Resources (FHIR) of Family Health History based on the Clinical Element Models.

    PubMed

    Lee, Jaehoon; Hulse, Nathan C; Wood, Grant M; Oniki, Thomas A; Huff, Stanley M

    2016-01-01

    In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support exchanging full pedigree-based family health history (FHH) information across multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing FHH information, and derived essential data elements including attributes, constraints, and value sets. We analyzed gaps between the FHH CEM elements and existing FHIR resources. Based on the analysis, we developed a profile that consists of 1) FHIR resources for essential FHH data elements, 2) extensions for additional elements that were not covered by the resources, and 3) a structured definition to integrate patient and family member information in a FHIR message. We implemented the profile using an open-source FHIR framework and validated it using patient-entered FHH data that was captured through a locally developed FHH tool.

  13. Optimal Control Surface Layout for an Aeroservoelastic Wingbox

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.

    2017-01-01

    This paper demonstrates a technique for locating the optimal control surface layout of an aeroservoelastic Common Research Model wingbox, in the context of maneuver load alleviation and active flutter suppression. The combinatorial actuator layout design is solved using ideas borrowed from topology optimization, where the effectiveness of a given control surface is tied to a layout design variable, which varies from zero (the actuator is removed) to one (the actuator is retained). These layout design variables are optimized concurrently with a large number of structural wingbox sizing variables and control surface actuation variables, in order to minimize the sum of structural weight and actuator weight. Results are presented that demonstrate interdependencies between structural sizing patterns and optimal control surface layouts, for both static and dynamic aeroelastic physics.

  14. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
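
    In outline, writing A_{ij} for the connectivity and x_i for the neuron coordinates, the quadratic wiring-cost assumption reduces the layout problem to a constrained quadratic minimization of the form

      \[
        \min_{x}\; \tfrac{1}{2}\sum_{i,j} A_{ij}\,(x_i - x_j)^2 \;=\; \min_{x}\; x^{\mathsf T} L\, x,
      \]

    where L is the graph Laplacian of the connectivity matrix; under suitable constraints (for instance, fixed positions of sensory and motor terminals, or a normalization that excludes the trivial collapsed solution), the minimizers can be expressed analytically through eigenvectors of L. The specific constraints used in the paper are the biologically plausible ones referred to above, but this is the general structure that makes exact solutions possible.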

  15. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction

    PubMed Central

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  16. Fast axial and lateral displacement estimation in myocardial elastography based on RF signals with predictions.

    PubMed

    Zhang, Yaonan; Sun, Tingting; Teng, Yueyang; Li, Hong; Kang, Yan

    2015-01-01

    Myocardial elastography (ME) is a strain imaging technique used to diagnose myocardial diseases. Axial and lateral displacement calculations are pre-conditions of strain image acquisition in ME. W.N. Lee et al. proposed a normalized cross-correlation (NCC) and recorrelation method to obtain both axial and lateral displacements in ME. However, this method is not noise-resistant and has a high computational cost. This paper proposes a predicted fast NCC algorithm based on W.N. Lee's method, with the additions of sum-table NCC and a displacement prediction algorithm, to obtain efficient and accurate axial and lateral displacements. Compared to experiments based on the NCC and recorrelation method, the results indicate that the proposed NCC method is much faster (predicted fast NCC method, 69.75 s for a 520×260 image; NCC and recorrelation method, 1092.25 s for a 520×260 image) and demonstrates better performance in eliminating decorrelation noise (SNR of the axial and lateral strain using the proposed method, 5.87 and 1.25, respectively; SNR of the axial and lateral strain using the NCC and recorrelation method, 1.48 and 1.09, respectively).
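
    For reference, the normalized cross-correlation evaluated at each candidate displacement (u, v) is

      \[
        \gamma(u,v) \;=\; \frac{\sum_{x,y}\bigl[f(x,y)-\bar f_{u,v}\bigr]\bigl[t(x-u,\,y-v)-\bar t\,\bigr]}{\sqrt{\sum_{x,y}\bigl[f(x,y)-\bar f_{u,v}\bigr]^{2}\;\sum_{x,y}\bigl[t(x-u,\,y-v)-\bar t\,\bigr]^{2}}},
      \]

    where t is the reference kernel, \bar t its mean, and \bar f_{u,v} the local mean of the search window. The sum-table formulation precomputes running sums of f and f^2 so that the local means and energies in the denominator cost O(1) per displacement; combined with the displacement prediction that narrows the search range, this is what makes the proposed method fast.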

  17. Predictive-based cross line for fast motion estimation in MPEG-4 videos

    NASA Astrophysics Data System (ADS)

    Fang, Hui; Jiang, Jianmin

    2004-05-01

    Block-based motion estimation is widely used in the field of video compression due to its high processing speed and competitive compression efficiency. In the chain of compression operations, however, motion estimation still remains the most time-consuming process. As a result, any improvement in fast motion estimation will make practical applications of MPEG techniques more efficient and more sustainable in terms of both processing speed and computing cost. To meet the requirements of real-time compression of videos and image sequences, such as video conferencing, remote video surveillance and video phones, we propose a new search algorithm and achieve fast motion estimation for MPEG compression standards based on existing algorithm developments. To evaluate the proposed algorithm, we adopted MPEG-4 and the prediction line search algorithm as the benchmarks for designing the experiments. Their performances are measured by: (i) reconstructed video quality; (ii) processing time. The results reveal that the proposed algorithm provides a competitive alternative to the existing prediction line search algorithm. In comparison with MPEG-4, the proposed algorithm shows significant advantages in terms of processing speed and video quality.

  18. Sample pretreatment and nucleic acid-based detection for fast diagnosis utilizing microfluidic systems.

    PubMed

    Wang, Jung-Hao; Wang, Chih-Hung; Lee, Gwo-Bin

    2012-06-01

    Recently, micro-electro-mechanical-systems (MEMS) technology and micromachining techniques have enabled miniaturization of biomedical devices and systems. Not only do these techniques facilitate the development of miniaturized instrumentation for biomedical analysis, but they also open a new era for integration of microdevices for performing accurate and sensitive diagnostic assays. A so-called "micro-total-analysis-system", which integrates sample pretreatment, transport, reaction, and detection on a small chip in an automatic format, can be realized by combining functional microfluidic components manufactured by specific MEMS technologies. Among the promising applications using microfluidic technologies, nucleic acid-based detection has shown considerable potential recently. For instance, micro-polymerase chain reaction chips for rapid DNA amplification have attracted considerable interest. In addition, microfluidic devices for rapid sample pretreatment prior to nucleic acid-based detection have also achieved significant progress in the recent years. In this review paper, microfluidic systems for sample preparation, nucleic acid amplification and detection for fast diagnosis will be reviewed. These microfluidic devices and systems have several advantages over their large-scale counterparts, including lower sample/reagent consumption, lower power consumption, compact size, faster analysis, and lower per unit cost. The development of these microfluidic devices and systems may provide a revolutionary platform technology for fast sample pretreatment and accurate, sensitive diagnosis.

  19. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  20. Fast Electromagnetic Analysis of MRI Transmit RF Coils Based on Accelerated Integral Equation Methods.

    PubMed

    Villena, Jorge Fernandez; Polimeridis, Athanasios G; Eryaman, Yigitcan; Adalsteinsson, Elfar; Wald, Lawrence L; White, Jacob K; Daniel, Luca

    2016-11-01

    A fast frequency-domain full-wave electromagnetic simulation method is introduced for the analysis of MRI coils loaded with realistic human body models. The approach is based on integral equation methods decomposed into two domains: 1) the RF coil array and shield, and 2) the human body region where the load is placed. The analysis of multiple coil designs is accelerated by introducing precomputed magnetic resonance Green functions (MRGFs), which describe how the particular body model used responds to the incident fields from external sources. These MRGFs, which are precomputed once for a given body model, can be combined with any integral equation solver and reused for the analysis of many coil designs. This approach provides a fast, yet comprehensive, analysis of coil designs, including the port S-parameters and the electromagnetic field distribution within the inhomogeneous body. The method solves the full-wave electromagnetic problem for a head array in a few minutes, achieving a speedup of over 150-fold with root mean square errors in the electromagnetic field maps smaller than 0.4% when compared to the unaccelerated integral equation-based solver. This enables the characterization of a large number of RF coil designs in a reasonable time, which is a first step toward automatic optimization of multiple parameters in the design of transmit arrays, as illustrated in this paper, but also of receive arrays.

  1. Distributed measurement of dynamic strain based on multi-slope assisted fast BOTDA.

    PubMed

    Ba, Dexin; Wang, Benzhang; Zhou, Dengwang; Yin, Mingjing; Dong, Yongkang; Li, Hui; Lu, Zhiwei; Fan, Zhigang

    2016-05-02

    We propose and demonstrate dynamic Brillouin optical fiber sensing based on multi-slope assisted fast Brillouin optical time-domain analysis (F-BOTDA), which enables the measurement of a large strain with real-time data processing. The multi-slope assisted F-BOTDA is realized through double-slope demodulation and frequency-agile modulation, which significantly increases the measurement range compared with the single- or double-slope assisted F-BOTDA, while maintaining the advantage of fast data processing and suitability for real-time on-line monitoring. A maximum strain variation of up to 5000 με is measured in a 32-m fiber with a spatial resolution of ~1 m and a sampling rate of 1 kHz. The frequency of the strain is 12.8 Hz, which is limited by the rotation rate of the motor used to load the force on the fiber. Furthermore, the influence of the frequency difference between two adjacent probe tones on the measurement error is studied theoretically and experimentally for optimization. For a Brillouin gain spectrum with a 78-MHz width, the optimum frequency difference is ~40 MHz. The measurement error of the Brillouin frequency shift is less than 3 MHz over the whole measurement range (241 MHz).

  2. Analysis on spatial transfer model of energy development layout and the ecological footprint affection

    NASA Astrophysics Data System (ADS)

    Wei, Xiaoxia; Zhang, Jinfang

    2017-01-01

    In the context of global energy interconnection, the world is concentrating on clean energy substitution, which mainly focuses on using clean energy in place of fossil energy, changing the global energy layout and the ecological and atmospheric conditions. This research presents a spatial transfer model of the energy development layout to analyse the global energy development layout and its ecological effects. It is a fast and direct method for analysing the energy usage process and its environmental impact. The paper also presents a system dynamics model of energy spatial transfer, which shows that electric power transmission is better than the transportation and use of primary energy, and compares different parameters. Energy spatial transfer can affect the environment directly. Considering three environmental factors, namely energy saving, climate change and conventional pollutant emission reduction, in combination with the spatial transfer model, the environmental change parameters can be obtained; they show that with the wide use of clean energy, the ecological footprint will be affected significantly.

  3. A ZnO nanowire-based photo-inverter with pulse-induced fast recovery

    NASA Astrophysics Data System (ADS)

    Ali Raza, Syed Raza; Lee, Young Tack; Hosseini Shokouh, Seyed Hossein; Ha, Ryong; Choi, Heon-Jin; Im, Seongil

    2013-10-01

    We demonstrate a fast response photo-inverter comprised of one transparent gated ZnO nanowire field-effect transistor (FET) and one opaque FET respectively as the driver and load. Under ultraviolet (UV) light the transfer curve of the transparent gate FET shifts to the negative side and so does the voltage transfer curve (VTC) of the inverter. After termination of UV exposure the recovery of photo-induced current takes a long time in general. This persistent photoconductivity (PPC) is due to hole trapping on the surface of ZnO NWs. Here, we used a positive voltage short pulse after UV exposure, for the first time resolving the PPC issue in nanowire-based photo-detectors by accumulating electrons at the ZnO/dielectric interface. We found that a pulse duration as small as 200 ns was sufficient to reach a full recovery to the dark state from the UV induced state, realizing a fast UV detector with a voltage output.

  4. TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization

    SciTech Connect

    Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X

    2014-06-15

    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations. Yet, the long computational time from repeated dose calculations for a number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited doses are stored separately for each beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough beamlet dose calculation is conducted with only a small number of particles per beamlet. Plan optimization then follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.

  5. Fast Multiplexed Readout of Xmon Qubits Part I: Design

    NASA Astrophysics Data System (ADS)

    Sank, Daniel; Jeffrey, E.; Mutus, J. Y.; White, T. C.; Barends, R.; Kelly, J.; Chen, Y.; Roushan, P.; Campbell, B.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Megrant, A.; Neill, C.; O'Malley, P.; Quintana, C.; Vainsencher, A.; Wenner, J.; Cleland, A. N.; Martinis, J. M.

    2014-03-01

    Realization of a surface code quantum computer requires fast scalable qubit readout. Previous systems have shown accurate readout in continuous wave mode. This neglects the transient response time which is crucial for the operation of the surface code and for measurement accuracy in the presence of finite qubit T1. We have designed a readout system, based on an integrated band pass filter, which achieves very fast transient response while maintaining long qubit T1. Our design uses separate readout resonators for each qubit. This allows individual qubit readout with frequency multiplexing while preventing correlated measurement errors. By connecting each resonator to a single filter the device requires zero additional on chip area and no extra control lines. We present design considerations, theory of operation, and physical layout of the device. With high fidelity gates this system forms the final element needed for a surface code cell.

  6. Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases

    PubMed Central

    Elgendi, Mohamed

    2013-01-01

    The current state-of-the-art automatic QRS detection methods show high robustness and almost negligible error rates. In return, the methods are usually based on machine-learning approaches that require sufficient computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. A typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design. PMID:24066054
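
    A minimal sketch of such a two-moving-average detector is given below; the window lengths, the offset factor beta, and the omission of band-pass pre-filtering are illustrative assumptions rather than the calibrated knowledge base described in the paper.

      import numpy as np

      def detect_qrs(ecg, fs, w_qrs=0.097, w_beat=0.611, beta=0.08):
          """Two-moving-average QRS detector sketch.

          ecg    : 1-D ECG signal (ideally band-pass filtered beforehand)
          fs     : sampling frequency in Hz
          w_qrs, w_beat : window lengths (s) for the QRS- and beat-scale averages
          beta   : offset factor applied to the signal mean (assumed value)
          """
          s = np.asarray(ecg, dtype=float) ** 2          # emphasise QRS energy
          n_qrs = max(1, int(w_qrs * fs))
          n_beat = max(1, int(w_beat * fs))
          ma_qrs = np.convolve(s, np.ones(n_qrs) / n_qrs, mode="same")
          ma_beat = np.convolve(s, np.ones(n_beat) / n_beat, mode="same")
          threshold = ma_beat + beta * s.mean()          # knowledge-based offset

          blocks = ma_qrs > threshold                    # blocks of interest
          peaks, i = [], 0
          while i < len(blocks):
              if blocks[i]:
                  j = i
                  while j < len(blocks) and blocks[j]:
                      j += 1
                  if j - i >= n_qrs:                     # reject blocks shorter than a QRS
                      peaks.append(i + int(np.argmax(s[i:j])))
                  i = j
              else:
                  i += 1
          return np.array(peaks)

    The returned indices mark candidate R peaks; in the paper the two windows and the offset are the only parameters, calibrated from the knowledge base.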

  7. Region-based image denoising through wavelet and fast discrete curvelet transform

    NASA Astrophysics Data System (ADS)

    Gu, Yanfeng; Guo, Yan; Liu, Xing; Zhang, Ye

    2008-10-01

    Image denoising is one of the important research topics in the image processing field. In this paper, the fast discrete curvelet transform (FDCT) and the undecimated wavelet transform (UDWT) are proposed for image denoising. A noisy image is first denoised by FDCT and UDWT separately. The whole image space is then divided into edge regions and non-edge regions. After that, a wavelet transform is performed on the images denoised by FDCT and UDWT respectively. Finally, the resultant image is fused by combining the edge-region wavelet coefficients of the image denoised by FDCT with the non-edge-region wavelet coefficients of the image denoised by UDWT. The proposed method is validated through numerical experiments conducted on standard test images. The experimental results show that the proposed algorithm outperforms wavelet-based and curvelet-based image denoising methods and preserves linear features well.

  8. Theory of ion transport with fast acid-base equilibrations in bioelectrochemical systems.

    PubMed

    Dykstra, J E; Biesheuvel, P M; Bruning, H; Ter Heijne, A

    2014-07-01

    Bioelectrochemical systems recover valuable components and energy in the form of hydrogen or electricity from aqueous organic streams. We derive a one-dimensional steady-state model for ion transport in a bioelectrochemical system, with the ions subject to diffusional and electrical forces. Since most of the ionic species can undergo acid-base reactions, ion transport is combined in our model with infinitely fast ion acid-base equilibrations. The model describes the current-induced ammonia evaporation and recovery at the cathode side of a bioelectrochemical system that runs on an organic stream containing ammonium ions. We identify that the rate of ammonia evaporation depends not only on the current but also on the flow rate of gas in the cathode chamber, the diffusion of ammonia from the cathode back into the anode chamber, through the ion exchange membrane placed in between, and the membrane charge density.
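
    In schematic form (generic notation, not necessarily that of the paper), the model combines a steady-state Nernst-Planck flux for each ionic species i with reaction terms arising from the fast acid-base equilibrations:

      \[
        J_i \;=\; -D_i\left(\frac{\partial c_i}{\partial x} + z_i c_i\,\frac{F}{RT}\,\frac{\partial \phi}{\partial x}\right),
        \qquad
        \frac{\partial J_i}{\partial x} \;=\; R_i,
      \]

    where the reaction rates R_i are eliminated by imposing local equilibrium relations such as K_a = c_{\mathrm{NH_3}}\,c_{\mathrm{H^+}}/c_{\mathrm{NH_4^+}} for the ammonium/ammonia couple, so that each acid-base pair is equilibrated everywhere while the total flux of the pair is conserved.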

  9. A fast and low-power microelectromechanical system-based non-volatile memory device

    PubMed Central

    Lee, Sang Wook; Park, Seung Joo; Campbell, Eleanor E. B.; Park, Yung Woo

    2011-01-01

    Several new generation memory devices have been developed to overcome the low performance of conventional silicon-based flash memory. In this study, we demonstrate a novel non-volatile memory design based on the electromechanical motion of a cantilever to provide fast charging and discharging of a floating-gate electrode. The operation is demonstrated by using an electromechanical metal cantilever to charge a floating gate that controls the charge transport through a carbon nanotube field-effect transistor. The set and reset currents are unchanged after more than 11 h constant operation. Over 500 repeated programming and erasing cycles were demonstrated under atmospheric conditions at room temperature without degradation. Multinary bit programming can be achieved by varying the voltage on the cantilever. The operation speed of the device is faster than a conventional flash memory and the power consumption is lower than other memory devices. PMID:21364559

  10. Theory of ion transport with fast acid-base equilibrations in bioelectrochemical systems

    NASA Astrophysics Data System (ADS)

    Dykstra, J. E.; Biesheuvel, P. M.; Bruning, H.; Ter Heijne, A.

    2014-07-01

    Bioelectrochemical systems recover valuable components and energy in the form of hydrogen or electricity from aqueous organic streams. We derive a one-dimensional steady-state model for ion transport in a bioelectrochemical system, with the ions subject to diffusional and electrical forces. Since most of the ionic species can undergo acid-base reactions, ion transport is combined in our model with infinitely fast ion acid-base equilibrations. The model describes the current-induced ammonia evaporation and recovery at the cathode side of a bioelectrochemical system that runs on an organic stream containing ammonium ions. We identify that the rate of ammonia evaporation depends not only on the current but also on the flow rate of gas in the cathode chamber, the diffusion of ammonia from the cathode back into the anode chamber, through the ion exchange membrane placed in between, and the membrane charge density.

  11. Fast fabrication of curved microlens array using DMD-based lithography

    NASA Astrophysics Data System (ADS)

    Zhang, Zhimin; Gao, Yiqing; Luo, Ningning; Zhong, Kejun

    2016-01-01

    The curved microlens array is the core element of the biologically inspired artificial compound eye. Many existing fabrication processes remain expensive and complicated, which limits broad application of the artificial compound eye. In this paper, we report a fast fabrication method for curved microlens arrays using DMD-based maskless lithography. By projecting a three-dimensional (3D) target curved profile onto a two-dimensional (2D) mask, arbitrary curved microlens arrays can be obtained flexibly and efficiently with DMD-based lithography. In order to verify the feasibility of this method, a curved PDMS microlens array with 90 micro lenslets has been fabricated. The physical and optical characteristics of the fabricated microlens array suggest that this method is potentially suitable for applications in artificial compound eyes.

  12. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-10-01

    The paper presents an improved version of our new method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for this purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image and that this contribution decreases most quickly among all possible bases. So, we lossily compress every KL basis function by Embedded Zerotree Wavelet (EZW) coding with essentially different losses, depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm for KL basis construction for compression of correlated image ensembles that enables our OICKL system to work on common hardware. We also present a procedure for determining the optimal losses of the KL basis functions caused by compression. It uses a modified EZW coder which produces the whole PSNR (bitrate) curve in a single compression pass.
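
    A minimal NumPy sketch of constructing a KL (PCA) basis for a small correlated image set; it illustrates the decomposition the OICKL system builds on, not the paper's fast low-memory construction or the EZW coding of the basis functions.

```python
import numpy as np

def kl_basis(images):
    """Karhunen-Loeve basis of an ensemble of same-sized images.

    images : array of shape (n_images, height, width)
    Returns (basis, coeffs, mean); each image is approximately
    mean + (coeffs[i] @ basis.reshape(n, -1)).reshape(h, w).
    """
    n, h, w = images.shape
    flat = images.reshape(n, -1).astype(float)
    mean = flat.mean(axis=0)
    centered = flat - mean

    # "Snapshot" trick: eigen-decompose the small n x n matrix instead of
    # the (h*w) x (h*w) pixel covariance matrix.
    small_cov = centered @ centered.T / n
    eigval, eigvec = np.linalg.eigh(small_cov)
    order = np.argsort(eigval)[::-1]
    eigvec = eigvec[:, order]

    basis = eigvec.T @ centered                      # one basis function per row
    basis /= np.linalg.norm(basis, axis=1, keepdims=True) + 1e-12
    coeffs = centered @ basis.T                      # KL coefficients per image
    return basis.reshape(n, h, w), coeffs, mean.reshape(h, w)
```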

  13. Fast intensity-modulated arc therapy based on 2-step beam segmentation

    SciTech Connect

    Bratengeier, Klaus; Gainey, Mark; Sauer, Otto A.; Richter, Anne; Flentje, Michael

    2011-01-15

    Purpose: Single or few arc intensity-modulated arc therapy (IMAT) is intended to be a time saving irradiation method, potentially replacing classical intensity-modulated radiotherapy (IMRT). The aim of this work was to evaluate the quality of different IMAT methods with the potential of fast delivery, which also has the possibility of adapting to the daily shape of the target volume. Methods: A planning study was performed. Novel double and triple IMAT techniques based on the geometrical analysis of the target organ at risk geometry (2-step IMAT) were evaluated. They were compared to step and shoot IMRT reference plans generated using direct machine parameter optimization (DMPO). Volumetric arc (VMAT) plans from commercial preclinical software (SMARTARC) were used as an additional benchmark to classify the quality of the novel techniques. Four cases with concave planning target volumes (PTV) with one dominating organ at risk (OAR), viz., the PTV/OAR combination of the ESTRO Quasimodo phantom, breast/lung, spine metastasis/spinal cord, and prostate/rectum, were used for the study. The composite objective value (COV) and other parameters representing the plan quality were studied. Results: The novel 2-step IMAT techniques with geometry based segment definition were as good as or better than DMPO and were superior to the SMARTARC VMAT techniques. For the spine metastasis, the quality measured by the COV differed only by 3%, whereas the COV of the 2-step IMAT for the other three cases decreased by a factor of 1.4-2.4 with respect to the reference plans. Conclusions: Rotational techniques based on geometrical analysis of the optimization problem (2-step IMAT) provide similar or better plan quality than DMPO or the research version of SMARTARC VMAT variants. The results justify pursuing the goal of fast IMAT adaptation based on 2-step IMAT techniques.

  14. Evaluation of Carrying Capacity Land-Based Layout to Mitigate Flood Risk (Case Study in Tempuran Floodplain, Ponorogo Regency)

    NASA Astrophysics Data System (ADS)

    Lusiana, N.

    2013-12-01

    Floods have frequently hit Indonesia and have had increasingly severe negative impacts. In Java, both the area affected by flooding and the amount of damage caused by floods have increased. At least five factors affect flooding in Indonesia, including rainfall, reduced retention capacity of the watershed, erroneous design of river channel development, silting-up of the river, and erroneous regional layout. The level of disaster risk can be evaluated based on the extent of the threat and the susceptibility of a region. One method for risk assessment is Geographical Information System (GIS)-based mapping. The objectives of this research are: 1) evaluating current flood risk in susceptible areas, 2) applying a supported land-based layout as an effort to mitigate flood risk, and 3) evaluating flood risk for 2031 in the Tempuran floodplain of Ponorogo Regency. Results show that the area categorized as high risk covers 104.6 ha (1.2%), moderate risk covers 2512.9 ha (28.4%), low risk covers 3140.8 ha (35.5%), and the lowest risk covers 3096.1 ha (34.9%). Using the Regional Layout Design for the years 2011-2031, the high-risk area covers 67.9 ha (0.8%), moderate risk covers 3033 ha (34.3%), low risk covers 2770.8 ha (31.3%), and the lowest risk covers 2982.6 ha (34%). Based on supported land suitability, the high-risk area is only 2.9 ha (0.1%), moderate risk covers 426.1 ha (4.8%), low risk covers 4207.4 ha (47.5%), and the lowest risk covers 4218 ha (47.6%). Flood risk can be mitigated by applying a supported land-based layout, as shown by the reduced high-risk area and the fact that more than 90% of the area is categorized as low or lowest disaster risk. Keywords: Carrying Capacity, Land Capacity, Flood Risk

  15. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.

    PubMed

    Jabbari, Keyvan; Seuntjens, Jan

    2014-07-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). This code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated using MCNPX as a reference code. While the analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. The fast MC code developed in this work calculates the dose for 10^6 particles in less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer.
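
    A toy sketch of the track-reuse idea, written in NumPy under strong simplifying assumptions (1-D tracks, made-up relative stopping powers, no scattering); in the actual code the stored tracks come from MCNPX simulations, so everything below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relative stopping powers (water = 1.0); illustrative values only.
MATERIALS = {"water": 1.0, "lung": 0.3, "bone": 1.6, "soft_tissue": 1.04, "air": 0.001}

def pregenerate_tracks(n_tracks=1000, n_steps=200):
    """Pre-generate proton tracks in water: per-step energy deposits with
    statistical fluctuations. Stored once, reused for every dose calculation."""
    base = np.linspace(1.0, 3.0, n_steps)            # crude rising depth-dose shape
    return base * rng.normal(1.0, 0.05, size=(n_tracks, n_steps))

def fast_dose(tracks, material_per_voxel, step_cm=0.1, voxel_cm=0.1, n_particles=2000):
    """Deposit pre-generated tracks along a 1-D voxel column, rescaling the
    geometric step length by the local relative stopping power."""
    dose = np.zeros(len(material_per_voxel))
    stop = np.array([MATERIALS[m] for m in material_per_voxel])
    for _ in range(n_particles):
        track = tracks[rng.integers(len(tracks))]    # reuse a stored track
        depth = 0.0
        for de in track:
            v = int(depth / voxel_cm)
            if v >= len(dose):
                break
            dose[v] += de * stop[v]                  # energy loss scaled by material
            depth += step_cm / stop[v]               # steps stretch in light media
    return dose

# Usage: dose along 30 cm of water with a 5 cm lung slab at 10 cm depth.
column = ["water"] * 100 + ["lung"] * 50 + ["water"] * 150
d = fast_dose(pregenerate_tracks(), column)
```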

  16. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX

    PubMed Central

    Jabbari, Keyvan; Seuntjens, Jan

    2014-01-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). This code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated using MCNPX as a reference code. While the analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. The fast MC code developed in this work calculates the dose for 10^6 particles in less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994

  17. Fast template matching based on grey prediction for real-time object tracking

    NASA Astrophysics Data System (ADS)

    Lv, Mingming; Hou, Yuanlong; Liu, Rongzhong; Hou, Runmin

    2017-02-01

    Template matching is a basic algorithm in image processing, and real-time performance is a crucial requirement of object tracking. For real-time tracking, a fast template matching algorithm based on grey prediction is presented, in which the computation cost is reduced dramatically by minimizing the search range. First, the location of the tracked object in the current image is estimated by a Grey Model (GM). GM(1,1), which is the basic model of grey prediction, can use known information to predict the location. Second, the precise position of the object in the frame is computed by template matching. Herein, the Sequential Similarity Detection Algorithm (SSDA) with a self-adaptive threshold is employed to obtain the matching position in the neighborhood of the predicted location. The threshold plays an important role in SSDA, as a proper threshold makes template matching fast and accurate. Moreover, a practical weighted strategy is utilized to handle scale and rotation changes of the object, as well as illumination changes. The experimental results show the superior performance of the proposed algorithm over the conventional full-search method, especially in terms of execution time.
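
    A compact sketch of GM(1,1) one-step-ahead prediction in NumPy, the standard grey model the abstract refers to; predicting the object's next coordinate from its recent history is what narrows the SSDA search window. The example values are hypothetical.

```python
import numpy as np

def gm11_predict(x0):
    """GM(1,1) one-step-ahead prediction from a short positive sequence x0
    (e.g. the last few x-coordinates of the tracked object)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development / control coefficients
    n = len(x0)
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)                    # predicted next value of x0

# Example: predict the next horizontal position from the last five frames.
print(gm11_predict([120, 124, 129, 135, 142]))
```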

  18. Understanding and eliminating the fast creep problem in Fe-based superconductors

    NASA Astrophysics Data System (ADS)

    Civale, Leonardo; Eley, Serena; Maiorov, Boris; Miura, Masashi

    One surprising characteristic of Fe-based superconductors is that they exhibit flux creep rates (S) as large as, or larger than, those found in oxide high temperature superconductors (HTS). This very fast vortex dynamics appears to be inconsistent with the estimate of the influence of the thermal fluctuations as quantified by the Ginzburg number (Gi), which measures the ratio of the thermal energy to the condensation energy in an elemental superconducting volume. In particular, compounds of the AFe2As2 family ("122") have Gi ~ 10^-5 to 10^-4, so S could be expected to lie between that of low Tc materials (where typically Gi ~ 10^-8) and HTS such as YBa2Cu3O7 (Gi ~ 10^-2), as indeed occurs in other superconductors with intermediate fluctuations, such as MgB2 (Gi ~ 10^-6 to 10^-4). We have found the solution to this puzzle: the fast creep rates in 122 compounds are due to non-optimized pinning landscapes. Initial evidence comes from our previous studies showing that the introduction of additional disorder by irradiation decreases creep significantly in 122 single crystals, although still remaining well above the ideal limit. We now have new evidence from 122 thin films demonstrating that S can be reduced to the lower limit set by Gi by appropriate engineering of the pinning landscape.

  19. A ZnO nanowire-based photo-inverter with pulse-induced fast recovery.

    PubMed

    Raza, Syed Raza Ali; Lee, Young Tack; Hosseini Shokouh, Seyed Hossein; Ha, Ryong; Choi, Heon-Jin; Im, Seongil

    2013-11-21

    We demonstrate a fast-response photo-inverter composed of one transparent-gated ZnO nanowire field-effect transistor (FET) and one opaque FET, serving respectively as the driver and the load. Under ultraviolet (UV) light, the transfer curve of the transparent-gate FET shifts to the negative side, and so does the voltage transfer curve (VTC) of the inverter. After termination of UV exposure, the recovery of the photo-induced current generally takes a long time. This persistent photoconductivity (PPC) is due to hole trapping on the surface of the ZnO NWs. Here, we used a short positive voltage pulse after UV exposure, for the first time resolving the PPC issue in nanowire-based photo-detectors by accumulating electrons at the ZnO/dielectric interface. We found that a pulse duration as small as 200 ns was sufficient to reach a full recovery to the dark state from the UV-induced state, realizing a fast UV detector with a voltage output.

  20. Fast single photon avalanche photodiode-based time-resolved diffuse optical tomography scanner

    PubMed Central

    Mu, Ying; Niedre, Mark

    2015-01-01

    Resolution in diffuse optical tomography (DOT) is a persistent problem and is primarily limited by the high degree of light scatter in biological tissue. We showed previously that the reduction in photon scatter between a source and detector pair at early time points following a laser pulse in time-resolved DOT is highly dependent on the temporal response of the instrument. To this end, we developed a new single-photon avalanche photodiode (SPAD) based time-resolved DOT scanner. This instrument uses an array of fast SPADs, a femtosecond Titanium:Sapphire laser and single-photon counting electronics. In combination, the overall instrument temporal impulse response function width was 59 ps. In this paper, we report the design of this instrument and validate its operation in symmetrical and irregularly shaped optical phantoms of approximately small-animal size. We were able to accurately reconstruct the size and position of up to 4 absorbing inclusions, with increasing image quality at earlier time windows. We attribute these results primarily to the rapid response time of our instrument. These data illustrate the potential utility of fast SPAD detectors in time-resolved DOT. PMID:26417526

  1. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    Context. Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. Aims: We present cygrid, a library module for the general purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. Methods: The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. Results: We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n) and analyze the performance gain that is achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Conclusions: Cygrid is a very fast and versatile gridding library that significantly outperforms other third-party Python modules, such as the linear and cubic spline interpolation provided by SciPy. https://github.com/bwinkel/cygrid
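
    A minimal NumPy sketch of convolution-based gridding with a Gaussian kernel, the core operation cygrid accelerates; it omits cygrid's HEALPix lookup tables, spherical geometry and parallelization, uses a plain Cartesian grid, and is not the library's API.

```python
import numpy as np

def grid_data(x, y, values, xg, yg, kernel_sigma):
    """Resample irregular samples (x, y, values) onto the regular grid defined
    by the 1-D axes xg, yg, using Gaussian-weighted convolution."""
    X, Y = np.meshgrid(xg, yg)
    acc = np.zeros_like(X, dtype=float)
    wsum = np.zeros_like(X, dtype=float)
    support = 4.0 * kernel_sigma                     # truncate the kernel tails
    for xi, yi, vi in zip(x, y, values):
        d2 = (X - xi) ** 2 + (Y - yi) ** 2
        w = np.exp(-0.5 * d2 / kernel_sigma ** 2)
        w[d2 > support ** 2] = 0.0
        acc += w * vi
        wsum += w
    with np.errstate(invalid="ignore"):
        return acc / wsum                            # weighted mean per grid cell

# Usage: grid 1000 random samples onto a 64 x 64 map.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1, (2, 1000))
vals = np.sin(6 * x) + rng.normal(0, 0.1, 1000)
axes = np.linspace(0, 1, 64)
image = grid_data(x, y, vals, axes, axes, kernel_sigma=0.03)
```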

  2. Fast phase unwrapping algorithm based on region partition for structured light vision measurement

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Su, Hang

    2014-04-01

    Phase unwrapping is a key problem in phase-shifting profilometry vision measurement for complex object surface shapes. The simple path-following phase unwrapping algorithm is fast but suffers serious unwrapping errors for complex shapes. The Goldstein+flood phase unwrapping algorithm can handle some complex-shape object measurements; however, it is time consuming. We propose a fast phase unwrapping algorithm based on region partition according to a quality map of the wrapped phase. In this algorithm, the wrapped phase image is divided into several regions using partition thresholds, which are determined according to the histogram of quality values. Each region is unwrapped using a simple path-following phase unwrapping algorithm, and several groups with different priorities are generated. These groups are merged in order of decreasing priority, and a final absolute phase is obtained. The proposed method is applied to wrapped phase images of three objects with and without noise. Experiments show that the proposed method is much faster, more accurate, and more robust to noise than the Goldstein+flood algorithm in unwrapping complex phase images.

  3. Fast multichannel astronomical photometer based on silicon photo multipliers mounted at the Telescopio Nazionale Galileo

    NASA Astrophysics Data System (ADS)

    Ambrosino, Filippo; Meddi, Franco; Rossi, Corinne; Sclavi, Silvia; Nesci, Roberto; Bruni, Ivan; Ghedina, Adriano; Riverol, Luis; Di Fabrizio, Luca

    2014-07-01

    The realization of low-cost instruments with high technical performance is a goal that deserves effort in an epoch of fast technological development. Such instruments can be easily reproduced and therefore allow new research programs to be opened at several observatories. We realized a fast optical photometer based on SiPM (Silicon Photo Multiplier) technology, using commercially available modules. Using low-cost components, we developed a custom electronic chain to extract the signal produced by a commercial MPPC (Multi Pixel Photon Counter) module produced by Hamamatsu Photonics, in order to obtain sub-millisecond sampling of the light curve of astronomical sources (typically pulsars). We built a compact mechanical interface to mount the MPPC at the focal plane of the TNG (Telescopio Nazionale Galileo), using the space available for the slits of the LRS (Low Resolution Spectrograph). In February 2014 we observed the Crab pulsar with the TNG using our prototype photometer, deriving its period and the shape of its light curve, in very good agreement with the results obtained in the past with other much more expensive instruments. After the successful run at the telescope, we describe here the lessons learned and the ideas that emerged for optimizing this instrument and making it more versatile.

  4. A fast continuous magnetic field measurement system based on digital signal processors

    SciTech Connect

    Velev, G.V.; Carcagno, R.; DiMarco, J.; Kotelnikov, S.; Lamm, M.; Makulski, A.; Maroussov, V.; Nehring, R.; Nogiec, J.; Orris, D.; Poukhov, O.; Prakoshyn, F.; Schlabach, P.; Tompkins, J.C.; /Fermilab

    2005-09-01

    In order to study dynamic effects in accelerator magnets, such as the decay of the magnetic field during the dwell at injection and the rapid so-called "snapback" during the first few seconds of the resumption of the energy ramp, a fast continuous harmonics measurement system was required. A new magnetic field measurement system, based on the use of digital signal processors (DSP) and analog-to-digital (A/D) converters, was developed and prototyped at Fermilab. This system uses Pentek 6102 16-bit A/D converters and the Pentek 4288 DSP board with the SHARC ADSP-2106 family digital signal processor. It was designed to acquire multiple channels of data with a wide dynamic range of input signals, which are typically generated by a rotating coil probe. Data acquisition is performed under an RTOS, whereas processing and visualization are performed on a host computer. Firmware code was developed for the DSP to perform fast continuous readout of the A/D FIFO memory and integration over specified intervals, synchronized to the probe's rotation in the magnetic field. C, C++ and Java code was written to control the data acquisition devices and to process a continuous stream of data. The paper summarizes the characteristics of the system and presents the results of initial tests and measurements.

  5. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, exploiting the stable character of Earth's ultraviolet radiance and using atmospheric radiation transmission modelling software, the Earth ultraviolet radiation model is simulated at different times and a proper observation band is chosen. Then a fast improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract Earth ultraviolet limb features accurately. The Earth's centroid location in the simulated images is estimated via least-squares fitting using part of the limb edges. Finally, taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel Earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
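
    A sketch of the limb-based centroid estimate using an algebraic (Kasa-style) least-squares circle fit in NumPy; the abstract's Sobel/LBP limb extraction and the EKF stage are not reproduced, and the synthetic limb arc below is purely illustrative.

```python
import numpy as np

def fit_circle(px, py):
    """Least-squares circle fit (algebraic form) to limb-edge pixels.
    Solving x^2 + y^2 + D*x + E*y + F = 0 gives the centre (-D/2, -E/2)."""
    A = np.column_stack((px, py, np.ones_like(px)))
    b = -(px ** 2 + py ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Usage: recover centre and radius from a noisy partial limb arc.
rng = np.random.default_rng(0)
theta = rng.uniform(0.2, 1.8, 300)                    # only part of the limb is visible
px = 512 + 200 * np.cos(theta) + rng.normal(0, 0.5, 300)
py = 480 + 200 * np.sin(theta) + rng.normal(0, 0.5, 300)
print(fit_circle(px, py))                             # expected close to (512, 480, 200)
```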

  6. Beam test results of a 16 ps timing system based on ultra-fast silicon detectors

    NASA Astrophysics Data System (ADS)

    Cartiglia, N.; Staiano, A.; Sola, V.; Arcidiacono, R.; Cirio, R.; Cenna, F.; Ferrero, M.; Monaco, V.; Mulargia, R.; Obertino, M.; Ravera, F.; Sacchi, R.; Bellora, A.; Durando, S.; Mandurrino, M.; Minafra, N.; Fadeyev, V.; Freeman, P.; Galloway, Z.; Gkougkousis, E.; Grabas, H.; Gruey, B.; Labitan, C. A.; Losakul, R.; Luce, Z.; McKinney-Martinez, F.; Sadrozinski, H. F.-W.; Seiden, A.; Spencer, E.; Wilder, M.; Woods, N.; Zatserklyaniy, A.; Pellegrini, G.; Hidalgo, S.; Carulla, M.; Flores, D.; Merlos, A.; Quirion, D.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Zavrtanik, M.

    2017-04-01

    In this paper we report on the timing resolution obtained in a beam test with pions of 180 GeV/c momentum at CERN for the first production of 45 μm thick Ultra-Fast Silicon Detectors (UFSD). UFSD are based on the Low-Gain Avalanche Detector (LGAD) design, employing n-on-p silicon sensors with internal charge multiplication due to the presence of a thin, low-resistivity diffusion layer below the junction. The UFSD used in this test had a pad area of 1.7 mm^2. The gain was measured to vary between 5 and 70 depending on the sensor bias voltage. The experimental setup included three UFSD and a fast trigger consisting of a quartz bar read out by a SiPM. The timing resolution was determined by performing Gaussian fits to the time-of-flight of the particles between one or more UFSD and the trigger counter. For a single UFSD the resolution was measured to be 34 ps for a bias voltage of 200 V, and 27 ps for a bias voltage of 230 V. For the combination of 3 UFSD the timing resolution was 20 ps for a bias voltage of 200 V, and 16 ps for a bias voltage of 230 V.

  7. Fast Coalescent-Based Computation of Local Branch Support from Quartet Frequencies

    PubMed Central

    Sayyari, Erfan; Mirarab, Siavash

    2016-01-01

    Species tree reconstruction is complicated by effects of incomplete lineage sorting, commonly modeled by the multi-species coalescent model (MSC). While there has been substantial progress in developing methods that estimate a species tree given a collection of gene trees, less attention has been paid to fast and accurate methods of quantifying support. In this article, we propose a fast algorithm to compute quartet-based support for each branch of a given species tree with regard to a given set of gene trees. We then show how the quartet support can be used in the context of the MSC to compute (1) the local posterior probability (PP) that the branch is in the species tree and (2) the length of the branch in coalescent units. We evaluate the precision and recall of the local PP on a wide set of simulated and biological datasets, and show that it has very high precision and improved recall compared with multi-locus bootstrapping. The estimated branch lengths are highly accurate when gene tree estimation error is low, but are underestimated when gene tree estimation error increases. Computation of both the branch length and local PP is implemented as new features in ASTRAL. PMID:27189547

  8. Yield-centric layout optimization with precise quantification of lithographic yield loss

    NASA Astrophysics Data System (ADS)

    Kobayashi, Sachiko; Kyoh, Suigen; Kinoshita, Koichi; Urakawa, Yukihiro; Morifuji, Eiji; Kuramoto, Satoshi; Inoue, Soichi

    2008-05-01

    Continuous shrinkage of the design rule in LSI devices brings about greater difficulty in the manufacturing process. Since not only process engineers' efforts but also yield-centric layout optimization is becoming increasingly important, such optimization has recently become a focus of interest. One of the approaches is lithographic hotspot modification in the design data. Using a lithography compliance check and a hotspot fixing system in the early stage of design, a design with a wider process margin can be obtained. In order to achieve higher process yield after hotspot fixing, the layout should be carefully optimized to decrease pattern-dependent yield loss. Since the yield value for the design fluctuates sensitively as the designed patterns are modified, patterns should be optimized based on a comprehensive consideration of yield loss covering parametric, systematic and random effects. In this work, using lithography simulation, a lithographic yield loss model is defined and applied for precise quantification of process yield loss in a 45 nm logic design. Yield loss values of each cell for lithographic, parametric and random effects are estimated, and then layouts across multiple layers are optimized to decrease the total yield loss. As a result, litho-yield loss is greatly improved without deteriorating the total yield value. Thus, a layout is obtained that reflects an awareness of overall process yield.

  9. Learning from graphic designers: using grids as a scaffolding for automatic print layout

    NASA Astrophysics Data System (ADS)

    O'Brien-Strain, Eamonn; Liu, Jerry

    2010-02-01

    We describe an approach for automatically laying out content for high-quality printed formats such as magazines or brochures, producing an aesthetically pleasing layout that correctly conveys the semantic structure of the content and elicits the desired experiential affect in the reader. The semantic structure of the content includes the reading order graph, the association of illustrations with referring paragraphs, and the preservation of perceived text hierarchies. We appropriate a popular conceptual tool used by graphic designers called the grid. A well-designed grid will cause a pleasing uniformity through all the pages of a publication while still allowing flexibility in the layout of each page. In the space of different automatic layout systems, our approach is somewhere between template-based techniques and generative techniques, with the aesthetics determined by the combination of the grid and a generative algorithm. One consequence of using the grid is that it greatly reduces the space of possible layouts from a high-dimensional continuous space to a discrete space. Using a simple greedy algorithm, our first results are promising.

  10. ITER plant layout and site services

    NASA Astrophysics Data System (ADS)

    Chuyanov, V. A.

    2000-03-01

    The ITER site has not yet been determined. Nevertheless, to develop a construction plan and a cost estimate, it is necessary to have a detailed layout of the buildings, structures and outdoor equipment integrated with the balance of plant service systems prototypical of large fusion power plants. These services include electrical power for magnet feeds and plasma heating systems, cryogenic and conventional cooling systems, compressed air, gas supplies, demineralized water, steam and drainage. Nuclear grade facilities are provided to handle tritium fuel and activated waste, as well as to prevent radiation exposure of workers and the public. To prevent interference between services of different types and for efficient arrangement of buildings, structures and equipment within the site area, a plan was developed which segregated different classes of services to four quadrants surrounding the tokamak building, placed at the approximate geographical centre of the site. The locations of the buildings on the generic site were selected to meet all design requirements at minimum total project cost. A similar approach was used to determine the locations of services above, at and below grade. The generic site plan can be adapted to the site selected for ITER without significant changes to the buildings or equipment. Some rearrangements may be required by site topography, resulting primarily in changes to the length of services that link the buildings and equipment.

  11. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
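
    The fast DHT that speeds up each convolver channel can be expressed through the FFT; below is a minimal NumPy sketch of the 2-D real-valued DHT via that standard identity. This is the generic transform only, not the paper's multirate convolver-bank implementation of the Gabor transform.

```python
import numpy as np

def dht2(x):
    """2-D discrete Hartley transform via the FFT, using the identity
    H = Re(F) - Im(F) for the cas kernel cas(t) = cos(t) + sin(t)."""
    F = np.fft.fft2(x)
    return F.real - F.imag

def idht2(H):
    """The DHT is its own inverse up to a 1/N scaling."""
    return dht2(H) / H.size

# Round-trip check on a random real image block.
x = np.random.default_rng(0).normal(size=(8, 8))
assert np.allclose(idht2(dht2(x)), x)
```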

  12. Two linear time, low overhead algorithms for graph layout

    SciTech Connect

    Wylie, Brian; Baumes, Jeff

    2008-01-10

    The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time of their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and a small memory footprint, making them useful for small to large graphs.

  13. The perception of surface layout during low level flight

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1991-01-01

    Although it is fairly well established that information about surface layout can be gained from motion cues, it is not so clear as to what information humans can use and what specific information they should be provided. Theoretical analyses tell us that the information is in the stimulus. It will take more experiments to verify that this information can be used by humans to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect the pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed.

  14. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data stream clustering. They can detect arbitrarily shaped clusters, handle outliers, and do not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.
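
    A minimal sketch of the general density-based stream-clustering idea (online micro-clusters with exponential decay and pruning); it illustrates the class of method the abstract discusses rather than the authors' specific algorithm, and all thresholds and class names below are illustrative.

```python
import numpy as np

class StreamDensityClusterer:
    """Toy online density-based clustering: each arriving point is absorbed by
    the nearest micro-cluster within `radius`, otherwise it seeds a new one.
    Weights decay over time so stale micro-clusters can be pruned."""

    def __init__(self, radius=0.5, decay=0.999, min_weight=0.5):
        self.radius, self.decay, self.min_weight = radius, decay, min_weight
        self.centers, self.weights = [], []

    def insert(self, point):
        point = np.asarray(point, dtype=float)
        self.weights = [w * self.decay for w in self.weights]        # fade old clusters
        if self.centers:
            d = [np.linalg.norm(point - c) for c in self.centers]
            i = int(np.argmin(d))
            if d[i] <= self.radius:
                w = self.weights[i]
                self.centers[i] = (w * self.centers[i] + point) / (w + 1)  # running mean
                self.weights[i] = w + 1
                return
        self.centers.append(point)
        self.weights.append(1.0)

    def clusters(self):
        """Return only micro-clusters dense enough to be reported."""
        return [c for c, w in zip(self.centers, self.weights) if w >= self.min_weight]

# Usage: feed a stream of 2-D sensor readings from two sources.
rng = np.random.default_rng(0)
sdc = StreamDensityClusterer(radius=0.8)
for p in np.vstack([rng.normal([0, 0], 0.2, (300, 2)), rng.normal([5, 5], 0.2, (300, 2))]):
    sdc.insert(p)
print(len(sdc.clusters()))
```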

  15. An optimized fast image resizing method based on content-aware

    NASA Astrophysics Data System (ADS)

    Lu, Yan; Gao, Kun; Wang, Kewang; Xu, Tingfa

    2014-11-01

    In traditional interpolation-based image resizing, the prominent object may become distorted, so content-aware image resizing has become a research focus in image processing because it takes the prominent content and structural features of images into account. In this paper, we present an optimized fast content-aware image resizing method. Firstly, an appropriate energy function model is constructed on the basis of image meshes, and multiple energy constraint templates are established. In addition, this paper derives the image saliency constraints, and the image resizing problem is then reformulated as a convex quadratic programming task. Secondly, a neural-network-based method is presented for solving the convex quadratic program. The corresponding neural network model is constructed; moreover, some sufficient conditions for the stability of the neural network are given. Compared with traditional numerical algorithms such as iterative methods, the neural network method is essentially parallel and distributed, which can speed up the calculation. Finally, the effects of image resizing by the proposed method and by the traditional interpolation-based method are compared using MATLAB. Experimental results show that this method better identifies the prominent object, and the prominent features are preserved effectively after the image is resized. It also has the advantages of high portability and good real-time performance with low visual distortion.

  16. PCM-Based Durable Write Cache for Fast Disk I/O

    SciTech Connect

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick; Li, Dong; Vetter, Jeffrey S; Yu, Weikuan

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage system. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.

  17. Project FAST.

    ERIC Educational Resources Information Center

    Essexville-Hampton Public Schools, MI.

    Described are components of Project FAST (Functional Analysis Systems Training) a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…

  18. Fault Diagnosis of Rolling Bearing Based on Fast Nonlocal Means and Envelop Spectrum

    PubMed Central

    Lv, Yong; Zhu, Qinglin; Yuan, Rui

    2015-01-01

    The nonlocal means (NL-Means) method, which has been widely used in the field of image processing in recent years, effectively overcomes the limitations of the neighborhood filter and eliminates the artifact and edge problems caused by traditional image denoising methods. Although NL-Means is very popular in the field of 2D image signal processing, it has not received enough attention in the field of 1D signal processing. This paper proposes a novel approach that diagnoses the fault of a rolling bearing based on fast NL-Means and the envelope spectrum. The parameters for the rolling bearing signals are optimized in the proposed method, which is the key contribution of this paper. The approach is applied to the fault diagnosis of rolling bearings, and the results show its efficiency at detecting rolling bearing failures. PMID:25585105
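
    A sketch of the envelope-spectrum step in SciPy/NumPy; the fast NL-Means denoising stage and its parameter optimization, which are the paper's contribution, are not shown, and the toy fault signal below is illustrative. Bearing defect frequencies appear as peaks in this spectrum.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(signal, fs):
    """Amplitude spectrum of the signal envelope, obtained via the analytic
    signal (Hilbert transform). `fs` is the sampling frequency in Hz."""
    envelope = np.abs(hilbert(signal))
    envelope -= envelope.mean()                    # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum

# Usage: a toy fault at 107 Hz modulating a 3 kHz structural resonance.
fs = 20000
t = np.arange(0, 1, 1 / fs)
sig = (1 + np.sign(np.sin(2 * np.pi * 107 * t))) * np.sin(2 * np.pi * 3000 * t)
sig += np.random.default_rng(0).normal(0, 0.5, t.size)
freqs, spec = envelope_spectrum(sig, fs)
print(freqs[np.argmax(spec)])                      # expected close to 107 Hz
```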

  19. Fast depth decision for HEVC inter prediction based on spatial and temporal correlation

    NASA Astrophysics Data System (ADS)

    Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi

    2016-07-01

    High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. The spatial correlation utilizes coding tree unit (CTU) splitting information, and the temporal correlation utilizes the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.

  20. A fast and accurate image-based measuring system for isotropic reflection materials

    NASA Astrophysics Data System (ADS)

    Kim, Duck Bong; Kim, Kang Yeon; Park, Kang Su; Seo, Myoung Kook; Lee, Kwan H.

    2008-08-01

    We present a novel image-based BRDF (Bidirectional Reflectance Distribution Function) measurement system for materials that have isotropic reflectance properties. Our proposed system is fast due to its simple setup and automated operation. It also provides wide angular coverage and a noise reduction capability, so that it achieves the accuracy needed for computer graphics applications. We test the uniformity and constancy of the light source and the reciprocity of the measurement system. We perform a photometric calibration of the HDR (High Dynamic Range) camera to recover an accurate radiance map from each HDR image. We verify our proposed system by comparing it with a previous image-based BRDF measurement system. We demonstrate the efficiency and accuracy of our proposed system by generating photorealistic images of the measured BRDF data, which include glossy blue and green plastics, gold-coated metal and gold metallic paints.

  1. Fast approach to infrared image restoration based on shrinkage functions calibration

    NASA Astrophysics Data System (ADS)

    Zhang, Chengshuo; Shi, Zelin; Xu, Baoshu; Feng, Bin

    2016-05-01

    High-quality image restoration in real time is a challenge for infrared imaging systems. We present a fast approach to infrared image restoration based on shrinkage function calibration. Rather than directly modeling the prior of sharp images to obtain the shrinkage functions, we calibrate them for restoration directly by using the acquirable sharp and blurred image pairs from the same infrared imaging system. The calibration method is employed to minimize the sum of squared errors between sharp images and images restored from the blurred images. Our restoration algorithm is noniterative and its shrinkage functions are stored in look-up tables, so a pipelined architecture can work in real time. We demonstrate the effectiveness of our approach by testing its quantitative performance in simulation experiments and its qualitative performance on a developed wavefront coding infrared imaging system.

  2. A robust and fast line segment detector based on top-down smaller eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing

    2014-01-01

    In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state-of-the-art methods, LSD and EDLines.

  3. Proton linac for hospital-based fast neutron therapy and radioisotope production

    SciTech Connect

    Lennox, A.J.; Hendrickson, F.R.; Swenson, D.A.; Winje, R.A.; Young, D.E.; Rush Univ., Chicago, IL; Science Applications International Corp., Princeton, NJ; Fermi National Accelerator Lab., Batavia, IL )

    1989-09-01

    Recent developments in linac technology have led to the design of a hospital-based proton linac for fast neutron therapy. The 180 microamp average current allows the beam to be diverted for radioisotope production during treatments while maintaining an acceptable dose rate. During dedicated operation, dose rates greater than 280 neutron rads per minute are achievable at depth DMAX = 1.6 cm with a source-to-axis distance SAD = 190 cm. The maximum machine energy is 70 MeV, and several intermediate energies are available for optimizing the production of isotopes for Positron Emission Tomography and other medical applications. The linac can be used to produce a horizontal beam, or a gantry can be added to the downstream end of the linac for conventional patient positioning. The 70 MeV protons can also be used for proton therapy of ocular melanomas. 17 refs., 1 fig., 1 tab.

  4. Fast Dynamic Meshing Method Based on Delaunay Graph and Inverse Distance Weighting Interpolation

    NASA Astrophysics Data System (ADS)

    Wang, Yibin; Qin, Ning; Zhao, Ning

    2016-06-01

    A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation. The algorithm maintains the efficiency advantage of Delaunay-graph-mapping mesh deformation while possessing the ability to better control the near-surface mesh quality. The Delaunay graph is used to divide the mesh domain into a number of sub-domains. On each of the sub-domains, inverse distance weighting interpolation is applied to build a much smaller translation matrix between the original mesh and the deformed mesh, resulting in an efficiency for mesh deformation similar to that of the fast Delaunay graph mapping method. The paper shows how the near-wall mesh quality is controlled and improved by the new method, while the computational time is compared with that of the original Delaunay graph mapping method.
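
    A small NumPy sketch of the IDW interpolation step: interior mesh nodes are displaced from known boundary-node displacements, which is the per-sub-domain operation the method applies; the Delaunay graph partitioning itself is not shown, and the example geometry is hypothetical.

```python
import numpy as np

def idw_deform(nodes, boundary_pts, boundary_disp, power=3.0, eps=1e-12):
    """Inverse distance weighting: displace each mesh node as a weighted
    average of the known boundary displacements.

    nodes         : (n, dim) interior node coordinates
    boundary_pts  : (m, dim) boundary node coordinates
    boundary_disp : (m, dim) prescribed boundary displacements
    """
    d = np.linalg.norm(nodes[:, None, :] - boundary_pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                    # closer boundary points dominate
    w /= w.sum(axis=1, keepdims=True)
    return w @ boundary_disp                        # (n, dim) node displacements

# Usage: move an inner wall of a 2-D mesh upward by 0.1 while the outer box stays fixed.
inner = np.column_stack((np.linspace(0.4, 0.6, 10), np.full(10, 0.5)))
outer = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
bpts = np.vstack((inner, outer))
bdisp = np.vstack((np.tile([0.0, 0.1], (10, 1)), np.zeros((4, 2))))
nodes = np.random.default_rng(0).uniform(0, 1, (500, 2))
new_nodes = nodes + idw_deform(nodes, bpts, bdisp)
```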

  5. Analysis of Nickel Based Hardfacing Materials Manufactured by Laser Cladding for Sodium Fast Reactor

    NASA Astrophysics Data System (ADS)

    Aubry, P.; Blanc, C.; Demirci, I.; Dal, M.; Malot, T.; Maskrot, H.

    To improve the operational capacity, maintenance and decommissioning of the future French Sodium Fast Reactor ASTRID, which is under study, it is required to find or develop a cobalt-free hardfacing alloy, and the associated manufacturing process, that will give satisfactory wear performance. This article presents recent results obtained on some selected nickel-based hardfacing alloys manufactured by laser cladding, particularly the Tribaloy 700 alloy. A process parameter search is carried out, together with microstructural analysis of the resulting clads. Particular attention is paid to the solidification of the main precipitates (chromium carbides, boron carbides, Laves phases, …) that mainly contribute to the wear properties of the material. Finally, the wear resistance of some samples is evaluated under simple wear conditions, evidencing promising tribological behavior of Tribaloy 700.

  6. Two-dimensional electronic spectroscopy based on conventional optics and fast dual chopper data acquisition

    NASA Astrophysics Data System (ADS)

    Heisler, Ismael A.; Moca, Roberta; Camargo, Franco V. A.; Meech, Stephen R.

    2014-06-01

    We report an improved experimental scheme for two-dimensional electronic spectroscopy (2D-ES) based solely on conventional optical components and fast data acquisition. This is accomplished by working with two choppers synchronized to a 10 kHz repetition rate amplified laser system. We demonstrate how scattering and pump-probe contributions can be removed during 2D measurements and how the pump probe and local oscillator spectra can be generated and saved simultaneously with each population time measurement. As an example the 2D-ES spectra for cresyl violet were obtained. The resulting 2D spectra show a significant oscillating signal during population evolution time which can be assigned to an intramolecular vibrational mode.

  7. A fast image retrieval method based on SVM and imbalanced samples in filtering multimedia message spam

    NASA Astrophysics Data System (ADS)

    Chen, Zhang; Peng, Zhenming; Peng, Lingbing; Liao, Dongyi; He, Xin

    2011-11-01

    With the rapid development of the Multimedia Messaging Service (MMS), it has become an urgent task to filter Multimedia Message (MM) spam effectively in real time. Since most MMs contain images or videos, an image retrieval-based method for filtering MM spam is given in this paper. The detection method used in this paper is a combination of skin-color detection, texture detection, and face detection, and the classifier for this imbalanced problem is a very fast multi-class classifier combining a Support Vector Machine (SVM) with a unilateral binary decision tree. Experiments on 3 test sets show that the proposed method is effective, with an interception rate of up to 60% and an average detection time of less than 1 second per image.

  8. A fast and scalable content transfer protocol (FSCTP) for VANET based architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, A. F.; Scala, F.; Sottile, C.; Tropea, M.; Raimondo, P.

    2016-05-01

    In modern Vehicular Ad-hoc Network (VANET) based systems, ever more applications require a lot of data to be exchanged among vehicles and infrastructure entities. Due to mobility issues and unplanned events that may occur, it is important that contents are transferred as fast as possible while taking into account the consistency of the exchanged data and the reliability of the connections. To address these issues, in this work we propose a new data transfer protocol called the Fast and Scalable Content Transfer Protocol (FSCTP). This protocol allows data transfer using a bidirectional channel among content suppliers and receivers, exploiting several cooperative sessions. Each session is based on the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP) to start and manage the data transfer. In urban areas the VANET scenario is often composed of several vehicles and infrastructure points. The main idea is to exploit ad-hoc connections between vehicles to reach content suppliers. Moreover, in order to obtain a faster data transfer, more than one session is exploited to achieve a higher transfer rate. Of course, it is important to manage the data transfer between suppliers to avoid redundancy and resource wastage. The main goal is to instantiate a cooperative multi-session layer, efficiently managed in a VANET environment, exploiting the wide coverage area and avoiding common issues known in this kind of scenario. High mobility and unstable connections between nodes are some of the most common issues to address; thus cooperation between the network, transport and application layers needs to be designed.

  9. CRBLASTER: A Fast Parallel-Processing Program for Cosmic Ray Rejection in Space-Based Observations

    NASA Astrophysics Data System (ADS)

    Mighell, K.

    Many astronomical image analysis tasks are based on algorithms that can be described as embarrassingly parallel - where the analysis of one subimage generally does not affect the analysis of another subimage. Yet few parallel-processing astrophysical image-analysis programs exist that can easily take full advantage of today's fast multi-core servers costing a few thousand dollars. One reason for the shortage of state-of-the-art parallel-processing astrophysical image-analysis codes is that the writing of parallel codes has been perceived to be difficult. I describe a new fast parallel-processing image-analysis program called CRBLASTER which does cosmic ray rejection using van Dokkum's L.A.Cosmic algorithm. CRBLASTER is written in C using the industry-standard Message Passing Interface library. Processing a single 800 x 800 Hubble Space Telescope Wide-Field Planetary Camera 2 (WFPC2) image takes 1.9 seconds using 4 processors on an Apple Xserve with two dual-core 3.0-GHz Intel Xeons; the efficiency of the program running with the 4 cores is 82%. The code has been designed to be used as a software framework for the easy development of parallel-processing image-analysis programs using embarrassingly parallel algorithms; all that needs to be done is to replace the core image-processing task (in this case the C function that performs the L.A.Cosmic algorithm) with an alternative image-analysis task based on a single-processor algorithm. I describe the design and implementation of the program and then discuss how it could be used to quickly perform time-critical analysis applications such as those involved with space surveillance, or to do complex calibration tasks as part of the pipeline processing of images from large focal plane arrays.
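
    The framework idea - split the image into subimages, run the same per-tile task on every core, stitch the results - can be sketched in a few lines; the sketch below uses Python's multiprocessing with a trivial per-tile filter standing in for the C/MPI implementation and the L.A.Cosmic kernel described in the abstract, so the filter and function names are illustrative only.

```python
import numpy as np
from multiprocessing import Pool

def process_tile(args):
    """Per-tile task; replace this with the real image-analysis kernel
    (here: a toy 3-sigma clip standing in for cosmic-ray rejection)."""
    index, tile = args
    clipped = np.clip(tile, None, tile.mean() + 3 * tile.std())
    return index, clipped

def run_parallel(image, n_tiles=4, workers=4):
    """Embarrassingly parallel driver: split rows into tiles, process each
    tile on its own worker, then reassemble in the original order."""
    tiles = np.array_split(image, n_tiles, axis=0)
    with Pool(workers) as pool:
        results = pool.map(process_tile, list(enumerate(tiles)))
    results.sort(key=lambda r: r[0])
    return np.vstack([tile for _, tile in results])

if __name__ == "__main__":
    img = np.random.default_rng(0).normal(100, 5, (800, 800))
    img[400, 400] = 5000                       # a fake cosmic-ray hit
    cleaned = run_parallel(img)
```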

  10. Fast GPU-based absolute intensity determination for energy-dispersive X-ray Laue diffraction

    NASA Astrophysics Data System (ADS)

    Alghabi, F.; Send, S.; Schipper, U.; Abboud, A.; Pietsch, U.; Kolb, A.

    2016-01-01

    This paper presents a novel method for fast determination of absolute intensities in the sites of Laue spots generated by a tetragonal hen egg-white lysozyme crystal after exposure to white synchrotron radiation during an energy-dispersive X-ray Laue diffraction experiment. The Laue spots are taken by means of an energy-dispersive X-ray 2D pnCCD detector. Current pnCCD detectors have a spatial resolution of 384 × 384 pixels of size 75 × 75 μm^2 each and operate at a maximum of 400 Hz. Future devices are going to have higher spatial resolution and frame rates. The proposed method runs on a computer equipped with multiple Graphics Processing Units (GPUs) which provide fast and parallel processing capabilities. Accordingly, our GPU-based algorithm exploits these capabilities to further analyse the Laue spots of the sample. The main contribution of the paper is therefore an alternative algorithm for determining absolute intensities of Laue spots which are themselves computed from a sequence of pnCCD frames. Moreover, a new method for integrating spectral peak intensities and improved background correction, a different way of calculating the mean count rate of the background signal and also a new method for n-dimensional Poisson fitting are presented. We present a comparison of the quality of results from the GPU-based algorithm with the quality of results from a prior (base) algorithm running on CPU. This comparison shows that our algorithm is able to produce results with at least the same quality as the base algorithm. Furthermore, the GPU-based algorithm is able to speed up one of the most time-consuming parts of the base algorithm, which is n-dimensional Poisson fitting, by a factor of more than 3. Also, the entire procedure of extracting Laue spots' positions, energies and absolute intensities from a raw dataset of pnCCD frames is accelerated by a factor of more than 3.

  11. Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Huang, Weiling; Chen, Hanming

    2016-07-01

    According to compressive sensing (CS) theory in the signal-processing field, we propose a new CS approach based on a fast projection onto convex sets (FPOCS) algorithm with a sparsity constraint in the seislet transform domain. The seislet transform appears to be the sparsest among the state-of-the-art sparse transforms. FPOCS obtains much faster convergence than conventional POCS (about two thirds of the conventional iterations can be saved) while maintaining the same recovery performance. FPOCS obtains faster and better performance than FISTA for relatively clean data but slower and worse performance than FISTA for noisier data, which provides a reference for deciding which algorithm to use in practice according to the noise level in the seismic data. The seislet-transform-based CS approach can achieve obviously better data recovery results than f - k transform based scenarios, considering signal-to-noise ratio (SNR), local similarity comparison, and visual observation, because of the much sparser structure in the seislet transform domain. We use both synthetic and field data examples to demonstrate the superior performance of the proposed seislet-based FPOCS approach.
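
    A minimal sketch of the POCS iteration for trace interpolation, written with a 2-D FFT as the sparsifying transform since an open seislet implementation is not assumed to be available; the authors' FPOCS acceleration and seislet domain are not reproduced, and the threshold schedule and synthetic gather below are illustrative.

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50, lam_max=0.99, lam_min=0.01):
    """Projection-onto-convex-sets reconstruction of missing traces.

    data : 2-D array with zeros at missing traces
    mask : boolean array, True where traces were actually observed
    Thresholding in the FFT domain enforces sparsity; re-inserting the
    observed samples enforces data consistency.
    """
    model = data.copy()
    for i in range(n_iter):
        coeff = np.fft.fft2(model)
        pct = 100 * (lam_max - (lam_max - lam_min) * i / n_iter)
        thresh = np.percentile(np.abs(coeff), pct)
        coeff[np.abs(coeff) < thresh] = 0.0            # keep only the strongest coefficients
        model = np.real(np.fft.ifft2(coeff))
        model[mask] = data[mask]                       # projection onto the data-consistency set
    return model

# Usage: reconstruct a synthetic gather with 50% of the traces removed.
rng = np.random.default_rng(0)
nt, nx = 128, 64
t = np.arange(nt)[:, None]
clean = np.sin(2 * np.pi * (t + 3 * np.arange(nx)) / 32.0)      # dipping events
mask = np.zeros((nt, nx), dtype=bool)
mask[:, rng.permutation(nx)[: nx // 2]] = True
observed = np.where(mask, clean, 0.0)
rec = pocs_interpolate(observed, mask)
```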

  12. PRIMAL: Fast and accurate pedigree-based imputation from sequence data in a founder population.

    PubMed

    Livne, Oren E; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L

    2015-03-01

    Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effect of disease risk alleles in >1,000 individuals at minimal cost.

  13. A fast region-based active contour model for boundary detection of echocardiographic images.

    PubMed

    Saini, Kalpana; Dewal, M L; Rohit, Manojkumar

    2012-04-01

    This paper presents boundary detection of the atrium and ventricle in echocardiographic images. In the case of mitral regurgitation, the atrium and ventricle may become dilated; to examine this, doctors draw the boundary manually. The aim of this paper is therefore to develop automatic boundary detection for segmenting echocardiographic images. An active contour method is selected for this purpose. Our algorithm is an enhancement of the Chan-Vese model of active contours without edges, but it is much faster, making it possible to detect echocardiographic boundaries far more quickly. The method is based on the region information of an image: the region-based force provides a global segmentation with a variational flow that is robust to noise. The implementation is based on level set theory, so topological changes are easy to handle. The Newton-Raphson method is used, which makes fast boundary detection possible.
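
    The sketch below shows a stripped-down region-based evolution in the spirit of the Chan-Vese model: the level-set function is driven by the region force -(I - c1)^2 + (I - c2)^2 computed from the inside/outside means. The curvature term and the paper's Newton-Raphson acceleration are omitted, so this illustrates only the region force, not the authors' algorithm.

      # Simplified region-based (Chan-Vese-style) contour evolution: the level-set
      # function is pushed by the region force -(I-c1)^2 + (I-c2)^2.  The curvature
      # term and the paper's Newton-Raphson speed-up are intentionally omitted.
      import numpy as np

      def region_based_segment(image, n_iter=200, dt=0.5):
          img = image.astype(float)
          h, w = img.shape
          yy, xx = np.mgrid[:h, :w]
          # initial contour: a circle in the middle, positive inside
          phi = min(h, w) / 4.0 - np.sqrt((xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2)
          for _ in range(n_iter):
              inside = phi > 0
              c1 = img[inside].mean() if inside.any() else 0.0
              c2 = img[~inside].mean() if (~inside).any() else 0.0
              force = -(img - c1) ** 2 + (img - c2) ** 2          # region force
              phi += dt * force / (np.abs(force).max() + 1e-12)   # normalized step
          return phi > 0                                           # segmented region mask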

  14. Effects of simulation-based practice on focused assessment with sonography for trauma (FAST) window identification, acquisition, and diagnosis.

    PubMed

    Chung, Gregory K W K; Gyllenhammer, Ruth G; Baker, Eva L; Savitsky, Eric

    2013-10-01

    We compared the effects of simulator-based virtual ultrasound scanning practice with classroom-based ultrasound scanning practice on participants' knowledge of focused assessment with sonography for trauma (FAST) window quadrants and interpretation, and on participants' performance on live patient FAST examinations. Novices with little or no ultrasound training experience received simulation-based practice (n = 24) or classroom-based practice (n = 24). Participants who received simulation-based practice scored significantly higher on interpreting static images of FAST windows. On live patient examinations, in which participants scanned the right upper quadrant (RUQ), left upper quadrant (LUQ), and suprapubic quadrant of a normal patient and an ascites-positive patient, the classroom-based practice condition had a shorter scan time for the LUQ, more participants attaining a high-quality window on the RUQ (normal patient only) and suprapubic quadrant (positive patient only), and more correct window interpretation on the LUQ (normal patient only). Overall, classroom-based practice appeared to promote physical acquisition skills, and simulator-based practice appeared to promote window interpretation skills. Accurate window interpretation is critical to the identification of blunt abdominal trauma injuries. The simulator used (SonoSimulator) appears promising as a training tool to increase probe time and exposure to FAST windows reflecting various anatomies and disease states.

  15. Supporting the design of office layout meeting ergonomics requirements.

    PubMed

    Margaritis, Spyros; Marmaras, Nicolas

    2007-11-01

    This paper proposes a method and an information technology tool to support the ergonomic layout design of individual workstations in a given space (building). The proposed method shares common ideas with previous generic methods for office layout. However, it goes a step further and focuses on alleviating the cognitive tasks that have to be carried out by the designer or the design team. This is achieved in two ways: (i) by decomposing the layout design problem into six main stages, during which only a limited number of variables and requirements are considered, and (ii) by converting the ergonomics requirements into functional design guidelines. The information technology tool (ErgoOffice 0.1) automates certain phases of the layout design process and supports the design team either through its editing and graphical facilities or by providing adequate memory support.

  16. 122. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF EXTENSION TO PIER Sheet 4 of 11 (#3276) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  17. 121. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF EXISTING PIER Sheet 3 of 11 (#3275) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  18. 120. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. PLAN OF IMPROVEMENT, HUNTINGTON BEACH MUNICIPAL PIER: LAYOUT OF EXISTING PIER Sheet 2 of 11 (#3274) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  19. Layout of barracks, Building No. 909 (right) and Building No. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Layout of barracks, Building No. 909 (right) and Building No. 910 (left), looking 282 degrees west - Presidio of San Francisco, Enlisted Men's Barracks Type, West end of Crissy Field, between Pearce & Maudlin Streets, San Francisco, San Francisco County, CA

  20. 10. Floor Layout of Thermal Hydraulics Laboratory, from The Thermal ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Floor Layout of Thermal Hydraulics Laboratory, from The Thermal Hydraulics Laboratory at Hanford. General Electric Company, Hanford Atomic Products Operation, Richland, Washington, 1961. - D-Reactor Complex, Deaeration Plant-Refrigeration Buildings, Area 100-D, Richland, Benton County, WA

  1. 20. BUILDINGS 243-247. PRIMER DRYHOUSES. HEATING LAYOUT. October 16, 1917 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. BUILDINGS 243-247. PRIMER DRYHOUSES. HEATING LAYOUT. October 16, 1917 - Frankford Arsenal, Building Nos. 242-246A, South side Craig Road between Eakin & Walbach Streets, Philadelphia, Philadelphia County, PA

  2. 19. BUILDINGS 243-247. PRIMER DRYHOUSES. BUILDING LAYOUT. February 16, 1917 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. BUILDINGS 243-247. PRIMER DRYHOUSES. BUILDING LAYOUT. February 16, 1917 - Frankford Arsenal, Building Nos. 242-246A, South side Craig Road between Eakin & Walbach Streets, Philadelphia, Philadelphia County, PA

  3. 18. BUILDING 243-247. PRIMER DRYHOUSES. GENERAL LAYOUT. February 16, 1917 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. BUILDING 243-247. PRIMER DRYHOUSES. GENERAL LAYOUT. February 16, 1917 - Frankford Arsenal, Building Nos. 242-246A, South side Craig Road between Eakin & Walbach Streets, Philadelphia, Philadelphia County, PA

  4. 32. INTERIOR LAYOUT PLAN OF CROSSCUT STEAM AND DIESEL PLANT, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. INTERIOR LAYOUT PLAN OF CROSSCUT STEAM AND DIESEL PLANT, TRACED FROM DRAWING BY C.C. MOORE AND CO., ENGINEERS. July 1947 - Crosscut Steam Plant, North side Salt River near Mill Avenue & Washington Street, Tempe, Maricopa County, AZ

  5. 35. CONDUIT LAYOUT FOR BASCULE General overview with motors, brakes, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    35. CONDUIT LAYOUT FOR BASCULE General overview with motors, brakes, etc. Courtesy of Norwood Noonan Company, Chicago, 1930. - Congress Street Bascule Bridge, Spanning Fort Point Channel at Congress Street, Boston, Suffolk County, MA

  6. Ad Layout Students Become "Artists" with Viewer Device

    ERIC Educational Resources Information Center

    Engel, Jack

    1977-01-01

    Suggests that the use of a projection viewer employed by professional art studios to make revised enlargements or reductions of existing art can improve the appearance of layouts done by creative, but artistically unskilled, students. (KS)

  7. 29. TRACK LAYOUT, INDEX TO DRAWINGS AND INDEX TO MATERIALS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. TRACK LAYOUT, INDEX TO DRAWINGS AND INDEX TO MATERIALS, REED & STEM ARCHITECTS, ST. PAUL, NEW YORK, 1909 (Burlington Northern Collection, Seattle, Washington) - Union Passenger Station Concourse, 1713 Pacific Avenue, Tacoma, Pierce County, WA

  8. Intelligent data layout mechanism for high-performance image retrieval

    NASA Astrophysics Data System (ADS)

    Leung, Kelvin T.; Tao, Wenchao; Yang, Limin; Kimme-Smith, Carolyn; Bassett, Lawrence W.; Valentino, Daniel J.

    1998-06-01

    Trends in medical imaging indicate that the storage requirements for digital medical datasets call for a more efficient, scalable storage architecture for large-scale RIS/PACS to support high-speed retrieval by multiple concurrent clients. As storage and networking technologies mature, the cost of applying such technologies in medical imaging has become more economically viable. We propose to take advantage of such economies of scale in technology to provide an effective networked workstation storage solution achieving (1) faster display and navigation response time, (2) higher server throughput, and (3) better data storage management. Full-field direct digital mammography presents a challenging problem in the design of digital workstation systems for screening and diagnosis. Due to the spatial and contrast resolution required for mammography, the digital images are large (exceeding 5K x 6K x 14 bits, approximately 60 MB per image) and therefore difficult to display using commercially available technology. We are developing clinically useful methods of storing, displaying and manipulating large digital images in a medical media server using commercial technology. In this paper we propose an Intelligent Grid-based Data Layout Mechanism to optimize the total response time of a reading by minimizing the image access time (data I/O time) and the number of data access requests to the server (queueing effects) during image navigation. A Navigation Threads Model is developed to characterize the performance of the many navigation threads involved in the course of a reading session. In our grid-based data layout approach, a large 2D direct-digital mammogram is divided spatially into many small 2D grids and stored across an array of magnetic disks to provide parallel grid-based readout services to clients. Such a grid-based approach not only provides fine-granularity control, but also provides a means of collecting statistical information about
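
    As a toy illustration of the grid-based layout idea, the sketch below splits a large image into fixed-size tiles and stripes the tiles round-robin across an array of disks so that neighbouring tiles can be read in parallel. The tile size and disk count are arbitrary placeholders, not values from the paper.

      # Minimal sketch of a grid-based layout: a large image is split into fixed-size
      # tiles and the tiles are striped round-robin across an array of disks, so
      # that neighbouring tiles of a view can be read in parallel.
      import numpy as np

      def grid_layout(image, tile=512, n_disks=8):
          """Return a dict mapping disk index -> list of ((row, col), tile_array)."""
          placement = {d: [] for d in range(n_disks)}
          h, w = image.shape[:2]
          idx = 0
          for r in range(0, h, tile):
              for c in range(0, w, tile):
                  block = image[r:r + tile, c:c + tile]
                  placement[idx % n_disks].append(((r, c), block))
                  idx += 1
          return placement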

  9. A Fast Multi-Object Extraction Algorithm Based on Cell-Based Connected Components Labeling

    NASA Astrophysics Data System (ADS)

    Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku

    We describe a cell-based connected component labeling algorithm that calculates the 0th and 1st moment features as attributes of the labeled regions. These features indicate the regions' sizes and positions for multi-object extraction. Exploiting the additivity of moment features, the cell-based labeling algorithm labels divided cells of a certain size in an image by scanning the image only once, obtaining the moment features of the labeled regions with remarkably reduced computational complexity and memory consumption. Our algorithm is a simple one-time-scan cell-based labeling algorithm, which is suitable for hardware and parallel implementation. We also compared it with conventional labeling algorithms. The experimental results show that our algorithm is faster than conventional raster-scan labeling algorithms.
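
    The following sketch shows the moment features in question: connected components are labeled and the 0th moment (area) and 1st moments (centroid) are computed per label. SciPy's standard labeling routine is used as a stand-in; the paper's single-scan cell-based algorithm obtains the same features with much less work.

      # Moment features used for multi-object extraction: label the connected
      # components and compute the 0th moment (area) and 1st moments (centroid)
      # per label.  SciPy's standard labeling stands in for the paper's one-scan
      # cell-based algorithm.
      import numpy as np
      from scipy import ndimage

      def object_features(binary_image):
          labels, n = ndimage.label(binary_image)
          feats = []
          for lab in range(1, n + 1):
              ys, xs = np.nonzero(labels == lab)
              area = ys.size                      # 0th moment
              cy, cx = ys.mean(), xs.mean()       # 1st moments / area = centroid
              feats.append((lab, area, (cy, cx)))
          return feats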

  10. 58. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    58. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-5 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 3RD, 3A, 4TH LEVELS. DRAWING NO. AL-5 - SHEET 6 OF 21 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  11. 57. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    57. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-3 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 1ST FLOOR AND 1ST FLOOR ROOF. DRAWING NO. AL-3 - SHEET 4 OF 21. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  12. 59. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    59. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-6 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 4-A, 5TH & 5-A. DRAWING NO. AL-6 - SHEET 7 OF 21. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  13. 44. Photograph of a line drawing. 'PLAN LAYOUT OF PART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    44. Photograph of a line drawing. 'PLAN LAYOUT OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS H-1 TO H-10 INCL., GRINDING, MANUFACTURING AREA, PLANT 'B'.' From U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  14. 36. Photograph of a line drawing. 'PLAN LAYOUT OF PART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    36. Photograph of a line drawing. 'PLAN LAYOUT OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS E-1 TO E-10 INCL., WASHING, MANUFACTURING AREA PLANT 'B'.' From the U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  15. 27. Photograph of a line drawing. 'PLAN LAYOUT AND CROSS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. Photograph of a line drawing. 'PLAN LAYOUT AND CROSS SECTION OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS C-1, C-3, C-5, C-6, C-7, C-9 INCL., MIXING, MANUFACTURING AREA, PLANT 'B'.' From the U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  16. 31. Photograph of a line drawing. 'PLAN LAYOUT OF PART ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. Photograph of a line drawing. 'PLAN LAYOUT OF PART III, SECTION 1, EQUIPMENT LAYOUT, BUILDINGS D-1 TO D-10 INCL., NITRATION, MANUFACTURING AREA, PLANT 'B'.' From U.S. Army Corps of Engineers. Industrial Facilities Inventory, Holston Ordnance Works, Kingsport, Tennessee. Plant B, Parts II, III. (Nashville, TN: Office of the District Engineer, 1944). - Holston Army Ammunition Plant, RDX-and-Composition-B Manufacturing Line 9, Kingsport, Sullivan County, TN

  17. Sensitive and Fast Humidity Sensor Based on A Redox Conducting Supramolecular Ionic Material for Respiration Monitoring.

    PubMed

    Yan, Hailong; Zhang, Li; Yu, Ping; Mao, Lanqun

    2017-01-03

    Real-time monitoring of respiratory rate (RR) is highly important for human health, clinical diagnosis, and fundamental scientific research. Exhaled humidity-based RR monitoring has recently attracted increased attention because of its accuracy and portability. Here, we report a new design of an exhaled humidity sensor for the real-time monitoring of the RR based on a synthetic redox conducting supramolecular ionic material (SIM). The humidity-dependent conducting SIM is prepared by ionic self-assembly in aqueous solutions of electroactive 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and 1,10-bis(3-methylimidazolium-1-yl) decane (C10(mim)2). By taking full advantage of the high hygroscopicity and water stability arising from the ionic and hydrophobic interactions between two building blocks (i.e., ABTS and C10(mim)2), the SIM-based humidity sensor exhibits both high sensitivity (less than 0.1% relative humidity) and fast response time (∼37 ms). These excellent properties allow this humidity sensor to noninvasively monitor the RRs of not only humans but also rats that have a much faster RR and much smaller tidal volume than humans. Moreover, this sensor could also be efficiently used for the real-time monitoring of the recovery process of rats from anesthesia.

  18. Diffuse correlation spectroscopy with a fast Fourier transform-based software autocorrelator

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bi, Renzhe; Ho, Jun Hui; Thong, Patricia S. P.; Soo, Khee-Chee; Lee, Kijoon

    2012-09-01

    Diffuse correlation spectroscopy (DCS) is an emerging noninvasive technique that probes the deep tissue blood flow, by using the time-averaged intensity autocorrelation function of the fluctuating diffuse reflectance signal. We present a fast Fourier transform (FFT)-based software autocorrelator that utilizes the graphical programming language LabVIEW (National Instruments) to complete data acquisition, recording, and processing tasks. The validation and evaluation experiments were conducted on an in-house flow phantom, human forearm, and photodynamic therapy (PDT) on mouse tumors under the acquisition rate of ˜400 kHz. The software autocorrelator in general has certain advantages, such as flexibility in raw photon count data preprocessing and low cost. In addition to that, our FFT-based software autocorrelator offers smoother starting and ending plateaus when compared to a hardware correlator, which could directly benefit the fitting results without too much sacrifice in speed. We show that the blood flow index (BFI) obtained by using a software autocorrelator exhibits better linear behavior in a phantom control experiment when compared to a hardware one. The results indicate that an FFT-based software autocorrelator can be an alternative solution to the conventional hardware ones in DCS systems with considerable benefits.
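
    A minimal software autocorrelator along these lines can be written with the Wiener-Khinchin theorem: the zero-padded FFT of the photon-count trace gives the linear autocorrelation, which is then normalized per lag. The normalization convention below is one common choice and is an assumption, not necessarily the one used in the paper.

      # Minimal FFT-based software autocorrelator (Wiener-Khinchin): compute the
      # normalised intensity autocorrelation g2(tau) of a photon-count trace.
      # Zero-padding avoids circular wrap-around; normalisation is per overlap.
      import numpy as np

      def g2_fft(counts, max_lag):
          counts = np.asarray(counts, dtype=float)
          n = counts.size
          f = np.fft.rfft(counts, 2 * n)                 # zero-padded transform
          raw = np.fft.irfft(f * np.conj(f))[:max_lag]   # linear autocorrelation sums
          overlap = n - np.arange(max_lag)               # samples contributing per lag
          mean_left = np.array([counts[: n - k].mean() for k in range(max_lag)])
          mean_right = np.array([counts[k:].mean() for k in range(max_lag)])
          return raw / (overlap * mean_left * mean_right)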

  19. Infrared image guidance for ground vehicle based on fast wavelet image focusing and tracking

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu

    2009-08-01

    We studied infrared image guidance for a ground vehicle based on fast wavelet image focusing and tracking. We use the image from an uncooled infrared imager mounted on a two-axis gimbal system and a newly developed autofocusing algorithm based on the Daubechies wavelet transform. The new focusing algorithm processes the high-pass (detail) output of the Daubechies wavelet transform to detect objects directly. This focusing smoothly provides distance information about the outside world, and the gimbal system provides the direction of objects in the outside world in a spherical coordinate system. We installed the system on a hand-made electric ground-vehicle platform powered by a 24 VDC battery. The electric vehicle is equipped with rotary encoder units and inertial rate sensor units for accurate navigation. The image tracking also uses the new wavelet focusing within several image-processing steps. The hand-made electric ground-vehicle platform is about 1 m long, 0.75 m wide and 1 m high, and weighs about 50 kg. We tested the infrared image guidance based on the new wavelet image focusing and tracking with the electric vehicle both indoors and outdoors, and the tests show good results.
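
    A simple wavelet focus measure in this spirit is sketched below: the energy of the Daubechies detail coefficients of each frame is used as a sharpness score, and the frame (or lens position) with the highest score is taken as in focus. PyWavelets is assumed to be available, and the choice of 'db4' is illustrative, not the paper's setting.

      # Wavelet focus measure sketch: the energy of the Daubechies detail
      # coefficients of a frame is used as a sharpness score, and the frame with
      # the highest score is taken as "in focus".  Uses PyWavelets (pywt).
      import numpy as np
      import pywt

      def wavelet_focus_score(frame):
          _, (ch, cv, cd) = pywt.dwt2(frame.astype(float), "db4")
          return float(np.sum(ch ** 2) + np.sum(cv ** 2) + np.sum(cd ** 2))

      def best_focus(frames):
          scores = [wavelet_focus_score(f) for f in frames]
          return int(np.argmax(scores)), scores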

  20. Robust and fast license plate detection based on the fusion of color and edge feature

    NASA Astrophysics Data System (ADS)

    Cai, De; Shi, Zhonghan; Liu, Jin; Hu, Chuanping; Mei, Lin; Qi, Li

    2014-11-01

    Extracting the license plate is an important stage in automatic vehicle identification. Image degradation and the computational intensity involved make this task difficult. In this paper, a robust and fast license plate detection method based on the fusion of color and edge features is proposed. Based on the dichromatic reflection model, two new color ratios computed from the RGB color model are introduced and proved to be color invariants. The global color feature extracted with the new color invariants improves the method's robustness, while the local Sobel edge feature guarantees its accuracy. In the experiments, the detection performance is good: the results show that the method is robust to illumination, object geometry and disturbances around the license plates, and it can detect license plates even when the color of the car body is the same as the color of the plates. The processing time for an image of 1000 x 1000 pixels is nearly 0.2 s. Based on the comparison, the performance of the new ratios is comparable to that of the commonly used HSI color model.

  1. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    Deterministic simulation based on the ray tracing technique is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably to simulate a tomographic acquisition or, even more, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs; simulated radiographs can typically be obtained in a split second on a simple personal computer. Program summary - Program title: X-ray. Catalogue identifier: AEAD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 416 257. No. of bytes in distributed program, including test data, etc.: 6 018 263. Distribution format: tar.gz. Programming language: C (Visual C++). Computer: Any PC; tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM. Operating system: Windows XP. Classification: 14, 21.1. Nature of problem: Radiographic simulation of voxelized objects based on the ray tracing technique. Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations. Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
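
    The ray-box intersection at the heart of such a projector is usually the classic slab test, sketched below for an axis-aligned box; it returns the parametric entry and exit distances along the ray, or None if the ray misses. This is a generic textbook routine, not the code distributed with the paper.

      # Classic slab-method intersection of a ray with an axis-aligned box, the
      # kind of core routine a voxel-driven X-ray projector relies on.  Returns
      # the parametric entry/exit distances (t_near, t_far) or None if missed.
      import numpy as np

      def ray_box_intersect(origin, direction, box_min, box_max):
          origin, direction = np.asarray(origin, float), np.asarray(direction, float)
          inv = np.where(direction != 0, 1.0 / direction, np.inf)
          t0 = (np.asarray(box_min, float) - origin) * inv
          t1 = (np.asarray(box_max, float) - origin) * inv
          t_near = np.max(np.minimum(t0, t1))   # latest entry over the three slabs
          t_far = np.min(np.maximum(t0, t1))    # earliest exit over the three slabs
          if t_near > t_far or t_far < 0:
              return None                        # ray misses the box (or box is behind)
          return t_near, t_far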

  2. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend emerging in many real-world contexts, extending from web mining, gene expression analysis and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality has an impeding effect on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM.
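
    A rough sketch of the general idea, under the assumption that hidden-node weights are taken from the right singular vectors of random data subsets and the output weights are solved by least squares as in a standard ELM, is given below. The authors' exact node construction and solver may differ from this sketch.

      # Rough sketch of SVD-derived ELM hidden nodes: hidden-layer weights are
      # taken from the right singular vectors of random subsets of the data, and
      # the output weights are solved by least squares as in a standard ELM.
      import numpy as np

      def fsvd_elm_train(X, T, n_hidden=100, subset_size=500, seed=0):
          rng = np.random.default_rng(seed)
          weights = []
          while len(weights) < n_hidden:
              idx = rng.choice(X.shape[0], size=min(subset_size, X.shape[0]),
                               replace=False)
              _, _, vt = np.linalg.svd(X[idx], full_matrices=False)
              weights.extend(vt)                     # singular directions become nodes
          W = np.array(weights[:n_hidden]).T         # (n_features, n_hidden)
          H = np.tanh(X @ W)                         # hidden-layer activations
          beta = np.linalg.pinv(H) @ T               # ELM least-squares output weights
          return W, beta

      def fsvd_elm_predict(X, W, beta):
          return np.tanh(X @ W) @ beta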

  3. GPU-based fast Monte Carlo dose calculation for proton therapy.

    PubMed

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B

    2012-12-07

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.

  4. GPU-based fast Monte Carlo dose calculation for proton therapy

    NASA Astrophysics Data System (ADS)

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.

    2012-12-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ˜1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.

  5. Revisiting the layout decomposition problem for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu; Yao, Hailong

    2008-10-01

    In double patterning lithography (DPL) layout decomposition for 45nm and below process nodes, two features must be assigned opposite colors (corresponding to different exposures) if their spacing is less than the minimum coloring spacing [5, 11, 14]. However, there exist pattern configurations for which pattern features separated by less than the minimum coloring spacing cannot be assigned different colors. In such cases, DPL requires that a layout feature be split into two parts. We address this problem using a layout decomposition algorithm that incorporates integer linear programming (ILP), phase conflict detection (PCD), and node-deletion bipartization (NDB) methods. We evaluate our approach on both real-world and artificially generated testcases in 45nm technology. Experimental results show that our proposed layout decomposition method effectively decomposes given layouts to satisfy the key goals of minimized line-ends and maximized overlap margin. There are no design rule violations in the final decomposed layout. While we have previously reported other facets of our research on DPL pattern decomposition [6], the present paper differs from that work in the following key respects: (1) instead of detecting conflict cycles and splitting nodes in conflict cycles to achieve graph bipartization [6], we split all nodes of the conflict graph at all feasible dividing points and then formulate a problem of bipartization by ILP, PCD [8] and NDB [9] methods; and (2) instead of reporting unresolvable conflict cycles, we report the number of deleted conflict edges to more accurately capture the needed design changes in the experimental results.
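
    The conflict-graph core that all such decomposers share can be sketched as follows: an edge joins two features whose spacing is below the coloring threshold, and a BFS 2-coloring either succeeds or reports the conflicting edges (odd cycles) that force a feature split. The ILP, PCD and NDB machinery of the paper is not reproduced; feature geometry is reduced to centroids for illustration only.

      # Sketch of the DPL conflict-graph core: build an edge between two features
      # whose spacing is below the colouring threshold, then try to 2-colour the
      # graph by BFS.  Edges that cannot be satisfied (odd cycles) are reported.
      from collections import deque

      def decompose(features, dmin):
          """features: list of (x, y) centroids; returns (colours, conflict_edges)."""
          n = len(features)
          adj = [[] for _ in range(n)]
          for i in range(n):
              for j in range(i + 1, n):
                  dx = features[i][0] - features[j][0]
                  dy = features[i][1] - features[j][1]
                  if (dx * dx + dy * dy) ** 0.5 < dmin:
                      adj[i].append(j)
                      adj[j].append(i)
          colour = [-1] * n
          conflicts = []
          for start in range(n):
              if colour[start] != -1:
                  continue
              colour[start] = 0
              queue = deque([start])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if colour[v] == -1:
                          colour[v] = 1 - colour[u]
                          queue.append(v)
                      elif colour[v] == colour[u] and u < v:
                          conflicts.append((u, v))    # odd cycle: a split is needed
          return colour, conflicts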

  6. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings, respectively; (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  7. Large-scale double-patterning compliant layouts for DP engine and design rule development

    NASA Astrophysics Data System (ADS)

    Cork, Christopher; Lucas, Kevin; Hapli, John; Raffard, Herve; Barnes, Levi

    2009-03-01

    Double patterning is seen as the prime technology for keeping Moore's law on track while EUV technology matures into production worthiness. As previously seen for alternating phase-shift mask technology [1], layout compliance for double patterning is not trivial [2,3], and blind shrinks of anything but the most simplistic existing layouts will not be directly suitable for double patterning. Evaluating a production-worthy double patterning engine on highly non-compliant layouts would put unrealistic expectations on that engine and provide metrics with poor applicability to eventual large designs. The true production use case is for designs that already have significant double patterning compliance enforced at the design stage. With this in mind, a set of ASIC design blocks of different sizes and complexities were created that were double patterning compliant. To achieve this, a set of standard cells was generated, which individually and in isolation were double patterning compliant for multiple layers simultaneously. This was done using the automated standard cell creation tool Cadabra [4]. To create a full ASIC, however, additional constraints were added to make sure compliance would not be broken across the boundaries between standard cells when placed next to each other [5]. These standard cells were then used to create a variety of double patterning compliant ASICs using IC Compiler to place the cells correctly. Now, with a compliant layout, checks were made to see whether the constraints made at the micro level really do ensure a fully compliant layout on the whole chip, and whether the coloring engine can cope with such large datasets. A production-worthy double patterning engine is ideally distributable over multiple processors [6,7] so that fast turn-around time can be achieved on even the largest designs. We demonstrate the degree of linearity of scaling achievable with our double patterning engine. These results can be understood

  8. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and find it hard to achieve good time performance, especially on large datasets. Thus the existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance.
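
    The kernel target alignment criterion itself is easy to state in code; the sketch below evaluates it for a Gaussian kernel over a simple grid of bandwidths. The grid search merely illustrates the criterion and stands in for the paper's specially structured global optimization; labels are assumed to be in {-1, +1}.

      # Kernel target alignment for a Gaussian kernel, evaluated over a grid of
      # bandwidth parameters.  The grid search is an illustration only; the paper
      # instead solves a specially structured global optimisation problem.
      import numpy as np

      def gaussian_kernel(X, gamma):
          sq = np.sum(X ** 2, axis=1)
          d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
          return np.exp(-gamma * np.maximum(d2, 0.0))

      def kernel_target_alignment(K, y):
          Y = np.outer(y, y)                        # ideal target kernel yy^T
          return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

      def best_gamma(X, y, gammas):
          scores = [kernel_target_alignment(gaussian_kernel(X, g), y) for g in gammas]
          return gammas[int(np.argmax(scores))], scores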

  9. Fast Wavelet Based Functional Models for Transcriptome Analysis with Tiling Arrays

    PubMed Central

    Clement, Lieven; De Beuf, Kristof; Thas, Olivier; Vuylsteke, Marnik; Irizarry, Rafael A.; Crainiceanu, Ciprian M.

    2013-01-01

    For a better understanding of the biology of an organism, a complete description is needed of all regions of the genome that are actively transcribed. Tiling arrays are used for this purpose. They allow for the discovery of novel transcripts and the assessment of differential expression between two or more experimental conditions such as genotype, treatment, tissue, etc. In the tiling array literature, many efforts are devoted to transcript discovery, whereas more recent developments also focus on differential expression. To our knowledge, however, no methods for tiling arrays have been described that can simultaneously assess transcript discovery and identify differentially expressed transcripts. In this paper, we adapt wavelet-based functional models to the context of tiling arrays. The high dimensionality of the data led us to avoid inference based on Bayesian MCMC methods. Instead, we introduce a fast empirical Bayes method that provides adaptive regularization of the functional effects. A simulation study and a case study illustrate that our approach is well suited for the simultaneous assessment of transcript discovery and differential expression in tiling array studies, and that it outperforms methods that accomplish only one of these tasks. PMID:22499683

  10. Fast video shot boundary detection based on SVD and pattern matching.

    PubMed

    Lu, Zhe-Ming; Shi, Yong

    2013-12-01

    Video shot boundary detection (SBD) is the first and essential step for content-based video management and structural analysis. Great efforts have been devoted to developing SBD algorithms over the years. However, the high computational cost of SBD becomes an obstacle to further applications such as video indexing, browsing, retrieval, and representation. Motivated by the requirements of real-time interactive applications, a unified fast SBD scheme is proposed in this paper. We adopt candidate segment selection and singular value decomposition (SVD) to speed up SBD. Initially, the positions of the shot boundaries and the lengths of gradual transitions are predicted using adaptive thresholds, and most non-boundary frames are discarded at the same time. Only the candidate segments that may contain shot boundaries are preserved for further detection. Then, for all frames in each candidate segment, their color histograms in the hue-saturation-value (HSV) space are extracted, forming a frame-feature matrix. SVD is then performed on the frame-feature matrices of all candidate segments to reduce the feature dimension. The refined feature vector of each frame in the candidate segments is obtained as a new metric for boundary detection. Finally, cut and gradual transitions are identified using our pattern matching method based on a new similarity measurement. Experiments on TRECVID 2001 test data and other video materials show that the proposed scheme can achieve a high detection speed and excellent accuracy compared with recent SBD schemes.
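
    A compact version of the SVD-based boundary check is sketched below: per-frame HSV histograms are stacked into a frame-feature matrix, projected onto the leading singular directions, and a drop in cosine similarity between adjacent frames is flagged as a cut. Frames are assumed to be already converted to HSV in [0, 1], and the bin counts and threshold are illustrative; the paper's candidate selection and gradual-transition matching are not reproduced.

      # Sketch of SVD-based cut detection: per-frame HSV histograms are stacked
      # into a frame-feature matrix, projected onto the top singular directions,
      # and a drop in cosine similarity between adjacent frames is flagged as a cut.
      import numpy as np

      def frame_histogram(hsv_frame, bins=(8, 4, 4)):
          pixels = hsv_frame.reshape(-1, 3)
          hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 1)] * 3)
          return hist.ravel() / pixels.shape[0]

      def detect_cuts(hsv_frames, k=10, threshold=0.8):
          A = np.array([frame_histogram(f) for f in hsv_frames])   # frames x features
          _, _, vt = np.linalg.svd(A, full_matrices=False)
          reduced = A @ vt[:k].T                                    # reduced features
          cuts = []
          for i in range(len(reduced) - 1):
              a, b = reduced[i], reduced[i + 1]
              sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
              if sim < threshold:
                  cuts.append(i + 1)                                # boundary before frame i+1
          return cuts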

  11. Accurate and fast simulation of channel noise in conductance-based model neurons by diffusion approximation.

    PubMed

    Linaro, Daniele; Storace, Marco; Giugliano, Michele

    2011-03-01

    Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next-generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion conductances into an effective stochastic version whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us for the first time to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here.

  12. Reference Beam Pattern Design for Frequency Invariant Beamforming Based on Fast Fourier Transform

    PubMed Central

    Zhang, Wang; Su, Tao

    2016-01-01

    In the field of fast Fourier transform (FFT)-based frequency invariant beamforming (FIB), there is still an unsolved problem. That is the selection of the reference beam to make the designed wideband pattern frequency invariant (FI) over a given frequency range. This problem is studied in this paper. The research shows that for a given array, the selection of the reference beam pattern is determined by the number of sensors and the ratio of the highest frequency to the lowest frequency of the signal (RHL). The length of the weight vector corresponding to a given reference beam pattern depends on the reference frequency. In addition, the upper bound of the weight length to ensure the FI property over the whole frequency band of interest is also given. When the constraints are added to the reference beam, it does not affect the FI property of the designed wideband beam as long as the symmetry of the reference beam is ensured. Based on this conclusion, a scheme for reference beam design is proposed. PMID:27669242

  13. A Fast Radiative Transfer Model for the Meteor- M satellite-based hyperspectral IR sounders

    NASA Astrophysics Data System (ADS)

    Uspensky, A. B.; Rublev, A. N.; Rusin, E. V.; Pyatkin, V. P.

    2014-12-01

    The methodological and computational aspects of Fast Radiative Transfer Model (FRTM) development, designed for the analysis and validation of measurements from satellite-based hyperspectral IR sounders of high spectral resolution, are considered. A description of the FRTM is given for the analysis and modeling of measurements from the IRFS-2 IR Fourier spectrometer on polar-orbiting meteorological satellites of the Meteor-M series, based on the known RTTOV FRTM. The computational efficiency is estimated and the results of the verification of the developed FRTM are presented; they were obtained from a comparison of model simulations with exact line-by-line calculations for the IRFS-2 IR sounder. The increase in computational performance and accuracy of the FRTM obtained by applying algorithms of the principal component method is discussed. The construction of radiative models that use the Monte Carlo method and are applicable to the analysis and modeling of IR sounder data under cloudy conditions in the instrument field of view is also considered.

  14. Reference Beam Pattern Design for Frequency Invariant Beamforming Based on Fast Fourier Transform.

    PubMed

    Zhang, Wang; Su, Tao

    2016-09-22

    In the field of fast Fourier transform (FFT)-based frequency invariant beamforming (FIB), there is still an unsolved problem. That is the selection of the reference beam to make the designed wideband pattern frequency invariant (FI) over a given frequency range. This problem is studied in this paper. The research shows that for a given array, the selection of the reference beam pattern is determined by the number of sensors and the ratio of the highest frequency to the lowest frequency of the signal (RHL). The length of the weight vector corresponding to a given reference beam pattern depends on the reference frequency. In addition, the upper bound of the weight length to ensure the FI property over the whole frequency band of interest is also given. When the constraints are added to the reference beam, it does not affect the FI property of the designed wideband beam as long as the symmetry of the reference beam is ensured. Based on this conclusion, a scheme for reference beam design is proposed.

  15. Estimates for Pu-239 loadings in burial ground culverts based on fast/slow neutron measurements

    SciTech Connect

    Winn, W.G.; Hochel, R.C.; Hofstetter, K.J.; Sigg, R.A.

    1989-08-15

    This report provides guideline estimates for Pu-239 mass loadings in selected burial ground culverts. The relatively high recorded Pu-239 contents of these culverts have been appraised as suspect with respect to criticality concerns, because they were assayed only with the solid waste monitor (SWM) via gamma-ray counting. After 1985, subsequent waste was also assayed with the neutron coincidence counter (NCC), and a comparison of the assay methods showed that the NCC generally yielded higher assays than the SWM. These higher NCC readings signaled a need to conduct non-destructive, non-intrusive nuclear interrogations of these culverts, and a technical team conducted scoping measurements to illustrate potential assay methods based on neutron and/or gamma counting. A fast/slow neutron method has been developed to estimate the Pu-239 in the culverts. In addition, the loading records include the SWM assays of all Pu-239 cuts for some of the culvert drums, and these data are useful in estimating the corresponding NCC drum assays from NCC vs. SWM data. Together, these methods yield predictions based on direct measurements and statistical inference.

  16. Fast k-space-based evaluation of imaging properties of ultrasound apertures

    NASA Astrophysics Data System (ADS)

    Zapf, M.; Dapp, R.; Hardt, M.; Henning, P. A.; Ruiter, N. V.

    2011-03-01

    At the Karlsruhe Institute of Technology (KIT) a three-dimensional ultrasound computer tomography (3D USCT) system for early breast cancer diagnosis is being developed. This method promises reproducible volume images of the female breast in 3D. Initial measurements and a simulation-based optimization method, which took several physical properties into account, led to a new aperture setup. Yet this simulation is computationally too demanding to systematically evaluate the different 'virtual' apertures that can be achieved by rotation and lifting of the system. In optics, a Fourier-based approach is available to simulate imaging systems as linear systems. For the two apertures used in our project and one hypothetical linear array aperture, this concept was evaluated and compared to a reference simulation. An acceptable conformity between the new approach and the reference simulation could be shown. With this approach, a fast evaluation of optimal 'virtual' apertures for specific measurement objects and imaging constraints can be carried out within an acceptable time.

  17. Fast optimization method based on the diffuser dot density for uniformity of the backlight module.

    PubMed

    Huang, Bing-Le; Guo, Tai-Liang

    2016-02-20

    A fast optimization method based on the diffuser dot density (DDD) for the uniformity of the backlight module (BLM) is proposed in this paper. First, the relationship between the light-emerging efficiency and the DDD is analyzed, and a simulation model employed to acquire a series of simulation data is constructed. Second, a mathematical method is adopted to fit this relationship, and a polynomial relationship is derived. Finally, an algorithm to adjust the DDD and optimize the uniformity of the BLM based on the DDD is constructed. The simulation results show that after only three optimization iterations the uniformity of the BLM can reach 85.6%, and the experimental results indicate that the proposed algorithm improves the uniformity rapidly. The final experimental result is that the uniformity after the third optimization reaches 77.4%, which satisfies the 75% target for the BLM design phase. Compared to the conventional optimization method, the proposed method can speed up the procedure and lower the cost of developing the BLM for liquid-crystal displays.
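
    A minimal sketch of the two ingredients described above, under illustrative assumptions about the polynomial order and the correction rule, is given below: a polynomial is fitted between dot density and simulated luminance, and the density map is rescaled toward the target luminance. Neither the fitted coefficients nor the correction rule are taken from the paper.

      # Sketch of the dot-density adjustment idea: fit a polynomial between the
      # diffuser dot density and the simulated local luminance, then rescale the
      # density map toward the target luminance (illustrative assumptions only).
      import numpy as np

      def fit_density_luminance(densities, luminances, order=3):
          # polynomial model of luminance as a function of dot density
          return np.poly1d(np.polyfit(densities, luminances, order))

      def adjust_density(density_map, measured_luminance, target_luminance):
          # simple proportional correction toward the target, clipped to [0, 1]
          scale = target_luminance / np.maximum(measured_luminance, 1e-9)
          return np.clip(density_map * scale, 0.0, 1.0)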

  18. Specification of hierarchical-model-based fast quarter-pixel motion estimation

    NASA Astrophysics Data System (ADS)

    Cho, Junsang; Suh, Jung W.; Jeon, Gwanggil; Jeong, Jechang

    2010-06-01

    We propose a robust and fast quarter-pixel motion estimation algorithm. This algorithm is an advanced version of the previously proposed model-based quarter-pixel motion estimation (MBQME). MBQME has many advantages in computational complexity, running speed, and hardware implementation, but it has the problem that it cannot find quarter-pixel positions that lie beyond the half-pixel positions. That is one of the limitations of model-based motion estimation methods, and it leads to both peak-SNR degradation and bit-rate increase. To solve this problem, we propose a hierarchical mathematical model with minimum interpolations. Through this model, we can determine a motion vector at every quarter-pixel point, which is fully compatible with the quarter-pixel motion estimation method in international video coding standards such as MPEG-4 and H.264/AVC. The simulation results show that the proposed method yields almost the same or even better peak-SNR performance than full-search quarter-pixel motion estimation, with much lower computational complexity.

  19. Accurate calculation and Matlab based fast realization of merit function's Hesse matrix for the design of multilayer optical coating

    NASA Astrophysics Data System (ADS)

    Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong

    2009-09-01

    To improve the low speed and poor efficiency of current domestic multilayer optical coating design when the number of layers is large, the accurate calculation and fast realization of the merit function's gradient and Hessian matrix are addressed. Based on the matrix method for calculating the spectral properties of a multilayer optical coating, an analytic model is established theoretically, and the corresponding accurate and fast computation is achieved by programming in Matlab. Theoretical and simulated results indicate that this model is mathematically rigorous and accurate, with precision limited only by the computer's floating-point arithmetic, and that it is fast. It is therefore well suited to improving the search speed and efficiency of local optimization methods based on the derivatives of the merit function, and it shows outstanding performance in multilayer optical coating design with a large number of layers.

  20. 32 CFR 553.7 - Design and layout of Army national cemeteries.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Design and layout of Army national cemeteries... RESERVATIONS AND NATIONAL CEMETERIES ARMY NATIONAL CEMETERIES § 553.7 Design and layout of Army national cemeteries. (a) General cemetery layout plans, landscape planting plans and gravesite layout plans for...

  1. Fast computation of Hessian-based enhancement filters for medical images.

    PubMed

    Yang, Shih-Feng; Cheng, Ching-Hsue

    2014-10-01

    This paper presents a method for fast computation of Hessian-based enhancement filters, whose conditions for identifying particular structures in medical images are associated only with the signs of Hessian eigenvalues. The computational costs of Hessian-based enhancement filters come mainly from the computation of Hessian eigenvalues corresponding to image elements to obtain filter responses, because computing eigenvalues of a matrix requires substantial computational effort. High computational cost has become a challenge in the application of Hessian-based enhancement filters. Using a property of the characteristic polynomial coefficients of a matrix and the well-known Routh-Hurwitz criterion in control engineering, it is shown that under certain conditions, the response of a Hessian-based enhancement filter to an image element can be obtained without having to compute Hessian eigenvalues. The computational cost can thus be reduced. Experimental results on several medical images show that the method proposed in this paper can reduce significantly the number of computations of Hessian eigenvalues and the processing times of images. The percentage reductions of the number of computations of Hessian eigenvalues for enhancing blob- and tubular-like structures in two-dimensional images are approximately 90% and 65%, respectively. For enhancing blob-, tubular-, and plane-like structures in three-dimensional images, the reductions are approximately 97%, 75%, and 12%, respectively. For the processing times, the percentage reductions for enhancing blob- and tubular-like structures in two-dimensional images are approximately 31% and 7.5%, respectively. The reductions for enhancing blob-, tubular-, and plane-like structures in three-dimensional images are approximately 68%, 55%, and 3%, respectively.
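
    For the two-dimensional bright-blob case, the sign conditions can be evaluated without an eigendecomposition: both Hessian eigenvalues are negative exactly when the trace is negative and the determinant is positive. The sketch below applies this test with Gaussian-derivative Hessians from SciPy; it is a generic illustration of the idea rather than the paper's exact filter, and the scale parameter is an arbitrary choice.

      # Eigenvalue-free sign test: for a 2x2 symmetric Hessian, both eigenvalues
      # are negative (bright blob-like structure) exactly when trace(H) < 0 and
      # det(H) > 0, so the filter can be gated without an eigendecomposition.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def blob_mask_2d(image, sigma=2.0):
          img = image.astype(float)
          hxx = gaussian_filter(img, sigma, order=(0, 2))   # d^2/dx^2
          hyy = gaussian_filter(img, sigma, order=(2, 0))   # d^2/dy^2
          hxy = gaussian_filter(img, sigma, order=(1, 1))   # d^2/dxdy
          trace = hxx + hyy
          det = hxx * hyy - hxy * hxy
          return (trace < 0) & (det > 0)   # both eigenvalues negative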

  2. Fast Mean-Shift Based Classification of Very High Resolution Images: Application to Forest Cover Mapping

    NASA Astrophysics Data System (ADS)

    Boukir, S.; Jones, S.; Reinke, K.

    2012-07-01

    This paper presents a new unsupervised classification method which aims to effectively and efficiently map remote sensing data. The Mean-Shift (MS) algorithm, a non-parametric density-based clustering technique, is at the core of our method. This powerful clustering algorithm has been successfully used for both the classification and the segmentation of gray-scale and color images during the last decade. However, very little work has been reported regarding the performance of this technique on remotely sensed images. The main disadvantage of the MS algorithm lies in its high computational cost. Indeed, it is based on an optimization procedure to determine the modes of the pixel density. To investigate the MS algorithm in the difficult context of very high resolution remote sensing imagery, we use a fast version of this algorithm which has been recently proposed, namely the Path-Assigned Mean Shift (PAMS). This algorithm is up to 5 times faster than other fast MS algorithms while inducing a low loss in quality compared to the original MS version. To compensate for this loss, we propose to use the K modes (cluster centroids) obtained after convergence of the PAMS algorithm as an initialization of a K-means clustering algorithm. The latter converges very quickly to a refined solution to the underlying clustering problem. Furthermore, it does not suffer the main drawback of the classic K-means algorithm (the number of clusters K needs to be specified) as K is automatically determined via the MS mode-seeking procedure. We demonstrate the effectiveness of this two-stage clustering method in performing automatic classification of aerial forest images. Both individual bands and band combination trials are presented. When compared to the classical PAMS algorithm, our technique is better in terms of classification quality. The improvement in classification is significant both visually and statistically. The whole classification process is performed in a few seconds on
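
    The two-stage idea (mode seeking to fix K, then a seeded K-means refinement) can be sketched with off-the-shelf components; scikit-learn's MeanShift stands in here for the faster PAMS variant used in the paper, and the bandwidth value is an assumption.

        import numpy as np
        from sklearn.cluster import KMeans, MeanShift

        def mean_shift_then_kmeans(pixels, bandwidth=0.1):
            """Stage 1: mean shift finds the K modes (K is not specified by the user).
            Stage 2: K-means seeded with those modes quickly refines the clustering."""
            ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(pixels)
            modes = ms.cluster_centers_
            km = KMeans(n_clusters=len(modes), init=modes, n_init=1).fit(pixels)
            return km.labels_, km.cluster_centers_

        # pixels: (n_pixels, n_bands) array, e.g. image.reshape(-1, image.shape[-1]) / 255.0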

  3. (abstract) A Low-Cost Mission to 2060 Chiron Based on the Pluto Fast Flyby

    NASA Technical Reports Server (NTRS)

    Stern, S. A.; Salvo, C. G.; Wallace, R. A.; Weinstein, S. S.; Weissman, P. R.

    1994-01-01

    The Pluto Fast Flyby-based mission to Chiron described in this paper is a low cost, scientifically rewarding, focused mission in the outer solar system. The proposed mission will make a flyby of 2060 Chiron, an active 'comet' with over 10^4 times the mass of Halley, and an eccentric, Saturn-crossing orbit which ranges from 8.5 to 19 AU. This mission concept achieves the flyby 4.2 years after launch on a direct trajectory from Earth, is independent of Jupiter launch windows, and fits within Discovery cost guidelines. This mission offers the scientific opportunity to examine a class of object left unsampled by the trail-blazing Mariners, Pioneers, Voyagers, and missions to Halley. Spacecraft reconnaissance of Chiron addresses unique objectives relating to cometary science, other small bodies, the structure of quasi-bound atmospheres on modest-sized bodies, and the origin of primitive bodies and the giant planets. Owing to Chiron's large size (180 km) ... based on the opportunity to use the planned Pluto Flyby spare spacecraft and a Proton Expendable Launch Vehicle (ELV) (the Pluto spacecraft is being designed to be compatible with a Proton launch). Backup

  4. Human Factors Analysis and Layout Guideline Development for the Canadian Surface Combatant (CSC) Project

    DTIC Science & Technology

    2013-04-01

    The room layout design process described in this Section (3) is largely based on the methodology described in ISO 11064 – Ergonomic design of control...Control Room Design and Ergonomics [12]. To this end, such pieces of equipment should be considered as part of the communications analysis to assess...Consider a seated workstation; the following could be considered critical anthropometric dimensions for each aspect of the design problem: Eye

  5. The relationship between contrast, resolution and detectability in accelerator-based fast neutron radiography

    SciTech Connect

    Ambrosi, R. M.; Watterson, J. I. W.

    1999-06-10

    Fast neutron radiography as a method for non-destructive testing is a fast-growing field of research. At the Schonland Research Center for Nuclear Sciences we have been engaged in the formulation of a model for the physics of image formation in fast neutron radiography (FNR). This involves examining all the various factors that affect image formation in FNR by experimental and Monte Carlo methods. One of the major problems in the development of a model for fast neutron radiography is the determination of the factors that affect image contrast and resolution. Monte Carlo methods offer an ideal tool for the determination of the origin of many of these factors. In previous work the focus of these methods has been the determination of the scattered neutron field in both a scintillator and a fast neutron radiography facility. As an extension of this work, MCNP has been used to evaluate the role that neutron scattering in a specimen plays in image detectability. Image processing of fast neutron radiographs is a necessary method of enhancing the detectability of features in an image. MCNP has been used to determine the part it can play in indirectly improving image resolution and aiding in image processing. The role noise plays in fast neutron radiography and its impact on image reconstruction has also been evaluated. All these factors aid in the development of a model describing the relationship between contrast, resolution and detectability.

  6. SU-E-T-806: Very Fast GPU-Based IMPT Dose Computation

    SciTech Connect

    Sullivan, A; Brand, M

    2015-06-15

    Purpose: Designing particle therapy treatment plans is a dosimetrist-in-the-loop optimization wherein the conflicting constraints of achieving a desired tumor dose distribution must be balanced against the need to minimize the dose to nearby OARs. IMPT introduces an additional, inner, numerical optimization step in which the dosimetrist’s current set of constraints are used to determine the weighting of beam spots. Very fast dose calculations are needed to enable the dosimetrist to perform many iterations of the outer optimization in a commercially reasonable time. Methods: We have developed a GPU-based convolution-type dose computation algorithm that more accurately handles heterogeneities than earlier algorithms by redistributing energy from dose computed in a water volume. The depth dependence of the beam size is handled by pre-processing Bragg curves using a weighted superposition of Gaussian bases. Additionally, scattering, the orientation of treatment ports, and the non-parallel propagation of beams are handled by large, but sparse, energy-redistribution matrices that implement affine transforms. Results: We tested our algorithm using a brain tumor dataset with 1 mm voxels and a single treatment port from the patient’s anterior through the sinuses. The resulting dose volume is 100 × 100 × 230 mm with 66,200 beam spots on a 3 × 3 × 2 mm grid. The dose computation takes <1 msec on a GeForce GTX Titan GPU with the Gamma passing rate for 2mm/2% criterion of 99.1% compared to dose calculated by an alternative dose algorithm based on pencil beams. We will present comparisons to Monte Carlo dose calculations. Conclusion: Our high-speed dose computation method enables the IMPT spot weights to be optimized in <1 second, resulting in a nearly instantaneous response to user changes to dose constraints. This permits the creation of higher quality plans by allowing the dosimetrist to evaluate more alternatives in a short period of time.
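
    One of the pre-processing steps described above, approximating a Bragg depth-dose curve by a weighted superposition of Gaussian bases, can be sketched as a non-negative least-squares fit; the basis spacing and width below are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy.optimize import nnls

        def fit_bragg_with_gaussians(depths, dose, centers, sigma):
            """Fit dose(depth) as a non-negative combination of Gaussian bases so the
            depth dependence of the beam can later be handled basis-by-basis."""
            A = np.exp(-0.5 * ((depths[:, None] - centers[None, :]) / sigma) ** 2)
            weights, _ = nnls(A, dose)
            return weights, A @ weights     # basis weights and the reconstructed curve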

  7. Fast and powerful heritability inference for family-based neuroimaging studies

    PubMed Central

    Ganjgahi, Habib; Winkler, Anderson M.; Glahn, David C.; Blangero, John; Kochunov, Peter; Nichols, Thomas E.

    2015-01-01

    Heritability estimation has become an important tool for imaging genetics studies. The large number of voxel- and vertex-wise measurements in imaging genetics studies presents a challenge both in terms of computational intensity and the need to account for elevated false positive risk because of the multiple testing problem. There is a gap in existing tools, as standard neuroimaging software cannot estimate heritability, and yet standard quantitative genetics tools cannot provide essential neuroimaging inferences, like family-wise error corrected voxel-wise or cluster-wise P-values. Moreover, available heritability tools rely on P-values that can be inaccurate with usual parametric inference methods. In this work we develop fast estimation and inference procedures for voxel-wise heritability, drawing on recent methodological results that simplify heritability likelihood computations (Blangero et al., 2013). We review the family of score and Wald tests and propose novel inference methods based on explained sum of squares of an auxiliary linear model. To address problems with inaccuracies with the standard results used to find P-values, we propose four different permutation schemes to allow semi-parametric inference (parametric likelihood-based estimation, non-parametric sampling distribution). In total, we evaluate 5 different significance tests for heritability, with either asymptotic parametric or permutation-based P-value computations. We identify a number of tests that are both computationally efficient and powerful, making them ideal candidates for heritability studies in the massive data setting. We illustrate our method on fractional anisotropy measures in 859 subjects from the Genetics of Brain Structure study. PMID:25812717
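
    A minimal sketch of one exchangeable permutation scheme (only one of the several schemes evaluated in the paper): recompute a user-supplied test statistic under random permutations of the trait and report the add-one-corrected tail fraction. The statistic callable and argument names are illustrative.

        import numpy as np

        def permutation_pvalue(statistic, trait, covariates, n_perm=5000, seed=0):
            """Permutation-based P-value for a heritability-type test statistic."""
            rng = np.random.default_rng(seed)
            observed = statistic(trait, covariates)
            exceed = sum(statistic(rng.permutation(trait), covariates) >= observed
                         for _ in range(n_perm))
            return (exceed + 1) / (n_perm + 1)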

  8. Fast simulation of x-ray projections of spline-based surfaces using an append buffer

    NASA Astrophysics Data System (ADS)

    Maier, Andreas; Hofmann, Hannes G.; Schwemmer, Chris; Hornegger, Joachim; Keil, Andreas; Fahrig, Rebecca

    2012-10-01

    Many scientists in the field of x-ray imaging rely on the simulation of x-ray images. As the phantom models become more and more realistic, their projection requires high computational effort. Since x-ray images are based on transmission, many standard graphics acceleration algorithms cannot be applied to this task. However, if adapted properly, the simulation speed can be increased dramatically using state-of-the-art graphics hardware. A custom graphics pipeline that simulates transmission projections for tomographic reconstruction was implemented based on moving spline surface models. All steps from tessellation of the splines, projection onto the detector and drawing are implemented in OpenCL. We introduced a special append buffer for increased performance in order to store the intersections with the scene for every ray. Intersections are then sorted and resolved to materials. Lastly, an absorption model is evaluated to yield an absorption value for each projection pixel. Projection of a moving spline structure is fast and accurate. Projections of size 640 × 480 can be generated within 254 ms. Reconstructions using the projections show errors below 1 HU with a sharp reconstruction kernel. Traditional GPU-based acceleration schemes are not suitable for our reconstruction task. Even in the absence of noise, they result in errors up to 9 HU on average, although projection images appear to be correct under visual examination. Projections generated with our new method are suitable for the validation of novel CT reconstruction algorithms. For complex simulations, such as the evaluation of motion-compensated reconstruction algorithms, this kind of x-ray simulation will reduce the computation time dramatically.
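
    After the per-ray intersections from the append buffer are sorted and resolved to materials, the absorption step reduces to a Beer-Lambert line integral over the material segments; a small CPU-side sketch follows (illustrative attenuation values, not the paper's absorption model).

        import numpy as np

        def ray_transmission(intersections, mu, i0=1.0):
            """intersections: (depth, material) pairs sorted along the ray, where each
            entry marks the point at which the ray enters `material` (None = exits into
            air); mu: attenuation coefficient per material (1/mm)."""
            line_integral = 0.0
            for (t0, mat), (t1, _) in zip(intersections[:-1], intersections[1:]):
                if mat is not None:
                    line_integral += mu[mat] * (t1 - t0)
            return i0 * np.exp(-line_integral)

        # a ray crossing 30 mm of tissue that contains a 10 mm bone segment
        mu = {"tissue": 0.02, "bone": 0.05}
        hits = [(0.0, "tissue"), (10.0, "bone"), (20.0, "tissue"), (30.0, None)]
        print(ray_transmission(hits, mu))   # transmitted fraction of the incident intensity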

  9. Triple patterning lithography layout decomposition using end-cutting

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Roy, Subhendu; Gao, Jhih-Rong; Pan, David Z.

    2015-01-01

    Triple patterning lithography (TPL) is one of the most promising techniques in the 14-nm logic node and beyond. Conventional LELELE type TPL technology suffers from native conflict and overlapping problems. Recently, as an alternative process, TPL with end-cutting (LELE-EC) was proposed to overcome the limitations of LELELE manufacturing. In the LELE-EC process, the first two masks are LELE type double patterning, while the third mask is used to generate the end-cuts. Although the layout decomposition problem for LELELE has been well studied in the literature, only a few attempts have been made to address the LELE-EC layout decomposition problem. We propose a comprehensive study for LELE-EC layout decomposition. Layout graph and end-cut graph are constructed to extract all the geometrical relationships of both input layout and end-cut candidates. Based on these graphs, integer linear programming is formulated to minimize the conflict and the stitch numbers. The experimental results demonstrate the effectiveness of the proposed algorithms.
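
    A stripped-down version of such an ILP, limited to the two LELE masks and omitting the end-cut graph, can be written with a generic solver; the PuLP usage, weights, and edge representation below are illustrative assumptions rather than the paper's formulation.

        import pulp

        def decompose(features, conflict_edges, stitch_edges, alpha=100, beta=1):
            """Binary mask assignment per feature with linearized conflict/stitch counts."""
            prob = pulp.LpProblem("layout_decomposition", pulp.LpMinimize)
            x = {f: pulp.LpVariable(f"x_{f}", cat="Binary") for f in features}
            c = {e: pulp.LpVariable(f"c_{e[0]}_{e[1]}", cat="Binary") for e in conflict_edges}
            s = {e: pulp.LpVariable(f"s_{e[0]}_{e[1]}", cat="Binary") for e in stitch_edges}
            for (i, j) in conflict_edges:            # conflict when i and j share a mask
                prob += c[(i, j)] >= x[i] + x[j] - 1
                prob += c[(i, j)] >= 1 - x[i] - x[j]
            for (i, j) in stitch_edges:              # stitch when i and j get different masks
                prob += s[(i, j)] >= x[i] - x[j]
                prob += s[(i, j)] >= x[j] - x[i]
            prob += alpha * pulp.lpSum(c.values()) + beta * pulp.lpSum(s.values())
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return {f: int(pulp.value(x[f])) for f in features}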

  10. High-fidelity, broadband stimulated-Brillouin-scattering-based slow light using fast noise modulation.

    PubMed

    Zhu, Yunhui; Lee, Myungjun; Neifeld, Mark A; Gauthier, Daniel J

    2011-01-17

    We demonstrate a 5-GHz-broadband tunable slow-light device based on stimulated Brillouin scattering in a standard highly-nonlinear optical fiber pumped by a noise-current-modulated laser beam. The noise-modulation waveform uses an optimized pseudo-random distribution of the laser drive voltage to obtain an optimal flat-topped gain profile, which minimizes the pulse distortion and maximizes pulse delay for a given pump power. In comparison with a previous slow-modulation method, eye-diagram and signal-to-noise ratio (SNR) analysis show that this broadband slow-light technique significantly increases the fidelity of a delayed data sequence, while maintaining the delay performance. A fractional delay of 0.81 with an SNR of 5.2 is achieved at a pump power of 350 mW using a 2-km-long highly nonlinear fiber with the fast noise-modulation method, demonstrating a 50% increase in eye-opening and a 36% increase in SNR in this comparison.

  11. Support vector machine based classification of fast Fourier transform spectroscopy of proteins

    NASA Astrophysics Data System (ADS)

    Lazarevic, Aleksandar; Pokrajac, Dragoljub; Marcano, Aristides; Melikechi, Noureddine

    2009-02-01

    Fast Fourier transform spectroscopy has proved to be a powerful method for studying the secondary structure of proteins, since peak positions and their relative amplitudes are affected by the number of hydrogen bridges that sustain this secondary structure. However, to the best of our knowledge, the method has not yet been used for identification of proteins within a complex matrix like a blood sample. The principal reason is the apparent similarity of protein infrared spectra, with the actual differences usually masked by the solvent contribution and other interactions. In this paper, we propose a novel machine learning based method that uses protein spectra for classification and identification of such proteins within a given sample. The proposed method uses principal component analysis (PCA) to identify the most important linear combinations of original spectral components and then employs a support vector machine (SVM) classification model applied to these identified combinations to categorize proteins into one of the given groups. Our experiments have been performed on a set of four different proteins, namely: Bovine Serum Albumin, Leptin, Insulin-like Growth Factor 2 and Osteopontin. Our proposed method of applying principal component analysis along with support vector machines exhibits excellent classification accuracy when identifying proteins using their infrared spectra.
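
    The PCA-plus-SVM pipeline itself is standard and can be sketched directly with scikit-learn; the component count, kernel, and cross-validation settings below are illustrative, not the values used in the paper.

        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def protein_classifier(n_components=10):
            """Scale the spectra, project onto the leading principal components,
            and classify the projections with an RBF-kernel SVM."""
            return make_pipeline(StandardScaler(),
                                 PCA(n_components=n_components),
                                 SVC(kernel="rbf", C=1.0))

        # spectra: (n_samples, n_points) infrared spectra; labels: protein identity
        # scores = sklearn.model_selection.cross_val_score(protein_classifier(), spectra, labels, cv=5)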

  12. A fast SPAD-based small animal imager for early-photon diffuse optical tomography.

    PubMed

    Mu, Ying; Niedre, Mark

    2014-01-01

    Photon scatter is the dominant light transport process in biological tissue and is well understood to degrade imaging performance in near-infrared diffuse optical tomography. Measurement of photons arriving at early times following a short laser pulse is considered to be an effective method to improve this limitation, i.e. by systematically selecting photons that have experienced fewer scattering events. Previously, we tested the performance of a single-photon avalanche diode (SPAD) in the measurement of early transmitted photons through diffusive media and showed that it outperformed photomultiplier tube (PMT) systems in similar configurations, principally due to its faster temporal response. In this paper, we extended this work and developed a fast SPAD-based time-resolved diffuse optical tomography system. As a first validation of the instrument, we scanned an optical phantom with multiple absorbing inclusions and measured full time-resolved data at 3240 scan points per axial slice. We performed image reconstruction with very early-arriving photon data and showed significant improvements compared to time-integrated data. Extension of this work to mice in vivo and measurement of time-resolved fluorescence data is the subject of ongoing research.

  13. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field (HF-SCF) calculations on H2, H2O, and CO. Our calculations show that an accuracy of 10^-4 to 10^-7 Eh can be reached in HF-SCF calculations on general molecules.

  14. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most of the existing methods, like the Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BST_cA and BST_cD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. The studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt changes more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for information inspection on all kinds of bioelectric time series signals.
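
    A loose sketch of the ingredients (a multilevel Haar transform plus a KS-type scan on the shorter approximation sequence) is given below; the paper's actual contribution, organizing the coefficients into the BST_cA/BST_cD trees and searching them, is not reproduced here, and pywt/scipy are assumed stand-ins.

        import numpy as np
        import pywt
        from scipy.stats import ks_2samp

        def coarse_change_point(signal, level=4):
            """Locate the most likely change point on the level-`level` Haar
            approximation sequence and map it back to the original sample index."""
            cA = pywt.wavedec(np.asarray(signal, dtype=float), "haar", level=level)[0]
            best_k, best_stat = None, -1.0
            for k in range(2, len(cA) - 2):
                stat, _ = ks_2samp(cA[:k], cA[k:])
                if stat > best_stat:
                    best_k, best_stat = k, stat
            return best_k * 2 ** level, best_stat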

  15. EEG-based classification of fast and slow hand movements using Wavelet-CSP algorithm.

    PubMed

    Robinson, Neethu; Vinod, A P; Ang, Kai Keng; Tee, Keng Peng; Guan, Cuntai T

    2013-08-01

    A brain-computer interface (BCI) acquires brain signals, extracts informative features, and translates these features into commands to control an external device. This paper investigates the application of a noninvasive electroencephalography (EEG)-based BCI to identify brain signal features related to actual hand movement speed. This provides more refined control for a BCI system in terms of movement parameters. An experiment was performed to collect EEG data from subjects while they performed right-hand movement at two different speeds, namely fast and slow, in four different directions. The informative features were extracted from the data using the Wavelet-Common Spatial Pattern (W-CSP) algorithm, which provides high temporal-spatial-spectral resolution. The applicability of these features to classify the two speeds and to reconstruct the speed profile was studied. The results for classifying speed across seven subjects yielded a mean accuracy of 83.71% using a Fisher Linear Discriminant (FLD) classifier. The speed components were reconstructed using multiple linear regression and a significant correlation of 0.52 (Pearson's linear correlation coefficient) was obtained between recorded and reconstructed velocities on average. The spatial patterns of the W-CSP features obtained showed activations in parietal and motor areas of the brain. The results achieved promise to provide more refined control in BCI by including control of movement speed.

  16. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-03-09

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these operations are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern. Then, it traces the next contour pixel using the previous pixel's type. Therefore, it can classify the type of contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the contour-pixel data using the representative points and inner-outer corner points, and it can accurately restore the contour image from these data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance than the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corners, which cannot be restored using conventional algorithms.

  17. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    PubMed Central

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most of the existing methods, like the Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BST_cA and BST_cD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. The studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt changes more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for information inspection on all kinds of bioelectric time series signals. PMID:27413364

  18. Error-Based Observer of a Charge Couple Device Tracking Loop for Fast Steering Mirror

    PubMed Central

    Tang, Tao; Deng, Chao; Yang, Tao; Zhong, Daijun; Ren, Ge; Huang, Yongmei; Fu, Chengyu

    2017-01-01

    The charge-coupled device (CCD) tracking loop of a fast steering mirror (FSM) is usually used to stabilize the line of sight (LOS). A high closed-loop bandwidth facilitates good performance. However, the low sample rate and time delay of the CCD greatly limit the control bandwidth. This paper proposes an error-based observer (EBO) to improve the low-frequency performance of the CCD tracking system. The basic idea is to combine the LOS error from the CCD with the controller output to produce a high-gain observer, which is fed forward into the original closed-loop control system. The proposed EBO can improve the system in both target tracking and disturbance suppression, because the LOS error from the CCD senses both signals. From a practical engineering view, the closed-loop stability and robustness of the EBO system are investigated in terms of the gain margin and phase margin of the open-loop transfer function. Two simulations of CCD experiments are provided to verify the benefits of the proposed algorithm. PMID:28264504

  19. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    NASA Astrophysics Data System (ADS)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for measurement of free energies and free energy differences, as well as data/image export.
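
    The client-side computation is, in the simplest (non-well-tempered, one collective variable) case, just a sum of the deposited Gaussian hills; a sketch follows, where centers, sigmas and heights stand for the columns read from a Plumed HILLS file.

        import numpy as np

        def bias_from_hills(grid, centers, sigmas, heights):
            """Evaluate the metadynamics bias on `grid` and return the free energy
            estimate as its negative (simple non-well-tempered case)."""
            diff = grid[:, None] - centers[None, :]
            bias = np.sum(heights * np.exp(-0.5 * (diff / sigmas) ** 2), axis=1)
            return bias, -bias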

  20. A portable intra-oral scanner based on sinusoidal pattern of fast phase-shifting

    NASA Astrophysics Data System (ADS)

    Jan, Chia-Ming; Lin, Ying-Chieh

    2016-03-01

    This paper presents our current research on an intra-oral scanner developed at MIRDC. Using a sinusoidal pattern with a fast phase-shifting technique for 3D digitization of the human dental surface profile, the pseudo-phase-shifting digital projection achieves full-field scanning instead of the common laser line-scanning technique. Based on the traditional Moiré method, we project fringes and retrieve the phase for phase unwrapping. The phase difference between the reference plane and the object is calculated from the captured fringe images, and the surface profile of the object is reconstructed directly from this phase-difference information. With our algorithm for spatial mapping between the projection and capturing orientations of the intra-oral scanning configuration, the system achieves the required accuracy of +/-10 μm for intra-oral scanning on the basis of an active triangulation method. The intended application is the scanning of object surface profiles approximately 10 × 10 × 10 mm³ in size.
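
    For sinusoidal fringe projection, a common way to recover the phase is the four-step phase-shifting relation; the paper does not state its exact shift scheme, so the sketch below is a generic illustration rather than the scanner's own algorithm.

        import numpy as np

        def wrapped_phase(i1, i2, i3, i4):
            """Four fringe images captured with phase shifts of 0, pi/2, pi and 3*pi/2."""
            return np.arctan2(i4 - i2, i1 - i3)

        def unwrapped_phase(i1, i2, i3, i4):
            # Simple row-wise 1-D unwrapping; a full 2-D (e.g. quality-guided) unwrapper
            # would normally be used for dental surfaces. The phase difference between
            # the object and a reference plane then maps to height via triangulation.
            return np.unwrap(wrapped_phase(i1, i2, i3, i4), axis=1)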

  1. Fast terahertz optoelectronic amplitude modulator based on plasmonic metamaterial antenna arrays and graphene

    NASA Astrophysics Data System (ADS)

    Jessop, David S.; Sol, Christian W. O.; Xiao, Long; Kindness, Stephen J.; Braeuninger-Weimer, Philipp; Lin, Hungyen; Griffiths, Jonathan P.; Ren, Yuan; Kamboj, Varun S.; Hofmann, Stephan; Zeitler, J. Axel; Beere, Harvey E.; Ritchie, David A.; Degl'Innocenti, Riccardo

    2016-02-01

    The growing interest in terahertz (THz) technologies in recent years has seen a wide range of demonstrated applications, spanning from security screening, non-destructive testing, and gas sensing to biomedical imaging and communication. Communication with THz radiation offers the advantage of much higher bandwidths than currently available, in an unallocated spectrum. For this to be realized, optoelectronic components capable of manipulating THz radiation at high speeds and high signal-to-noise ratios must be developed. In this work we demonstrate a room temperature frequency dependent optoelectronic amplitude modulator working at around 2 THz, which incorporates graphene as the tuning medium. The architecture of the modulator is an array of plasmonic dipole antennas surrounded by graphene. By electrostatically doping the graphene via a back gate electrode, the reflection characteristics of the modulator are modified. The modulator is characterized electrically, to determine the graphene conductivity, and optically, by THz time-domain spectroscopy and a single-mode 2 THz quantum cascade laser, to determine the optical modulation depth and cut-off frequency. A maximum optical modulation depth of ~30% is estimated and is found to be most (least) sensitive when the electrical modulation is centered at the point of maximum (minimum) differential resistivity of the graphene. A 3 dB cut-off frequency > 5 MHz, limited only by the area of graphene on the device, is reported. The results agree well with theoretical calculations and numerical simulations, and demonstrate the first steps towards ultra-fast, graphene-based THz optoelectronic devices.

  2. A Low-Cost and Fast Real-Time PCR System Based on Capillary Convection.

    PubMed

    Qiu, Xianbo; Ge, Shengxiang; Gao, Pengfei; Li, Ke; Yang, Yongliang; Zhang, Shiyin; Ye, Xiangzhong; Xia, Ningshao; Qian, Shizhi

    2017-02-01

    A low-cost, fast real-time PCR system operating in a pseudo-isothermal manner with disposable capillary tubes, based on thermal convection, is developed and tested for point-of-care diagnostics. Once a stable temperature gradient along the capillary tube has been established, a continuous circulatory flow or thermal convection inside the capillary tube repeatedly transports the PCR reagents through temperature zones associated with the DNA denaturing, annealing, and extension stages of the reaction. To establish a stable temperature gradient along the capillary tube, a dual-temperature heating strategy with top and bottom heaters is adopted. A thermal waveguide is used for precise maintenance of the temperature of the top heater. An optimized optical network is developed for monitoring up to eight amplification units for real-time fluorescence detection. The system performance was demonstrated with repeatable detection of influenza A (H1N1) virus nucleic acid targets with a limit of detection of 1.0 TCID50/mL within 30 min.

  3. Fast-dissolving tablets of glyburide based on ternary solid dispersions with PEG 6000 and surfactants.

    PubMed

    Cirri, Marzia; Maestrelli, Francesca; Corti, Giovanna; Mura, Paola; Valleri, Maurizio

    2007-04-01

    Marketed glyburide tablets present unsatisfactory dissolution profiles that give rise to variable bioavailability. With the purpose of developing a fast-dissolving tablet formulation able to assure complete drug dissolution, we investigated the effect of adding different types (anionic and nonionic) and amounts of hydrophilic surfactants to a reference tablet formulation, as well as the use of a new technique based on ternary solid dispersions of the drug with a hydrophilic carrier (polyethylene glycol [PEG] 6000) and a surfactant. Tablets were prepared by direct compression or prior wet granulation of suitable formulations containing the drug with each surfactant or drug:PEG:surfactant ternary dispersions at different PEG:surfactant w/w ratios. The presence of surfactants significantly increased (p<0.01) the drug dissolution rate, but complete drug dissolution was never achieved. In contrast, in all cases tablets containing ternary solid dispersions achieved 100% dissolved drug within 60 min. The best product was the 10:80:10 w/w ternary dispersion with PEG 6000 and sodium lauryl sulphate, showing a dissolution efficiency 5.5-fold greater than the reference tablet formulation and 100% drug dissolution after only 20 min.

  4. A Fast Method to Predict Distributions of Binary Black Hole Masses Based on Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Yun, Yuqi; Zevin, Michael; Sampson, Laura; Kalogera, Vassiliki

    2017-01-01

    With more observations from LIGO in the upcoming years, we will be able to construct an observed mass distribution of black holes to compare with binary evolution simulations. This will allow us to investigate the physics of binary evolution, such as the effects of common envelope efficiency and wind strength, or the properties of the population, such as the initial mass function. However, binary evolution codes become computationally expensive when running large populations of binaries over a multi-dimensional grid of input parameters, and may simulate accurately only for a limited combination of input parameter values. Therefore we developed a fast machine-learning method that utilizes a Gaussian Mixture Model (GMM) and Gaussian Process (GP) regression, which together can predict distributions over the entire parameter space based on a limited number of simulated models. Furthermore, Gaussian Process regression naturally provides interpolation errors in addition to interpolation means, which could provide a means of targeting the most uncertain regions of parameter space for running further simulations. We also present a case study applying this new method to predicting chirp mass distributions for binary black hole systems (BBHs) in Milky Way-like galaxies of different metallicities.
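
    The two-step idea can be sketched with scikit-learn: summarize each simulated model's chirp-mass samples with a GMM, then regress each summary parameter over the model inputs with a GP, which also returns an interpolation uncertainty. Summarizing by sorted component means only, and the RBF kernel, are illustrative simplifications rather than the paper's choices.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF
        from sklearn.mixture import GaussianMixture

        def build_interpolator(params, chirp_mass_samples, n_components=3):
            """params: (n_models, n_inputs) simulation inputs; chirp_mass_samples:
            one 1-D array of chirp masses per simulated model."""
            summaries = []
            for samples in chirp_mass_samples:
                gmm = GaussianMixture(n_components=n_components).fit(
                    np.asarray(samples).reshape(-1, 1))
                summaries.append(np.sort(gmm.means_.ravel()))
            summaries = np.array(summaries)
            gps = [GaussianProcessRegressor(kernel=RBF()).fit(params, summaries[:, k])
                   for k in range(n_components)]

            def predict(new_params):
                out = [gp.predict(new_params, return_std=True) for gp in gps]
                means = np.column_stack([m for m, _ in out])
                stds = np.column_stack([s for _, s in out])
                return means, stds      # std highlights the most uncertain regions
            return predict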

  5. Spatiotemporal focusing-based widefield multiphoton microscopy for fast optical sectioning of thick tissues

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Chung; Chang, Chia-Yuan; Yen, Wei-Chung; Chen, Shean-Jen

    2012-10-01

    Conventional multiphoton microscopy employs beam scanning; however, in this study a microscope based on spatiotemporal focusing offering widefield multiphoton excitation has been developed to provide fast optical sectioning images. The microscope integrates a 10 kHz repetition rate ultrafast amplifier featuring strong instantaneous peak power (maximum 400 μJ/pulse at 90 fs pulse width) with a TE-cooled, ultra-sensitive, photon-detecting, electron multiplying charge-coupled device camera. This configuration can produce multiphoton excited images with an excitation area larger than 200 × 100 μm² at a frame rate greater than 100 Hz. Brownian motions of fluorescent microbeads as small as 0.5 μm have been observed instantaneously with a lateral spatial resolution of less than 0.5 μm and an axial resolution of approximately 3.5 μm. Moreover, we combine the widefield multiphoton microscopy with a structured-illumination technique named HiLo to reject background scattering noise and obtain better image quality for bioimaging.

  6. Nanometal-decorated exfoliated graphite nanoplatelet based glucose biosensors with high sensitivity and fast response.

    PubMed

    Lu, Jue; Do, Inhwan; Drzal, Lawrence T; Worden, Robert M; Lee, Ilsoon

    2008-09-23

    We report the novel fabrication of a highly sensitive, selective, fast responding, and affordable amperometric glucose biosensor using exfoliated graphite nanoplatelets (xGnPs) decorated with Pt and Pd nanoparticles. Nafion was used to solubilize metal-decorated graphite nanoplatelets, and a simple cast method with high content organic solvent (85 wt %) was used to prepare the biosensors. The addition of precious metal nanoparticles such as platinum (Pt) and palladium (Pd) to xGnP increased the electroactive area of the electrode and substantially decreased the overpotential in the detection of hydrogen peroxide. The Pt-xGnP glucose biosensor had a sensitivity of 61.5 ± 0.6 μA/(mM·cm²) and gave a linear response up to 20 mM. The response time and detection limit (S/N = 3) were determined to be 2 s and 1 μM, respectively. Therefore, this novel glucose biosensor based on the Pt nanoparticle coated xGnP is among the best reported to date in both sensing performance and production cost. In addition, the effects of metal nanoparticle loading and the particle size on the biosensor performance were systematically investigated.

  7. Fast-response humidity-sensing films based on methylene blue aggregates formed on nanoporous semiconductor films

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryota; Katoh, Ryuzi

    2016-05-01

    We prepared fast-response colorimetric humidity-sensing (vapochromic) films based on methylene blue adsorption onto nanoporous semiconductor (TiO2, Al2O3) films. Color changes caused by changes of humidity could be easily identified visually. A characteristic feature of the vapochromic films was their fast response to changes of humidity. We found that the response began to occur within 10 ms. The response was rapid because all the methylene blue molecules attached to the nanoporous semiconductor surface were directly exposed to the environment. We also deduced that the color changes were caused by structural changes of the methylene blue aggregates on the surface.

  8. How Fast Is Fast?

    ERIC Educational Resources Information Center

    Korn, Abe

    1994-01-01

    Presents an activity that enables students to answer for themselves the question of how fast a body must travel before the nonrelativistic expression must be replaced with the correct relativistic expression by deciding on the accuracy required in describing the kinetic energy of a body. (ZWH)

  9. Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy

    PubMed Central

    Chang, Chia-Yuan; Hu, Yvonne Yuling; Lin, Chun-Yu; Lin, Cheng-Han; Chang, Hsin-Yu; Tsai, Sheng-Feng; Lin, Tzu-Wei; Chen, Shean-Jen

    2016-01-01

    Temporal focusing multiphoton microscopy (TFMPM) has the advantage of area excitation in an axial confinement of only a few microns; hence, it can offer fast three-dimensional (3D) multiphoton imaging. Herein, fast volumetric imaging via a developed digital micromirror device (DMD)-based TFMPM has been realized through the synchronization of an electron multiplying charge-coupled device (EMCCD) with a dynamic piezoelectric stage for axial scanning. The volumetric imaging rate can achieve 30 volumes per second according to the EMCCD frame rate of more than 400 frames per second, which allows for the 3D Brownian motion of one-micron fluorescent beads to be spatially observed. Furthermore, it is demonstrated that the dynamic HiLo structural multiphoton microscope can reject background noise by way of the fast volumetric imaging with high-speed DMD patterned illumination. PMID:27231617

  10. Fast valve based on double-layer eddy-current repulsion for disruption mitigation in Experimental Advanced Superconducting Tokamak.

    PubMed

    Zhuang, H D; Zhang, X D

    2015-05-01

    A fast valve based on the double-layer eddy-current repulsion mechanism has been developed on the Experimental Advanced Superconducting Tokamak (EAST). In addition to a double-layer eddy-current coil, a preload system was added to improve the security of the valve, whereby the valve opens more quickly and the open-valve time becomes shorter, making it much safer than before. In this contribution, testing platforms, open-valve characteristics, and throughput of the fast valve are discussed. Tests revealed that by choosing appropriate parameters the valve opened within 0.15 ms, and open-valve times were no longer than 2 ms. By adjusting the working parameter values, the maximum number of particles injected during this open-valve time was estimated at 7 × 10²². The fast valve will become a useful tool to further explore disruption mitigation experiments on EAST in 2015.

  11. Fast GC-FID based metabolic fingerprinting of Japanese green tea leaf for its quality ranking prediction.

    PubMed

    Jumtee, Kanokwan; Bamba, Takeshi; Fukusaki, Eiichiro

    2009-07-01

    There is a need for a reliable, rapid, and cost-effective analysis technique to evaluate food and crop compositions, which is important for improving their quality and quantity. Prior to fast GC-FID development, metabolic fingerprints and predictive models obtained from a conventional GC-FID were evaluated by comparison to those derived from GC-TOF-MS. A similar chromatographic pattern, with higher sensitivity for polyphenol compounds including epicatechin gallate (ECg) and epigallocatechin gallate (EGCg), was achieved using the conventional GC-FID. Fast gas chromatography coupled with flame ionization detection (GC-FID) was carried out with a 10 m × 0.18 mm i.d. × 0.20 μm df capillary column. The analysis time per sample was reduced to less than 14 min, compared to a conventional GC-FID (38 min) and GC-TOF-MS (28 min). The fast GC-FID also offered reliable retention time reproducibility without significant loss of peak resolution. Projection to latent structures by means of partial least squares (PLS) with orthogonal signal correction filtering (OSC) was applied to the fast GC-FID data. The predictive model showed good fit and predictability, with an RMSEP of 3.464, suggesting that fast GC-FID based metabolic fingerprinting could be an alternative method for the prediction of Japanese green tea quality.

  12. Layout optimization of DRAM cells using rigorous simulation model for NTD

    NASA Astrophysics Data System (ADS)

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and the edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography, such as source mask optimization (SMO) to find the optimal off-axis illumination and optical proximity correction (OPC) combined with model-based SRAF placement, is applied to print patterns on target. For 20 nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch and the highest susceptibility to defocus. The above challenges make layout correction a complex optimization task, demanding a layout optimization that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD at bottom characterization by

  13. Evaluation and application of a fast module in a PLC based interlock and control system

    NASA Astrophysics Data System (ADS)

    Zaera-Sanz, M.

    2009-08-01

    The LHC Beam Interlock system requires a controller performing a simple matrix function to collect the different beam dump requests. To satisfy the expected safety level of the interlock, the system should be robust and reliable. The PLC is a promising candidate to fulfil both aspects but is too slow to meet the expected response time, which is of the order of microseconds. Siemens has introduced a so-called fast module (FM352-5 Boolean Processor). It provides independent and extremely fast control of a process within a larger control system using an onboard processor, a Field Programmable Gate Array (FPGA), to execute code in parallel, which results in extremely fast scan times. It is therefore interesting to investigate its features and to evaluate it as a possible candidate for the beam interlock system. This paper presents the results of this study and could also be useful for other applications requiring fast processing using a PLC.

  14. Characterization and decomposition of self-aligned quadruple patterning friendly layout

    NASA Astrophysics Data System (ADS)

    Zhang, Hongbo; Du, Yuelin; Wong, Martin D. F.; Topaloglu, Rasit O.

    2012-03-01

    Self-aligned quadruple patterning (SAQP) lithography is one of the major techniques for meeting process requirements beyond the 16 nm/14 nm technology node. In this paper, based on existing knowledge of current 193 nm lithography and the SAQP process flow, we present an early study on the definition of SAQP-friendly layouts. By exploring the feasible feature regions and the possible combinations of adjacent features, we define several simple but important geometric rules that help characterize SAQP-friendliness. We then introduce a conflict-graph algorithm to generate the feature region assignment for SAQP decomposition. Our experimental results validate our SAQP-friendly layout definition, and basic circuit building blocks in the lower metal layers are analyzed.

  15. DESIGN AND LAYOUT CONCEPTS FOR COMPACT, FACTORY-PRODUCED, TRANSPORTABLE, GENERATION IV REACTOR SYSTEMS

    SciTech Connect

    Mynatt Fred R.; Townsend, L.W.; Williamson, Martin; Williams, Wesley; Miller, Laurence W.; Khan, M. Khurram; McConn, Joe; Kadak, Andrew C.; Berte, Marc V.; Sawhney, Rapinder; Fife, Jacob; Sedler, Todd L.; Conway, Larry E.; Felde, Dave K.

    2003-11-12

    The purpose of this research project is to develop compact (100 to 400 MWe) Generation IV nuclear power plant design and layout concepts that maximize the benefits of factory-based fabrication and optimal packaging, transportation and siting. The reactor concepts selected were compact designs under development in the 2000 to 2001 period. This interdisciplinary project comprised three university-led nuclear engineering teams identified by reactor coolant type (water, gas, and liquid metal) and a fourth Industrial Engineering team. The reactors included a Modular Pebble Bed helium-cooled concept being developed at MIT, the IRIS water-cooled concept being developed by a team led by Westinghouse Electric Company, and a Lead-Bismuth-cooled concept developed by UT. In addition to the design and layout concepts, this report includes a section on heat exchanger manufacturing simulations and a section on the construction and cost impacts of the proposed modular designs.

  16. A fast video clip retrieval algorithm based on VA-file

    NASA Astrophysics Data System (ADS)

    Liu, Fangjie; Dong, DaoGuo; Miao, Xiaoping; Xue, XiangYang

    2003-12-01

    Video clip retrieval is a significant research topic in content-based multimedia retrieval. Generally, the video clip retrieval process is carried out as follows: (1) segment a video clip into shots; (2) extract a key frame from each shot as its representative; (3) denote every key frame as a feature vector, so that a video clip can be denoted as a sequence of feature vectors; (4) retrieve matching clips by computing the similarity between the feature vector sequence of a query clip and the feature vector sequence of any clip in the database. To carry out fast video clip retrieval, an index structure is indispensable. According to our literature survey, the S2-tree [17] is the only index structure that has been applied to support video clip retrieval; it combines the characteristics of both the X-tree and the Suffix-tree and converts series-vector retrieval to string matching. However, the S2-tree structure is not applicable if the feature vector dimension exceeds 20, because the X-tree itself cannot sustain similarity queries effectively at such dimensionality. Furthermore, it cannot support flexible similarity definitions between two vector sequences. The VA-file represents vectors approximately by compressing the original data and maintains the original order when representing vectors in a sequence, which is a very valuable merit for vector sequence matching. In this paper, a new video clip similarity model as well as a video clip retrieval algorithm based on the VA-file are proposed. The experiments show that our algorithm dramatically shortens the retrieval time compared to sequential scanning without an index structure.

  17. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory, imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which can produce halo effects. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons using images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown high performance of the new method with better color restoration and preservation of image details.
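
    The illumination/reflection split described above can be sketched with OpenCV; cv2.bilateralFilter is used here as a stand-in for the cross-bilateral filter in the paper, and the structuring-element size and filter parameters are illustrative.

        import cv2
        import numpy as np

        def enhance_mic(img_bgr, close_size=15, d=9, sigma_color=50, sigma_space=50):
            """Estimate illumination from the Max Intensity Channel and divide it out."""
            img = img_bgr.astype(np.float32) / 255.0
            mic = img.max(axis=2)                                    # per-pixel channel maximum
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (close_size, close_size))
            illum = cv2.morphologyEx(mic, cv2.MORPH_CLOSE, kernel)   # gray-scale closing
            illum = cv2.bilateralFilter(illum, d, sigma_color, sigma_space)
            illum = np.clip(illum, 1e-3, 1.0)
            reflect = np.clip(img / illum[..., None], 0.0, 1.0)      # per-channel reflection
            return (reflect * 255).astype(np.uint8)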

  18. A combinatorial chemistry method for fast screening of perovskite-based NO oxidation catalyst.

    PubMed

    Yoon, Dal Young; Lim, Eunho; Kim, Young Jin; Cho, Byong K; Nam, In-Sik; Choung, Jin Woo; Yoo, Seungbeom

    2014-11-10

    A fast parallel screening method based on combinatorial chemistry (combichem) has been developed and applied in screening tests of perovskite-based oxide (PBO) catalysts for NO oxidation, in order to identify a promising PBO formulation for the oxidation of NO to NO2. This new method involves three consecutive steps: oxidation of NO to NO2 over a PBO catalyst, adsorption of NOx onto the PBO and K2O/Al2O3, and colorimetric assay of the NOx adsorbed thereon. The combichem experimental data have been used to determine the NO oxidation activity of PBO catalysts as well as three critical parameters: the adsorption efficiency of K2O/Al2O3 for NO2 (α) and NO (β), and the time-averaged fraction of NO in the NOx feed stream (ξ). The results demonstrated that the amounts of NO2 produced over PBO catalysts by the combichem method under transient conditions correlate well with those from a conventional packed-bed reactor under steady-state conditions. Among the PBO formulations examined, La0.5Ag0.5MnO3 was identified as the best chemical formulation for oxidation of NO to NO2 by the present combichem method, and this was also confirmed by conventional packed-bed reactor tests. The efficiency of the combichem method for high-throughput catalyst screening validated in this study makes it particularly suitable for saving the time and resources required in developing a new PBO catalyst formulation whose chemical composition may have an enormous number of possible variations.

  19. Fast in-situ tool inspection based on inverse fringe projection and compact sensor heads

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2016-11-01

    Inspection of machine elements is an important task in production processes in order to ensure the quality of produced parts and to gather feedback for the continuous improvement process. A new measuring system is presented, which is capable of performing the inspection of critical tool geometries, such as gearing elements, inside the forming machine. To meet the constraints on sensor head size and inspection time imposed by the limited space inside the machine and the cycle time of the process, the measuring device employs a combination of endoscopy techniques with the fringe projection principle. Compact gradient index lenses enable a compact design of the sensor head, which is connected to a CMOS camera and a flexible micro-mirror based projector via flexible fiber bundles. Using common fringe projection patterns, the system achieves measuring times of less than five seconds. To further reduce the time required for inspection, the generation of inverse fringe projection patterns has been implemented for the system. Inverse fringe projection speeds up the inspection process by employing object-adapted patterns, which enable the detection of geometry deviations in a single image. Two different approaches to generate object adapted patterns are presented. The first approach uses a reference measurement of a manufactured tool master to generate the inverse pattern. The second approach is based on a virtual master geometry in the form of a CAD file and a ray-tracing model of the measuring system. Virtual modeling of the measuring device and inspection setup allows for geometric tolerancing for free-form surfaces by the tool designer in the CAD-file. A new approach is presented, which uses virtual tolerance specifications and additional simulation steps to enable fast checking of metric tolerances. Following the description of the pattern generation process, the image processing steps required for inspection are demonstrated on captures of gearing geometries.

  20. Determinants of Fast Food Consumption among Iranian High School Students Based on Planned Behavior Theory

    PubMed Central

    Sharifirad, Gholamreza; Yarmohammadi, Parastoo; Azadbakht, Leila; Morowatisharifabad, Mohammad Ali; Hassanzadeh, Akbar

    2013-01-01

    Objective. This study was conducted to identify some factors (beliefs and norms) which are related to fast food consumption among high school students in Isfahan, Iran. We used the framework of the theory of planned behavior (TPB) to predict this behavior. Subjects & Methods. Cross-sectional data were available from high school students (n = 521) who were recruited by cluster randomized sampling. All of the students completed a questionnaire assessing variables of the standard TPB model including attitude, subjective norms, and perceived behavior control (PBC), and the additional variables past behavior and actual behavior control (ABC). Results. The TPB variables explained 25.7% of the variance in intentions, with positive attitude as the strongest (β = 0.31, P < 0.001) and subjective norms as the weakest (β = 0.29, P < 0.001) determinant. Concurrently, intentions accounted for 6% of the variance in fast food consumption. Past behavior and ABC accounted for an additional 20.4% of the variance in fast food consumption. Conclusion. Overall, the present study suggests that the TPB model is useful in predicting beliefs and norms related to fast food consumption among adolescents. Subjective norms in the TPB model and past behavior in the extended TPB model (with the additional variables past behavior and actual behavior control) were the most powerful predictors of fast food consumption. Therefore, the TPB model may be a useful framework for planning intervention programs to reduce fast food consumption by students. PMID:23936635
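
    The variance decomposition reported above can be reproduced with an ordinary hierarchical regression. The sketch below is a minimal illustration using synthetic data and hypothetical variable names (not the study's dataset); it shows how the incremental R² attributable to past behavior and actual behavior control is computed.

```python
# Hedged sketch: hierarchical OLS on synthetic data illustrating the delta-R^2
# calculation; coefficients and variables are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 521                                               # sample size from the study
attitude, sn, pbc, past, abc = rng.standard_normal((5, n))
intention = 0.31 * attitude + 0.29 * sn + 0.10 * pbc + rng.standard_normal(n)
behaviour = 0.20 * intention + 0.40 * past + 0.20 * abc + rng.standard_normal(n)

def r_squared(y, predictors):
    X = np.column_stack([np.ones(len(y))] + predictors)   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_intention_only = r_squared(behaviour, [intention])            # TPB only
r2_extended = r_squared(behaviour, [intention, past, abc])       # + past, ABC
print(f"delta R^2 from past behaviour and ABC: {r2_extended - r2_intention_only:.3f}")
```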

  1. A no-reference perceptual blurriness metric based fast super-resolution of still pictures using sparse representation

    NASA Astrophysics Data System (ADS)

    Choi, Jae-Seok; Bae, Sung-Ho; Kim, Munchurl

    2015-03-01

    In recent years, perceptually-driven super-resolution (SR) methods have been proposed to lower computational complexity. Furthermore, sparse representation based super-resolution is known to produce competitive high-resolution images at lower computational cost than other SR methods. Nevertheless, super-resolution is still difficult to implement with substantially low processing power for real-time applications. In order to speed up the processing time of SR, much effort has been made on efficient methods, which selectively apply elaborate computation algorithms to perceptually sensitive image regions based on a metric, such as just noticeable distortion (JND). Inspired by the previous works, we first propose a novel fast super-resolution method with sparse representation, which incorporates a no-reference just noticeable blur (JNB) metric. That is, the proposed fast super-resolution method efficiently generates super-resolution images by selectively applying a sparse representation method to perceptually sensitive image areas which are detected based on the JNB metric. Experimental results show that our JNB-based fast super-resolution method is about 4 times faster than a non-perceptual sparse representation based SR method for 256 × 256 test LR images. Compared to a JND-based SR method, the proposed fast JNB-based SR method is about 3 times faster, with approximately 0.1 dB higher PSNR and a slightly higher SSIM value on average. This indicates that our proposed perceptual JNB-based SR method generates high-quality SR images with much lower computational costs, opening a new possibility for real-time hardware implementations.
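
    The core idea — running the expensive sparse-coding path only where a blur metric says it matters — can be sketched as below. The block-wise sensitivity score, the threshold, and the placeholder expensive_sr routine are illustrative assumptions, not the authors' JNB metric or their dictionary-based SR implementation.

```python
# Hedged sketch of perceptually selective super-resolution: bicubic upscaling
# everywhere, with an expensive SR routine applied only to blocks whose local
# sharpness score (a stand-in for the JNB metric) exceeds a threshold.
import numpy as np
from scipy.ndimage import zoom, laplace

def block_sensitivity(block):
    # Proxy for a no-reference blur metric: mean absolute Laplacian response.
    return np.abs(laplace(block)).mean()

def selective_sr(lr_image, scale=2, block=32, threshold=2.0, expensive_sr=None):
    h, w = lr_image.shape
    out = zoom(lr_image, scale, order=3)               # cheap path: bicubic
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = lr_image[y:y + block, x:x + block]
            if expensive_sr is not None and block_sensitivity(patch) > threshold:
                out[y * scale:(y + block) * scale,
                    x * scale:(x + block) * scale] = expensive_sr(patch, scale)
    return out
```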

  2. Terahertz-optical-asymmetric-demultiplexer (TOAD)-based arithmetic units for ultra-fast optical information processing

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.

    2010-04-01

    In this paper, designs of ultra-fast all-optical terahertz-optical-asymmetric-demultiplexer (TOAD)-based devices are reported. Using TOAD switches, adder/subtracter units are demonstrated. The high speed is achieved due to the use of nonlinear optical materials and the nonbinary modified signed-digit (MSD) number representation. The proposed all-optical circuits are compared in terms of the numbers of TOAD switches, optical amplifiers, and wavelength converters.

  3. The effect of design modifications to the typographical layout of the New York State elementary science learning standards on user preference and process time

    NASA Astrophysics Data System (ADS)

    Arnold, Jeffery E.

    The purpose of this study was to determine the effect of four different design layouts of the New York State elementary science learning standards on user processing time and preference. Three newly developed layouts contained the same information as the standards core curriculum. In this study, the layout of the core guide is referred to as Book. The layouts of the new documents are referred to as Chart, Map, and Tabloid based on the format used to convey content hierarchy information. Most notably, all the new layouts feature larger page sizes, color, page tabs, and an icon based navigation system (IBNS). A convenience sample of 48 New York State educators representing three educator types (16 pre-service teachers, 16 in-service teachers, and 16 administrators) participated in the study. After completing timed tasks accurately, participants scored each layout based on preference. Educator type and layout were the independent variables, and process time and user preference were the dependent variables. A two-factor experimental design with Educator Type as the between variable and with repeated measures on Layout, the within variable, showed a significant difference in process time for Educator Type and Layout. The main effect for Educator Type (F(2, 45) = 8.03, p <.001) was significant with an observed power of .94, and an effect size of .26. The pair-wise comparisons for process time showed that pre-service teachers (p = .02) and administrators (p =.009) completed the assigned tasks more quickly when compared to in-service teachers. The main effect for Layout (F(3, 135) = 4.47, p =.01) was also significant with an observed power of .80, and an effect size of .09. Pair-wise comparisons showed that the newly developed Chart (p = .019) and Map (p = .032) layouts reduced overall process time when compared to the existing state learning standards (Book). The Layout X Educator type interaction was not significant. The same two-factor experimental design on preference

  4. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  5. Production layout improvement in emergency services: a participatory approach.

    PubMed

    Zanatta, Mateus; Amaral, Fernando Gonçalves

    2012-01-01

    A volunteer fire department is a service that responds to emergency situations in places where there are no military emergency services. These services need to respond quickly, because response time often determines the success of an operation; in addition, the work environment and setup time interfere with the prompt response to calls and the efficiency of care. The layout design is one factor that interferes with a quick setup. In this case, the arrangement of spaces can result in excessive or unnecessary movements, and the provision of equipment may hinder its selection and collection or even create movement barriers for the workers. This work created a new layout for the emergency assistance service, considering the human factors related to the work through task analysis and worker participation in the improvement alternatives. The results showed an alternative layout with corridors and minimization of unusable sites, allowing greater flexibility and new possibilities for requirements.

  6. Audio video based fast fixed-point independent vector analysis for multisource separation in a room environment

    NASA Astrophysics Data System (ADS)

    Liang, Yanfeng; Naqvi, Syed Mohsen; Chambers, Jonathon A.

    2012-12-01

    Fast fixed-point independent vector analysis (FastIVA) is an improved independent vector analysis (IVA) method, which can achieve faster and better separation performance than the original IVA. As an IVA method, it is designed to solve the permutation problem in frequency domain independent component analysis by retaining the higher order statistical dependency between frequencies during learning. However, the performance of all IVA methods is limited due to the dimensionality of the parameter space commonly encountered in practical frequency-domain source separation problems and the spherical symmetry assumed in the source model. In this article, a particular permutation problem encountered in using the FastIVA algorithm is highlighted, namely the block permutation problem. Therefore a new audio-video based fast fixed-point independent vector analysis algorithm is proposed, which uses video information to provide a smart initialization for the optimization problem. The method can not only avoid the ill convergence resulting from the block permutation problem but also improve the separation performance even in noisy and highly reverberant environments. Different multisource datasets including the real audio-video corpus AV16.3 are used to verify the proposed method. For the evaluation of the separation performance on real room recordings, a new pitch-based evaluation criterion is also proposed.

  7. A New Ticket-Based Authentication Mechanism for Fast Handover in Mesh Network

    PubMed Central

    Lai, Yan-Ming; Cheng, Pu-Jen; Lee, Cheng-Chi; Ku, Chia-Yi

    2016-01-01

    Due to the ever-growing popularity that mobile devices of various kinds have received worldwide, the demands on large-scale wireless network infrastructure development and enhancement have been rapidly swelling in recent years. A mobile device holder can get online at a wireless network access point, which covers a limited area. When the client leaves the access point, there will be a temporary disconnection until he/she enters the coverage of another access point. Even when the coverages of two neighboring access points overlap, there is still work to do to make the wireless connection continue smoothly. The action of one wireless network access point passing a client to another access point is referred to as the handover. During handover, for security concerns, the client and the new access point should perform mutual authentication before any Internet access service is practically gained/provided. If the handover protocol is inefficient, in some cases the Internet service will be interrupted. In 2013, Li et al. proposed a fast handover authentication mechanism for wireless mesh networks (WMN) based on tickets. Unfortunately, Li et al.’s work came with some weaknesses. For one thing, some sensitive information such as the time and date of expiration is sent in plaintext, which increases security risks. For another, Li et al.’s protocol includes the use of high-quality tamper-proof devices (TPDs), and this unreasonably high equipment requirement limits its applicability. In this paper, we propose a new efficient handover authentication mechanism. The new mechanism offers a higher level of security on a more scalable ground with the client’s privacy better preserved. The results of our performance analysis suggest that our new mechanism is superior to some similar mechanisms in terms of authentication delay. PMID:27171160
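
    To make the general ticket idea concrete, the sketch below shows a generic HMAC-protected ticket that a new access point can verify offline, so the client avoids a full re-authentication round trip during handover. The field names, the shared group key, and the lifetime are illustrative assumptions; this is not Li et al.'s message format nor the exact protocol proposed in the paper.

```python
# Hedged sketch of a generic ticket-based handover credential. The expiry is
# kept inside the MAC'd body, so it is integrity-protected (a real design
# would typically also encrypt the ticket contents).
import hmac, hashlib, json, time, os

GROUP_KEY = os.urandom(32)   # assumed key shared by the mesh access points

def issue_ticket(client_id, lifetime_s=3600):
    body = {"cid": client_id,
            "exp": int(time.time()) + lifetime_s,
            "nonce": os.urandom(8).hex()}
    raw = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(GROUP_KEY, raw, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_ticket(ticket):
    raw = json.dumps(ticket["body"], sort_keys=True).encode()
    expected = hmac.new(GROUP_KEY, raw, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, ticket["tag"])
            and ticket["body"]["exp"] > time.time())

ticket = issue_ticket("client-42")
assert verify_ticket(ticket)
```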

  8. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation addresses a feedback system and enables safe ablation towards anatomical structures that usually would have high risk of damage. This study is based on the combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters including discrete and specific ablation rates as the ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.
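
    The iterative control cycle described above can be sketched as a simple planning step: compare the OCT-measured surface against the target depth map and schedule corrective passes, coarse rates first, until the residual falls inside the tolerance of the finest rate. The ablation rates, tolerance, and array-based depth maps below are placeholders, not the authors' parameter sets.

```python
# Hedged sketch of one planning step of a closed-loop ablation controller.
import numpy as np

ablation_rates_um = [40.0, 10.0, 2.5]          # assumed per-pass rates, coarse -> fine

def plan_passes(measured_depth_um, target_depth_um, tolerance_um=2.5):
    remaining = target_depth_um - measured_depth_um     # depth still to remove
    passes = []
    for rate in ablation_rates_um:
        mask = remaining >= rate                         # where one more pass fits
        if mask.any():
            passes.append((rate, mask))
            remaining = remaining - rate * mask
    done = bool(np.all(remaining < tolerance_um))
    return passes, done

# One control iteration: ablate the planned paths, moisten, re-image with OCT,
# then call plan_passes() again until `done` is True.
```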

  9. A New Ticket-Based Authentication Mechanism for Fast Handover in Mesh Network.

    PubMed

    Lai, Yan-Ming; Cheng, Pu-Jen; Lee, Cheng-Chi; Ku, Chia-Yi

    2016-01-01

    Due to the ever-growing popularity mobile devices of various kinds have received worldwide, the demands on large-scale wireless network infrastructure development and enhancement have been rapidly swelling in recent years. A mobile device holder can get online at a wireless network access point, which covers a limited area. When the client leaves the access point, there will be a temporary disconnection until he/she enters the coverage of another access point. Even when the coverages of two neighboring access points overlap, there is still work to do to make the wireless connection smoothly continue. The action of one wireless network access point passing a client to another access point is referred to as the handover. During handover, for security concerns, the client and the new access point should perform mutual authentication before any Internet access service is practically gained/provided. If the handover protocol is inefficient, in some cases discontinued Internet service will happen. In 2013, Li et al. proposed a fast handover authentication mechanism for wireless mesh network (WMN) based on tickets. Unfortunately, Li et al.'s work came with some weaknesses. For one thing, some sensitive information such as the time and date of expiration is sent in plaintext, which increases security risks. For another, Li et al.'s protocol includes the use of high-quality tamper-proof devices (TPDs), and this unreasonably high equipment requirement limits its applicability. In this paper, we shall propose a new efficient handover authentication mechanism. The new mechanism offers a higher level of security on a more scalable ground with the client's privacy better preserved. The results of our performance analysis suggest that our new mechanism is superior to some similar mechanisms in terms of authentication delay.

  10. Family-Joining: A Fast Distance-Based Method for Constructing Generally Labeled Trees

    PubMed Central

    Kalaghatgi, Prabhav; Pfeifer, Nico; Lengauer, Thomas

    2016-01-01

    The widely used model for evolutionary relationships is a bifurcating tree with all taxa/observations placed at the leaves. This is not appropriate if the taxa have been densely sampled across evolutionary time and may be in a direct ancestral relationship, or if there is not enough information to fully resolve all the branching points in the evolutionary tree. In this article, we present a fast distance-based agglomeration method called family-joining (FJ) for constructing so-called generally labeled trees in which taxa may be placed at internal vertices and the tree may contain polytomies. FJ constructs such trees on the basis of pairwise distances and a distance threshold. We tested three methods for threshold selection, FJ-AIC, FJ-BIC, and FJ-CV, which minimize Akaike information criterion, Bayesian information criterion, and cross-validation error, respectively. When compared with related methods on simulated data, FJ-BIC was among the best at reconstructing the correct tree across a wide range of simulation scenarios. FJ-BIC was applied to HIV sequences sampled from individuals involved in a known transmission chain. The FJ-BIC tree was found to be compatible with almost all transmission events. On average, internal branches in the FJ-BIC tree have higher bootstrap support than branches in the leaf-labeled bifurcating tree constructed using RAxML. 36% and 25% of the internal branches in the FJ-BIC tree and RAxML tree, respectively, have bootstrap support greater than 70%. To the best of our knowledge the method presented here is the first attempt at modeling evolutionary relationships using generally labeled trees. PMID:27436007
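
    As a schematic reading of the abstract (not the published FJ algorithm), the thresholded agglomeration idea can be sketched as follows: when the closest pair is nearer than the distance threshold, one taxon becomes the parent of the other (a taxon placed at an internal vertex); otherwise a new unlabeled internal vertex is introduced, as in ordinary agglomerative tree building. The average-linkage distance update and the simple argmin pair selection are stand-in assumptions, and AIC/BIC/CV threshold selection is omitted.

```python
# Hedged sketch of distance-threshold agglomeration into a generally labeled tree.
import numpy as np

def family_join(D, names, threshold):
    nodes, edges = list(names), []
    D = np.asarray(D, dtype=float).copy()
    while len(nodes) > 1:
        n = len(nodes)
        iu = np.triu_indices(n, k=1)
        k = int(np.argmin(D[iu]))
        i, j = int(iu[0][k]), int(iu[1][k])
        if D[i, j] < threshold:              # taxon i itself becomes the parent
            parent = nodes[i]
        else:                                # otherwise add a new internal vertex
            parent = f"internal_{len(edges)}"
            edges.append((parent, nodes[i]))
        edges.append((parent, nodes[j]))
        keep = [x for x in range(n) if x not in (i, j)]
        newrow = (D[i, keep] + D[j, keep]) / 2.0       # average-linkage update
        D = np.vstack([np.hstack([[0.0], newrow]),
                       np.hstack([newrow[:, None], D[np.ix_(keep, keep)]])])
        nodes = [parent] + [nodes[x] for x in keep]
    return edges
```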

  11. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    SciTech Connect

    Acharya, Naresh; Baone, Chaitanya; Veda, Santosh; Dai, Jing; Chaudhuri, Nilanjan; Leonardi, Bruno; Sanches-Gasca, Juan; Diao, Ruisheng; Wu, Di; Huang, Zhenyu; Zhang, Yu; Jin, Shuangshuang; Zheng, Bin; Chen, Yousu

    2014-12-31

    Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time will become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, the expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed

  12. Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.

    2003-01-01

    The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analysis methods neglected the important physics of steady loading for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and

  13. Sub 10 ns fast switching and resistance control in lateral GeTe-based phase-change memory

    NASA Astrophysics Data System (ADS)

    Yin, You; Zhang, Yulong; Takehana, Yousuke; Kobayashi, Ryota; Zhang, Hui; Hosaka, Sumio

    2016-06-01

    In this study, we investigated the fast switching and resistance control in a lateral GeTe-based phase-change memory (PCM). The resistivity of GeTe as a function of annealing temperature showed that it changed by more than 6 orders of magnitude in a very narrow temperature range. X-ray diffraction patterns of GeTe films indicated that GeTe had only one crystal structure, that is, face-centered cubic. It was demonstrated that the lateral device with a top conducting layer had a good performance. The operation characteristics of the GeTe-based lateral PCM device showed that it could be operated even when sub-10-ns voltage pulses were applied, making it much faster than a Ge2Sb2Te5-based device. The device resistance was successfully controlled by applying a staircase-like pulse, which enables the device to be used for fast multilevel storage.

  14. Satellite antenna layout and optimization in electromagnetic compatibility design

    NASA Astrophysics Data System (ADS)

    Zhang, Jinshuo; Xie, Shuguo; Liu, Yan

    2009-12-01

    This paper first analyzes the main factors that impact the layout of satellite antennas. The uniform geometrical theory of diffraction (UTD) is used to establish a mathematical model for calculating the coupling of satellite antennas and to set up the objective function of the placement optimization. The genetic algorithm incorporating high-frequency simulation to minimize antenna coupling by optimally positioning satellite antennas is described in detail. The results of antenna placement on a realistic satellite show that this method is effective in the optimal design of satellite antenna layout for the purpose of electromagnetic compatibility.

  15. Comprehensive physics-based compact model for fast p-i-n diode using MATLAB and Simulink

    NASA Astrophysics Data System (ADS)

    Xue, Peng; Fu, Guicui; Zhang, Dong

    2016-07-01

    In this study, a physics-based model for the fast p-i-n diode is proposed. The model is based on the 1-D Fourier-based solution of the ambipolar diffusion equation (ADE) implemented in MATLAB and Simulink. The physical characteristics of fast diode design concepts such as local lifetime control (LLC), emitter control (EMCON) and deep field stop are taken into account. Based on these fast diode design concepts, the ADE is solved for all injection levels instead of only high-level injection as is usually done. The variation of the high-level lifetime due to local lifetime control is also included in the solution. With the deep field stop layer taken into consideration, the depletion behavior in the N-base during reverse recovery is redescribed. Some physical effects such as avalanche generation and carrier recombination in the depletion region are also taken into account. To be self-contained, a parameter extraction method is proposed to extract all the parameters of the model. In the end, static and reverse recovery experiments on a commercial EMCON diode and an LLC diode are used to validate the proposed model. The simulation results are compared with experimental results and good agreement is obtained.
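
    For reference, the equation being solved has the standard 1-D form sketched below; a Fourier-series ansatz over the carrier storage region turns the partial differential equation into a set of ordinary differential equations for the harmonic amplitudes, which is what makes a MATLAB/Simulink implementation practical. The moving boundary positions x1(t), x2(t) and the truncation order K are generic placeholders; the paper's injection-dependent coefficients and boundary conditions are not reproduced here.

```latex
% Hedged sketch: 1-D ambipolar diffusion equation and a Fourier-series ansatz
\frac{\partial p(x,t)}{\partial t}
  = D\,\frac{\partial^{2} p(x,t)}{\partial x^{2}} - \frac{p(x,t)}{\tau},
\qquad
p(x,t) \;\approx\; p_{0}(t)
  + \sum_{k=1}^{K} p_{k}(t)\,
    \cos\!\left(\frac{k\pi\,\bigl(x - x_{1}(t)\bigr)}{x_{2}(t) - x_{1}(t)}\right)
```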

  16. Multiplexed computer-generated holograms with polygonal-aperture layouts optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gillet, Jean-Numa; Sheng, Yunlong

    2003-07-01

    Using a novel genetic algorithm (GA) with a Lamarckian search we optimize the polygonal layout of a new type of multiplexed computer-generated hologram (MCGH) with polygonal apertures. A period of the MCGH is divided into cells, and the cell is further divided into polygonal apertures according to a polygonal layout, which is to be optimized. Among an ensemble of 1.21 × 10^24 possible polygonal layouts, we take a population of 102 solutions, which are coded as chromosomes of bits, and find the optimal solution with our GA. We introduce rank-based selection with cumulative normal distribution fitness, double crossover, exponentially decreasing mutation probability and Lamarckian downhill search with a small number of offspring chromosomes into our GA, which shows a rapid convergence to the global minimum of the cost function. In a second step of optimization the phase distributions over the subholograms in the MCGH are determined with our iterative subhologram design algorithm. Our MCGH designs show large-size reconstructed images with high diffraction efficiency and low reconstruction error.
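
    Two of the operators named above can be sketched generically: rank-based selection weighted by a cumulative normal of the standardized rank, and an exponentially decreasing mutation probability. The cost array standing in for the MCGH reconstruction error and the absence of any chromosome encoding are deliberate simplifications, not the paper's implementation.

```python
# Hedged sketch of two GA operators mentioned in the abstract.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def select_parent(population, costs):
    costs = np.asarray(costs, dtype=float)
    order = np.argsort(costs)                       # rank 0 = lowest cost (best)
    ranks = np.empty(len(costs)); ranks[order] = np.arange(len(costs))
    # cumulative-normal mapping of the standardized rank gives selection weights
    weights = 1.0 - norm.cdf((ranks - ranks.mean()) / (ranks.std() + 1e-9))
    weights /= weights.sum()
    return population[rng.choice(len(population), p=weights)]

def mutation_probability(generation, p0=0.05, decay=0.02):
    return p0 * np.exp(-decay * generation)         # exponentially decreasing
```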

  17. Multiplexed computer-generated holograms with polygonal-aperture layouts optimized by genetic algorithm.

    PubMed

    Gillet, Jean-Numa; Sheng, Yunlong

    2003-07-10

    Using a novel genetic algorithm (GA) with a Lamarckian search we optimize the polygonal layout of a new type of multiplexed computer-generated hologram (MCGH) with polygonal apertures. A period of the MCGH is divided into cells, and the cell is further divided into polygonal apertures according to a polygonal layout, which is to be optimized. Among an ensemble of 1.21 × 10^24 possible polygonal layouts, we take a population of 102 solutions, which are coded as chromosomes of bits, and find the optimal solution with our GA. We introduce rank-based selection with cumulative normal distribution fitness, double crossover, exponentially decreasing mutation probability and Lamarckian downhill search with a small number of offspring chromosomes into our GA, which shows a rapid convergence to the global minimum of the cost function. In a second step of optimization the phase distributions over the subholograms in the MCGH are determined with our iterative subhologram design algorithm. Our MCGH designs show large-size reconstructed images with high diffraction efficiency and low reconstruction error.

  18. Analysis, quantification, and mitigation of electrical variability due to layout dependent effects in SOC designs

    NASA Astrophysics Data System (ADS)

    Wang, Yangang; Zwolinski, Mark; Appleby, Andrew; Scoones, Mark; Caldwell, Sonia; Azam, Touqeer; Hurat, Philippe; Pitchford, Chris

    2012-03-01

    Variability in the performance and power of 40nm and 28nm CMOS cells is highly dependent on the context in which the cells are used. In this study, the effects of context on a number of clock tree cells from standard cell libraries have been investigated. The study also demonstrates how the Litho Electrical Analyzer (LEA) tool from Cadence® is used to analyze context-dependent variability. During the study, it was observed that device characteristics including Vth, Idsat, and Ioff are significantly affected by Layout Dependent Effects (LDE), resulting in variability of the performance and power of standard cells. Moreover, dummy diffusions used as a mitigation technique offered only limited improvement for the effects of context. In addition, the cell-level variability due to stress was analyzed, suggesting that the relative variability of a cell is determined by its size and structure, and that the variability can be improved to some extent by editing the cells' structure. Based on the analysis of the physical sources and properties of LDE, this paper presents a set of layout guidelines for mitigating layout-dependent variability of 40nm and 28nm CMOS cells.

  19. OpenOrd: an open-source toolbox for large graph layout

    NASA Astrophysics Data System (ADS)

    Martin, Shawn; Brown, W. Michael; Klavans, Richard; Boyack, Kevin W.

    2011-01-01

    We document an open-source toolbox for drawing large-scale undirected graphs. This toolbox is based on a previously implemented closed-source algorithm known as VxOrd. Our toolbox, which we call OpenOrd, extends the capabilities of VxOrd to large graph layout by incorporating edge-cutting, a multi-level approach, average-link clustering, and a parallel implementation. At each level, vertices are grouped using force-directed layout and average-link clustering. The clustered vertices are then re-drawn and the process is repeated. When a suitable drawing of the coarsened graph is obtained, the algorithm is reversed to obtain a drawing of the original graph. This approach results in layouts of large graphs which incorporate both local and global structure. A detailed description of the algorithm is provided in this paper. Examples using datasets with over 600K nodes are given. Code is available at www.cs.sandia.gov/~smartin.
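
    The attraction/repulsion plus edge-cutting idea can be illustrated with a toy single-level sweep as below. OpenOrd itself uses a density grid, multilevel coarsening with average-link clustering, and an annealing-like schedule; the quantile-based edge cut, the all-pairs repulsion, and the step size here are simplifying assumptions for illustration only.

```python
# Hedged sketch of one force-directed refinement sweep with edge cutting.
import numpy as np

def layout_sweep(pos, edges, cut_fraction=0.1, step=0.05):
    pos, edges = pos.copy(), np.asarray(edges)
    vec = pos[edges[:, 1]] - pos[edges[:, 0]]
    length = np.linalg.norm(vec, axis=1)
    keep = length <= np.quantile(length, 1.0 - cut_fraction)   # "cut" longest edges
    for (a, b), d in zip(edges[keep], vec[keep]):
        pos[a] += step * d                      # attraction along kept edges
        pos[b] -= step * d
    # crude all-pairs repulsion (OpenOrd uses a density grid instead)
    diff = pos[:, None, :] - pos[None, :, :]
    dist2 = (diff ** 2).sum(-1) + 1e-6
    np.fill_diagonal(dist2, np.inf)
    pos += step * (diff / dist2[..., None]).sum(axis=1)
    return pos
```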

  20. TreePlus: interactive exploration of networks with enhanced tree layouts.

    PubMed

    Lee, Bongshin; Parr, Cynthia S; Plaisant, Catherine; Bederson, Benjamin B; Veksler, Vladislav D; Gray, Wayne D; Kotfila, Christopher

    2006-01-01

    Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive Visual Analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "Plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus.

  1. The constraints satisfaction problem approach in the design of an architectural functional layout

    NASA Astrophysics Data System (ADS)

    Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko

    2011-09-01

    A design support system with a new strategy for finding the optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice, urban design, or graphic design, wherever allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples (a single-story residential building with two apartments, each having two rooms), the results resemble realistic functional layouts. One example of a practical-size problem, a layout of three apartments with a total of 20 rooms, is demonstrated, where the generated solution can be used as a base for a realistic architectural blueprint. The discretization of the design space is discussed, followed by the application of a backtrack search algorithm used for generating a set of potentially 'good' room configurations. Next the solutions are classified by a machine learning method (FFN) as 'proper' or 'improper' according to the internal communication criteria. Examples of interactive ranking of the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal: the criteria, parameters and weights can be individually defined by a user and the search algorithm can be adjusted to a specific problem.
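
    A minimal sketch of the backtrack-search stage is shown below: rooms of known rectangular size are placed one by one on a discrete grid, rejecting placements that overlap or fall outside the bounds, and backtracking when no placement remains. The room sizes, grid, and the single-solution return are illustrative assumptions; the paper enumerates a whole set of configurations and then classifies and ranks them, which is omitted here.

```python
# Hedged sketch of backtracking placement of rectangular rooms on a grid.
def overlaps(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_rooms(rooms, grid_w, grid_h, placed=None):
    placed = placed if placed is not None else []
    if not rooms:
        return placed                              # a complete configuration
    w, h = rooms[0]
    for x in range(grid_w - w + 1):
        for y in range(grid_h - h + 1):
            rect = (x, y, w, h)
            if all(not overlaps(rect, other) for other in placed):
                result = place_rooms(rooms[1:], grid_w, grid_h, placed + [rect])
                if result is not None:
                    return result
    return None                                    # dead end: backtrack

print(place_rooms([(3, 2), (2, 2), (4, 1)], grid_w=6, grid_h=4))
```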

  2. Comparison of eight logger layouts for monitoring animal-level temperature and humidity during commercial feeder cattle transport.

    PubMed

    Goldhawk, C; Crowe, T; González, L A; Janzen, E; Kastelic, J; Pajor, E; Schwartzkopf-Genswein, K

    2014-09-01

    Measuring animal-level conditions during transit provides information regarding the true risk of environmental challenges to cattle welfare during transportation. However, due to constraints on placing loggers at the animal level, there is a need to identify appropriate proxy locations. The objective was to evaluate 8 distributions of ceiling-level loggers in the deck and belly compartments of pot-belly trailers for assessing animal-level temperature and humidity during 5 to 18 h commercial transportation of feeder cattle. Ambient conditions during transportation ranged from 3.6 to 45.2°C (20.3 ± 7.61°C, mean ± SD). When considering the entire journey, average differences between ceiling and animal-level temperatures were similar among logger layouts (P > 0.05). The uncertainty in the difference in temperature and humidity between locations was high relative to the magnitude of the difference between animal- and ceiling-level conditions. Single-logger layouts required larger adjustments to predict animal-level conditions within either compartment, during either the entire journey or when the trailer was stationary (P < 0.05). Within certain logger layouts, there were small but significant differences in the ability of regression equations to predict animal-level conditions that were associated with cattle weight and available space relative to body size. Furthermore, evaluation of logger layouts based solely on the entire journey without consideration of stationary periods did not adequately capture variability in layout performance. In conclusion, to adequately monitor animal-level temperature and humidity, 10 loggers distributed throughout the compartment was recommended over single-logger layouts within both the deck and belly compartments of pot-belly trailers transporting feeder cattle in warm weather.

  3. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
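
    The TG-43 metrics mentioned above come from the standard AAPM formalism, in which the dose rate factorizes into an air-kerma strength, a dose-rate constant, a geometry function, a radial dose function g(r), and an anisotropy function F(r,θ). The sketch below uses the point-source form of the geometry function and placeholder lookup tables; it is not SelectSeed data nor part of bGPUMCD.

```python
# Hedged sketch of the TG-43 point-source dose-rate equation:
#   D(r, theta) = S_K * Lambda * (G(r)/G(r0)) * g(r) * F(r, theta)
import numpy as np

def tg43_dose_rate(r_cm, theta_deg, S_K, Lambda, g_table, F_table):
    g = np.interp(r_cm, g_table[0], g_table[1])        # radial dose function g(r)
    F = np.interp(theta_deg, F_table[0], F_table[1])   # anisotropy at this radius
    G = 1.0 / r_cm ** 2                                 # point-source geometry function
    G0 = 1.0                                            # reference at r0 = 1 cm
    return S_K * Lambda * (G / G0) * g * F
```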

  4. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  5. Metal Optics Based nanoLEDs: In Search of a Fast, Efficient, Nanoscale Light Emitter

    NASA Astrophysics Data System (ADS)

    Eggleston, Michael Scott

    Since the invention of the laser, stimulated emission has been the de facto king of optical communication. Lasers can be directly modulated at rates as high as 50GHz, much faster than a typical solid state light-emitting diode (LED) that is limited by spontaneous emission to <1GHz. Unfortunately, lasers have a severe scaling problem; they require large cavities operated at high power to achieve efficient lasing. A properly designed LED can be made arbitrarily small and still operate with high efficiency. On-chip interconnection is an area in desperate need of a high-speed, low-power optical emitter that can enable on-chip links to replace current high-loss metal wires. In this work, I will show that by utilizing proper antenna design, a nanoLED can be created that is faster than a laser while still operating at >50% efficiency. I start by formulating an optical antenna circuit model whose elements are based entirely on antenna geometry. This allows for intuitive antenna design and suggests that rate enhancements up to ~3,000x are possible while keeping antenna efficiency >50%. Such a massive speed-up in spontaneous emission would enable an LED that can be directly modulated at 100's of GHz, much faster than any laser. I then use the circuit model to design an arch-dipole antenna, a dipole antenna with an inductive arch across the feedgap. I experimentally demonstrate a free-standing arch-dipole based nanoLED with a rate enhancement of 115x and 66% antenna efficiency. Because the emitter is InGaAsP, a common III-V material, I experimentally show that this device can be easily and efficiently coupled into an InP waveguide. Experimental coupling efficiencies up to 70% are demonstrated and directional antennas are employed that offer front-to-back emission ratios of 3:1. Finally, I show that a nanoLED can still have high quantum yield by using a transition metal dichalcogenide, WSe2, as the emitter material. By coupling a monolayer of WSe2 to a cavity

  6. A novel Fast Gas Chromatography based technique for higher time resolution measurements of speciated monoterpenes in air

    NASA Astrophysics Data System (ADS)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2013-12-01

    Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C10-C15 BVOC composition of single plant emissions may be characterised within a ~ 14 min analysis time. Moreover, in situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an ~ 11 min chromatographic separation time (increasing to ~ 19 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). This corresponds to a two- to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC linalool in ambient air. During this field deployment within a suburban forest ~ 30 km west of central Tokyo, Japan, the

  7. Fast set-based association analysis using summary data from GWAS identifies novel gene loci for human complex traits

    PubMed Central

    Bakshi, Andrew; Zhu, Zhihong; Vinkhuyzen, Anna A. E.; Hill, W. David; McRae, Allan F.; Visscher, Peter M.; Yang, Jian

    2016-01-01

    We propose a method (fastBAT) that performs a fast set-based association analysis for human complex traits using summary-level data from genome-wide association studies (GWAS) and linkage disequilibrium (LD) data from a reference sample with individual-level genotypes. We demonstrate using simulations and analyses of real datasets that fastBAT is more accurate and orders of magnitude faster than the prevailing methods. Using fastBAT, we analyze summary data from the latest meta-analyses of GWAS on 150,064–339,224 individuals for height, body mass index (BMI), and schizophrenia. We identify 6 novel gene loci for height, 2 for BMI, and 3 for schizophrenia at P_fastBAT < 5 × 10^-8. The gain of power is due to multiple small independent association signals at these loci (e.g. the THRB and FOXP1 loci for schizophrenia). The method is general and can be applied to GWAS data for all complex traits and diseases in humans and to such data in other species. PMID:27604177
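
    The general shape of such a set-based test from summary statistics can be sketched as follows: the gene-level statistic is the sum of per-SNP chi-squares, and under the null it behaves like a weighted sum of 1-df chi-squares whose weights are the eigenvalues of the SNP LD (correlation) matrix. The Satterthwaite moment-matching approximation used below is a common stand-in, not fastBAT's exact p-value computation.

```python
# Hedged sketch of a set-based association test from GWAS summary data.
import numpy as np
from scipy.stats import chi2

def set_based_p(z_scores, ld_matrix):
    T = float(np.sum(np.asarray(z_scores, dtype=float) ** 2))   # sum of chi^2(1)
    lam = np.linalg.eigvalsh(np.asarray(ld_matrix, dtype=float)) # LD eigenvalues
    scale = np.sum(lam ** 2) / np.sum(lam)                       # Satterthwaite
    dof = np.sum(lam) ** 2 / np.sum(lam ** 2)                    # moment matching
    return float(chi2.sf(T / scale, dof))
```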

  8. Analysis of Fast-Scale Bifurcation in Peak Current Controlled Buck-Boost Inverter Based on Unified Averaged Model

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Dong, Shuai; Guan, Weimin; Liu, Ye

    In this paper, a unified averaged modeling method is proposed to investigate the fast-scale period-doubling bifurcation of a full-bridge integrated buck-boost inverter with peak current control. In order to increase the resolution of the classical averaged model to half the switching frequency, the sample-and-hold effect of the inductor current is absorbed into the averaged model, i.e. the proposed unified averaged model can capture the high-frequency dynamical characteristics of the buck-boost inverter, which is both an extension and a modification of the conventional averaged model. Based on the unified model, the fast-scale bifurcation is identified, the corresponding bifurcation point is predicted with the help of the locus movement of all the poles, and the underlying mechanisms are revealed. Detailed analysis shows that the occurrence of high-frequency oscillation means fast-scale bifurcation, while the occurrence of low-frequency oscillation leads to slow-scale bifurcation. Finally, it is demonstrated that the unified averaged model can provide not only a general method to investigate both the slow- and fast-scale bifurcations in a unified framework but also a quite straightforward design-oriented method which is directly applicable.

  9. A fast initial alignment of MIMU in the two-dimension trajectory correction fuze for spinning projectile on stationary base

    NASA Astrophysics Data System (ADS)

    Wang, Qin; Li, Shi-yi; Xiao, Hong-bing; Li, Hu-quan

    2008-03-01

    In this paper, a fast initial alignment of the strapdown MIMU used in a two-dimension trajectory correction fuze is analyzed. Because the MIMU cannot work normally under the high shock experienced at firing by extended-range shells using base bleed and rocket assistance, the initial alignment of the MIMU is formulated with Bar-Itzhack and Berman's error model and its observability is analyzed. It is shown that the observability of the MIMU on a stationary base is poor. The selection of unobservable states is discussed. A Kalman filter estimation algorithm is provided, but the azimuth error converges very slowly in the initial alignment. A fast estimation method for the azimuth error is therefore proposed. It reveals that the azimuth error can be entirely estimated from the estimates of the leveling error and the leveling error rate without the gyro output signal. It is shown that the method can realize the rapid initial alignment of the MIMU.

  10. Preconceptual ABC design definition and system configuration layout: Appendix A

    SciTech Connect

    1995-03-01

    The mission of the ABC system is to destroy as effectively as possible the fissile material inserted into the core without producing any new fissile material. The contents of this report are as follows: operating conditions for the steam-cycle ABC system; flow rates and component dimensions; drawings of the ABC layout; and impact of core design parameters on containment size.

  11. 18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21' Space Command AL-2 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - SITE PLAN. DRAWING NO. AL-2 - SHEET 3 OF 21. - Cape Cod Air Station, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  12. Preconceptual ABC design definition and system configuration layout

    SciTech Connect

    Barthold, W.

    1995-03-01

    This document is the conceptual design document for the follow-on to the Molten Salt Breeder Reactor, known as the ABC type reactor. It addresses blanket design options, containment options, off-gas systems, drainage systems, and components/layouts of the primary, secondary, and tertiary systems, and it contains a number of diagrams for the configuration of the major systems.

  13. Multiple Regression in a Two-Way Layout.

    ERIC Educational Resources Information Center

    Lindley, Dennis V.

    This paper discusses Bayesian m-group regression where the groups are arranged in a two-way layout into m rows and n columns, there still being a regression of y on the x's within each group. The mathematical model is then provided as applied to the case where the rows correspond to high schools and the columns to colleges: the predictor variables…

  14. Photocopy of original drawing showing Building 3 layout (drawing located ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of original drawing showing Building 3 layout (drawing located at NAWS China Lake, Division of Public Works). J.T. STAFFORD-J.H. DAVIES-H.L. GOGERTY: DISPENSARY, CONNECTING CORRIDORS, FLOOR PLAN, ELEVATIONS, AND DETAILS - Naval Ordnance Test Station Inyokern, Dispensary, Main Site, Lauritsen Road at McIntyre Street, Ridgecrest, Kern County, CA

  15. Performance Analysis of Intelligent Robust Facility Layout Design

    NASA Astrophysics Data System (ADS)

    Moslemipour, G.; Lee, T. S.; Loong, Y. T.

    2017-03-01

    Design of a robust production facility layout with minimum handling cost (MHC) presents an appropriate approach to tackle facility layout problems in a dynamic volatile environment, in which product demands randomly change in each planning period. The objective of the design is to find the robust facility layout with minimum total material handling cost over the entire multi-period planning horizon. This paper proposes a new mathematical model for designing robust machine layout in the stochastic dynamic environment of manufacturing systems using quadratic assignment problem (QAP) formulation. In this investigation, product demands are assumed to be normally distributed random variables with known expected value, variance, and covariance that randomly change from period to period. The proposed model was verified and validated using randomly generated numerical data and benchmark examples. The effect of dependent product demands and varying interest rate on the total cost function of the proposed model has also been investigated. Sensitivity analysis on the proposed model has been performed. Dynamic programming and simulated annealing optimization algorithms were used in solving the modeled example problems.
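
    As a minimal sketch of the solution stage, the quadratic-assignment cost (material flow between departments weighted by the distance between their assigned locations) can be minimized with simulated annealing over permutations. The flow and distance matrices, the cooling schedule, and the pairwise-swap neighborhood below are illustrative assumptions, not the paper's stochastic multi-period demand model.

```python
# Hedged sketch: QAP cost and a simulated-annealing search over assignments.
import numpy as np

rng = np.random.default_rng(0)

def qap_cost(perm, flow, dist):
    # sum_{i,j} flow[i, j] * dist[perm[i], perm[j]]
    return float(np.sum(flow * dist[np.ix_(perm, perm)]))

def anneal(flow, dist, T0=10.0, cooling=0.995, iters=20000):
    n = len(flow)
    perm = rng.permutation(n)
    best, best_cost, T = perm.copy(), qap_cost(perm, flow, dist), T0
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        cand = perm.copy(); cand[i], cand[j] = cand[j], cand[i]
        delta = qap_cost(cand, flow, dist) - qap_cost(perm, flow, dist)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            perm = cand
            if qap_cost(perm, flow, dist) < best_cost:
                best, best_cost = perm.copy(), qap_cost(perm, flow, dist)
        T *= cooling
    return best, best_cost
```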

  16. 21. Historic drawing, Marine Railway. Equalizing Gear Layout, 1917. Photographic ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. Historic drawing, Marine Railway. Equalizing Gear Layout, 1917. Photographic copy of original. Boston National Historical Park Archives, Charlestown Navy Yard. BOSTS 13439, #551-4 - Charlestown Navy Yard, Marine Railway, Between Piers 2 & 3, on Charlestown Waterfront at west end of Navy Yard, Boston, Suffolk County, MA

  17. 26. Historic drawing, Marine Railway. Layout of Hauling Machinery, Building ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. Historic drawing, Marine Railway. Layout of Hauling Machinery, Building 24, 1917. Photographic copy of original. Boston National Historical Park Archives, Charlestown Navy Yard. BOSTS 13439, #551-15 - Charlestown Navy Yard, Marine Railway, Between Piers 2 & 3, on Charlestown Waterfront at west end of Navy Yard, Boston, Suffolk County, MA

  18. How to Choose the Network Layout That's Right for You.

    ERIC Educational Resources Information Center

    Farmer, Lesley S. J.

    1995-01-01

    Examines major network structures and provides criteria for selecting the best layout. Describes daisy-chain, star, bus, tree, ring, and token ring topologies. Discusses factors to consider when connecting computers, including user needs, printing demands, modem and file sharing, user grouping, the physical plant, and connective equipment. (AEF)

  19. IET exhaust gas duct, system layout, plan, and section. shows ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET exhaust gas duct, system layout, plan, and section. Shows mounting brackets, concrete braces, divided portion of duct, other details. Ralph M. Parsons 902-5-ANP-712-S 429. Date: May 1954. Approved by INEEL Classification Office for public release. INEEL index code no. 035-0712-60-693-106980 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  20. Layout of barracks, looking 13 degrees north-northeast, with Building No. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Layout of barracks, looking 13 degrees north-northeast, with Building No. 909 (on left) and Building No. 905 (on right) - Presidio of San Francisco, Enlisted Men's Barracks Type, West end of Crissy Field, between Pearce & Maudlin Streets, San Francisco, San Francisco County, CA

  1. Improving Parallel I/O Performance with Data Layout Awareness

    SciTech Connect

    Chen, Yong; Sun, Xian-He; Thakur, Dr. Rajeev; Song, Huaiming; Jin, Hui

    2010-01-01

    Parallel applications can benefit greatly from massive computational capability, but their performance suffers from the large latency of I/O accesses. Poor I/O performance has been identified as a critical cause of the low sustained performance of parallel computing systems. In this study, we propose a data layout-aware optimization strategy to promote a better integration of the parallel I/O middleware and parallel file systems, two major components of current parallel I/O systems, and to improve data access performance. We explore the layout-aware optimization in both independent I/O and collective I/O, the two primary forms of I/O in parallel applications. We illustrate that the layout-aware I/O optimization can effectively improve the performance of the current parallel I/O strategy. The experimental results verify that the proposed strategy could improve parallel I/O performance by nearly 40% on average. The proposed layout-aware parallel I/O has a promising potential in improving the I/O performance of parallel systems.

  2. Fuzzy pattern matching techniques for photomask layout data

    NASA Astrophysics Data System (ADS)

    Kato, Kokoro; Taniguchi, Yoshiyuki; Nishizawa, Kuninori

    2013-06-01

    Pattern matching is a promising technique for the mask industry. It can be used for many applications, such as hot-spot detection in post-OPC data, search for AIMS reference locations, or CD-SEM measurement point extraction. In particular, fuzzy pattern matching is needed for mask data processing because the mask layout has different derivatives generated by OPC, and there are many similar "OPC brothers" that come from the same layout. However, applying fuzzy pattern matching to the mask layout is challenging for reasons related to the characteristics of photomask data. In this paper we introduce a novel method of fuzzy pattern matching to cope with the issues that arise from the characteristics of mask data. The rule specification is quite simple: we only need to specify a single tolerance value for each edge displacement. We show experimental results using an actual mask layout and demonstrate that the calculation speed and quality of the proposed technique are satisfactory from the viewpoint of realistic MDP processing.
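    The abstract does not spell out the matching algorithm itself; as a toy, hedged illustration of tolerance-based ("fuzzy") comparison, the sketch below declares two rectilinear polygons a match when every vertex is displaced by no more than a single tolerance value. The vertex-list representation and the tolerance value are assumptions, not the paper's data model.

```python
# Toy sketch of tolerance-based ("fuzzy") polygon comparison: two polygons
# match if every corresponding vertex is displaced by no more than a single
# tolerance value. The vertex-list representation is an assumption.

def fuzzy_match(poly_a, poly_b, tol=2.0):
    """True if poly_b is poly_a with every vertex moved by <= tol in x and y."""
    if len(poly_a) != len(poly_b):
        return False
    return all(abs(ax - bx) <= tol and abs(ay - by) <= tol
               for (ax, ay), (bx, by) in zip(poly_a, poly_b))

# Example: an OPC-decorated variant of the same feature, jogged by about 1 unit.
reference = [(0, 0), (100, 0), (100, 40), (0, 40)]
candidate = [(0, 1), (101, 1), (101, 41), (0, 41)]
print(fuzzy_match(reference, candidate, tol=2.0))   # True
print(fuzzy_match(reference, candidate, tol=0.5))   # False
```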

  3. 13. Historic drawing of rocket engine test facility layout, including ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. Historic drawing of rocket engine test facility layout, including Buildings 202, 205, 206, and 206A, February 3, 1984. NASA GRC drawing number CF-101539. On file at NASA Glenn Research Center. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  4. Developing a Web Page: Ethics, Prerequisites, Design and Layout.

    ERIC Educational Resources Information Center

    Scarcella, Joseph A.; Lane, Kenneth E.

    For educators interested in developing Web sites, four major issues should be addressed--ethics, prerequisites, design, and layout. By giving attention to these four areas, teachers will develop Web sites that improve their teaching and increase the opportunities for student learning. Each of these areas is addressed in detail, including:…

  5. A fast nonlinear conjugate gradient based method for 3D concentrated frictional contact problems

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Vollebregt, Edwin A. H.; Oosterlee, Cornelis W.

    2015-05-01

    This paper presents a fast numerical solver for a nonlinear constrained optimization problem arising from 3D concentrated frictional shift and rolling contact problems with dry Coulomb friction. The solver combines an active set strategy with a nonlinear conjugate gradient method. One novelty is to consider the tractions of each slip element in a polar coordinate system, using azimuth angles as variables instead of conventional traction variables. The new variables are scaled by the diagonal of the underlying Jacobian. The fast Fourier transform (FFT) technique accelerates all matrix-vector products encountered, exploiting the matrices' Toeplitz structure. Numerical tests demonstrate a significant reduction of the computational time compared to existing solvers for concentrated contact problems.
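    The FFT acceleration mentioned above relies on the standard circulant-embedding trick for Toeplitz matrix-vector products; a minimal NumPy sketch of that trick (not the authors' solver) is shown below.

```python
# Minimal sketch of the FFT trick for Toeplitz matrix-vector products: embed
# the Toeplitz matrix in a circulant matrix of twice the size and multiply in
# the Fourier domain, giving an O(n log n) product.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(first_col, first_row, x):
    n = len(x)
    # First column of the size-2n circulant embedding (one zero as padding).
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real

first_col = np.array([4.0, 1.0, 0.5, 0.25])
first_row = np.array([4.0, 2.0, 1.0, 0.5])
x = np.random.rand(4)
assert np.allclose(toeplitz_matvec(first_col, first_row, x),
                   toeplitz(first_col, first_row) @ x)
```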

  6. New, dense, and fast scintillators based on rare-earth tantalo-niobates

    NASA Astrophysics Data System (ADS)

    Voloshyna, O. V.; Boiaryntseva, I. A.; Baumer, V. N.; Ivanov, A. I.; Korjik, M. V.; Sidletskiy, O. Ts.

    2014-11-01

    Samples of undoped yttrium and gadolinium tantalo-niobates with the common formula RE(NbxTa1-x)O4, where RE = Y or Gd and x = 0-1, have been obtained by solid-state reaction. A systematic study of the structural, luminescent, and scintillation properties of these compounds was carried out. Lattice parameters and space groups of the mixed compounds were identified. UV- and X-ray luminescence spectra, as well as relative light outputs and scintillation decay times, were measured. The gadolinium tantalo-niobate with the formula GdNb0.2Ta0.8O4 showed a light output around 13 times that of PbWO4 and fast decay with a 12 ns time constant without an additional slow component. Gadolinium tantalo-niobates may be considered promising materials for high-energy physics due to their extremely high density, substantial light output, and fast decay.

  7. Ground-based complex for detection and investigation of fast optical transients in wide field

    NASA Astrophysics Data System (ADS)

    Molinari, Emilio; Beskin, Grigory; Bondar, Sergey; Karpov, Sergey; Plokhotnichenko, Vladimir; de-Bur, Vjacheslav; Greco, Guiseppe; Bartolini, Corrado; Guarnieri, Adriano; Piccioni, Adalberto

    2008-07-01

    To study short stochastic optical flares of different objects (GRBs, SNe, etc.) with unknown localizations, as well as NEOs, it is necessary to monitor large regions of sky with high time resolution. We developed a system consisting of a wide-field camera (FOV of 400-600 sq. deg.) using a TV-CCD with a time resolution of 0.13 s to record and classify optical transients, and a fast robotic telescope aimed at performing their spectroscopic and photometric investigation just after detection. Such a two-telescope complex, TORTOREM, combining the wide-field camera TORTORA and the robotic telescope REM, has operated since May 2006 at the La Silla ESO observatory. Some results of its operation, including the first fast-time-resolution study of an optical transient accompanying a GRB and the discovery of its fine time structure, are presented. Prospects for improving the complex's efficiency are given.

  8. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and application of micro/nanosatellites, a small, high-accuracy satellite attitude determination system is needed, because the star trackers widely used on large satellites are large and heavy and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture sun sensor composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun-spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in the harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels.
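    The FMMS algorithm itself is not given in the abstract; the sketch below shows only a generic single-point mean-shift centroid refinement on a synthetic spot image, as a hedged stand-in for how a spot centroid can be located to sub-pixel accuracy.

```python
# Generic mean-shift centroid refinement on a synthetic sun-spot image; this is
# an illustrative stand-in, not the paper's multi-point FMMS algorithm.
import numpy as np

def mean_shift_centroid(img, x0, y0, radius=4, iters=20, eps=1e-4):
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x, y = float(x0), float(y0)
    for _ in range(iters):
        mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2   # flat kernel window
        w = img * mask
        total = w.sum()
        if total == 0:
            break
        nx, ny = (w * xs).sum() / total, (w * ys).sum() / total
        done = abs(nx - x) < eps and abs(ny - y) < eps
        x, y = nx, ny
        if done:
            break
    return x, y

# Synthetic Gaussian spot centred at (12.3, 7.8); start from a coarse guess.
ys, xs = np.mgrid[0:32, 0:32]
spot = np.exp(-((xs - 12.3) ** 2 + (ys - 7.8) ** 2) / 2.0)
print(mean_shift_centroid(spot, 10, 10))   # approximately (12.3, 7.8)
```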

  9. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and application of micro/nanosatellites, a small, high-accuracy satellite attitude determination system is needed, because the star trackers widely used on large satellites are large and heavy and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture sun sensor composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun-spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in the harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  10. Computer-Based Video Instruction to Teach Students with Intellectual Disabilities to Verbally Respond to Questions and Make Purchases in Fast Food Restaurants

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Pridgen, Leslie S.; Cronin, Beth A.

    2005-01-01

    Computer-based video instruction (CBVI) was used to teach verbal responses to questions presented by cashiers and purchasing skills in fast food restaurants. A multiple probe design across participants was used to evaluate the effectiveness of CBVI. Instruction occurred through simulations of three fast food restaurants on the computer using video…

  11. Fast mode decision based on human noticeable luminance difference and rate distortion cost for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Li, Mian-Shiuan; Chen, Mei-Juan; Tai, Kuang-Han; Sue, Kuen-Liang

    2013-12-01

    This article proposes a fast mode decision algorithm based on the correlation between the just-noticeable-difference (JND) and the rate-distortion cost (RD cost) to reduce the computational complexity of H.264/AVC. First, the relationship between the average RD cost and the number of JND pixels is modeled by Gaussian distributions. The RD cost of the Inter 16 × 16 mode is then compared with the thresholds predicted by these models for fast mode selection. In addition, we use the image content, the residual data, and the JND visual model for horizontal/vertical detection, and then use the result to predict the partitioning of a macroblock. The experimental results show that a substantial time saving is achieved while the proposed algorithm effectively maintains performance and quality.
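    As a toy illustration of threshold-based early termination (not the paper's actual model), the sketch below compares the Inter 16 × 16 RD cost against a threshold predicted from the number of JND pixels in a macroblock; the linear model and its coefficients are invented for illustration.

```python
# Toy illustration of threshold-based early termination in mode decision: a
# pre-fitted model maps the number of JND pixels in a macroblock to an expected
# RD-cost threshold; if the Inter 16x16 RD cost is below it, smaller partitions
# are skipped. The model form and coefficients below are made up.
def predicted_threshold(jnd_pixel_count, a=35.0, b=1.8):
    # Hypothetical linear model, fitted offline from training sequences.
    return a + b * jnd_pixel_count

def choose_partition(rd_cost_16x16, jnd_pixel_count):
    if rd_cost_16x16 <= predicted_threshold(jnd_pixel_count):
        return "Inter 16x16 (early termination)"
    return "evaluate smaller partitions (16x8, 8x16, 8x8, ...)"

print(choose_partition(rd_cost_16x16=120.0, jnd_pixel_count=30))
print(choose_partition(rd_cost_16x16=240.0, jnd_pixel_count=30))
```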

  12. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented graphics processing unit (GPU) accelerated compressive sensing (CS) spectral domain optical coherence tomography (SD OCT) for data that are non-uniform in k-space. The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture with three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2048 (axial) × 1000 (lateral).
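    A minimal 1-D sketch of gridding-based NUFFT with a Gaussian kernel is given below; it only illustrates the general technique (spread onto an oversampled grid, FFT, deconvolve), with 2x oversampling and kernel parameters chosen for the example, not the paper's GPU implementation or its Kaiser-Bessel kernel.

```python
# Minimal 1-D sketch of gridding-based NUFFT (type 1): nonuniform samples are
# spread onto an oversampled uniform grid with a Gaussian kernel, transformed
# with an ordinary FFT, and deconvolved by the kernel's Fourier transform.
# Oversampling ratio, kernel width, and tau are illustrative choices.
import numpy as np

def nufft1d_type1(x, c, M, Msp=12):
    """Approximate F[k] = sum_j c[j]*exp(-1j*k*x[j]) for k = -M/2..M/2-1."""
    R = 2                       # oversampling ratio
    Mr = R * M                  # oversampled grid size
    h = 2 * np.pi / Mr          # grid spacing
    tau = np.pi * Msp / (M * M * R * (R - 0.5))
    grid = np.zeros(Mr, dtype=complex)
    for xj, cj in zip(x, c):    # spreading (convolution with the kernel)
        m0 = int(round(xj / h))
        m = np.arange(m0 - Msp, m0 + Msp + 1)
        grid[m % Mr] += cj * np.exp(-((xj - m * h) ** 2) / (4 * tau))
    Fk = np.fft.fftshift(np.fft.fft(grid)) / Mr            # Fourier coefficients
    k = np.arange(-M // 2, M // 2)
    centre = Fk[Mr // 2 - M // 2 : Mr // 2 + M // 2]
    return centre * np.sqrt(np.pi / tau) * np.exp(k ** 2 * tau)   # deconvolve

# Compare against the direct (slow) nonuniform DFT on a small problem.
rng = np.random.default_rng(0)
M, N = 64, 200
x = rng.uniform(0, 2 * np.pi, N)
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
k = np.arange(-M // 2, M // 2)
direct = np.exp(-1j * np.outer(k, x)) @ c
print(np.max(np.abs(nufft1d_type1(x, c, M) - direct)))   # tiny, ~1e-9 or smaller
```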

  13. A New Computer Program for Plant Layout Design - OPDEP Optimal Plant Design and Evaluation Program.

    DTIC Science & Technology

    1980-01-01

    systematized technique of plant layout can be traced to "Systematic Layout Planning" by Richard Muther (1). He attempts to provide procedures with sufficient...11). This is the same basic assumption stated by Muther (1) in his previously mentioned book on plant layout. The author acknowledges that this may be...analysis and reasoning towards some optimum solution. REFERENCES: 1. Muther, R. Systematic Layout Planning. Industrial Education Institute, Boston.

  14. Real-time fMRI data analysis using region of interest selection based on fast ICA

    NASA Astrophysics Data System (ADS)

    Xie, Baoquan; Ma, Xinyue; Yao, Li; Long, Zhiying; Zhao, Xiaojie

    2011-03-01

    Real-time functional magnetic resonance imaging (rtfMRI) is a technique that can present (feed back) brain activity during scanning. Through fast acquisition and online analysis of the BOLD signal, fMRI data are processed within one TR. Current rtfMRI provides an activation map under a specific task, mainly through GLM analysis, to select a region of interest (ROI). This study was based on independent component analysis (ICA) and used the result of fast ICA analysis to select a node of the functional network as the ROI. Real-time brain activity within the ROI was presented to the subject, who needed to find strategies to control his brain activity. The whole real-time process involved three parts: pre-processing (including head motion correction and smoothing), fast ICA analysis, and feedback. In addition, the result of fast head motion correction was also presented to the experimenter as a curve diagram. Based on the above analysis processes, a real-time feedback experiment with a motor imagery task was performed. An overt finger-movement task was adopted as a localizer session for ICA analysis to obtain the motor network. The supplementary motor area (SMA) in this network was selected as the ROI. During the feedback session, the average of the BOLD signals within the ROI was presented to the subjects for self-regulation under a motor imagery task. In this experiment, the TR was 1.5 seconds, and the whole time of processing and presentation was within 1 second. The experimental results not only showed that the SMA was controllable but also proved that the analysis method was effective.
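    As a hedged stand-in for the ICA-based ROI selection described above, the sketch below runs scikit-learn's FastICA on simulated voxel time courses and picks the component most correlated with a task regressor; the simulated data and the selection rule are assumptions for illustration, not the study's pipeline.

```python
# Hedged sketch: FastICA (scikit-learn) separates simulated voxel time courses
# into independent components; the one best matching a task reference is kept,
# and the voxels weighting strongly on it form a candidate ROI.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.arange(200)
task = (t % 40 < 20).astype(float)             # simulated block-design regressor
s1 = task + 0.1 * rng.standard_normal(200)     # task-related source
s2 = np.sin(t / 7.0) + 0.1 * rng.standard_normal(200)   # nuisance source
S = np.c_[s1, s2]
A = rng.standard_normal((2, 50))               # mixing onto 50 "voxels"
X = S @ A                                      # time-by-voxel data (200 x 50)

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)              # temporal components (200 x 2)
corr = [abs(np.corrcoef(components[:, i], task)[0, 1]) for i in range(2)]
task_comp = int(np.argmax(corr))
roi_voxels = np.argsort(np.abs(ica.mixing_[:, task_comp]))[-10:]
print(task_comp, roi_voxels)
```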

  15. Fast O2 Binding at Dicopper Complexes Containing Schiff-Base Dinucleating Ligands

    PubMed Central

    Company, Anna; Gómez, Laura; Mas-Ballesté, Rubén; Korendovych, Ivan V.; Ribas, Xavi; Poater, Albert; Parella, Teodor; Fontrodona, Xavier; Benet-Buchholz, Jordi; Solà, Miquel; Que, Lawrence; Rybak-Akimova, Elena; Costas, Miquel

    2008-01-01

    A new family of dicopper(I) complexes [CuI2RL](X)2, (R = H, 1X, R = tBu, 2X and R = NO2, 3X, X = CF3SO3, ClO4, SbF6 or BArF, BArF = [B{3,5-(CF3)2-C6H3}4]−), where RL is a Schiff-base ligand containing two tridentate binding sites linked by a xylyl spacer, have been prepared, characterized, and their reaction with O2 studied. The complexes were designed with the aim of reproducing structural aspects of the active site of type 3 dicopper proteins; they contain two three-coordinate copper sites and a rather flexible podand ligand backbone. The solid state structures of 1ClO4, 2CF3SO3, 2ClO4 and 3BArF·CH3CN have been established by single crystal X-ray diffraction analysis. 1ClO4 adopts a polymeric structure in solution while 2CF3SO3, 2ClO4 and 3BArF·CH3CN are monomeric. The complexes have been studied in solution by means of 1H and 19F NMR spectroscopy, which revealed dynamic processes in solution. 1-3BArF and 1-3CF3SO3 in acetone react rapidly with O2 to generate metastable [CuIII2(μ-O)2(RL)]2+ 1-3(O2) and [CuIII2(μ-O)2(CF3SO3)(RL)]+ 1-3(O2)(CF3SO3) species, respectively, which have been characterized by UV-vis spectroscopy and resonance Raman analysis. Instead, reaction of 1-3BArF with O2 in CH2Cl2 results in intermolecular O2 binding. DFT methods have been used to study the chemical identities and structural parameters of the O2 adducts, and the relative stability of the CuIII2(μ-O)2 form with respect to the CuII2(μ-η2: η2-peroxo) isomer. The reaction of 1X, X = CF3SO3 and BArF, with O2 in acetone has been studied by stopped-flow methods, revealing an unexpectedly fast reaction rate (k = 3.82(4) × 103 M−1s−1, ΔH‡ = 4.9 ± 0.5 kJ·mol−1, ΔS‡ = −148 ± 5 J·K−1·mol−1), nearly three orders of magnitude faster than in the parent [CuI2(m-XYLMeAN)]2+. Thermal decomposition of 1-3(O2) does not result in aromatic hydroxylation. The mechanism and kinetics of O2 binding to 1X (X = CF3SO3 and BArF) are discussed and compared with those

  16. Layout optimization with algebraic multigrid methods

    NASA Technical Reports Server (NTRS)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is, the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm, where the levels are visited sequentially, we propose an 'additive' variant of AMG in which levels may be treated in parallel and which is suitable as a preconditioner in the CG algorithm.
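    A minimal sketch of the relative-placement formulation is given below: connected cells contribute graph-Laplacian terms to a sparse SPD system, fixed pins add diagonal anchor terms, and the system is solved with conjugate gradients via SciPy. The tiny netlist and pin positions are invented, and no overlap constraints or AMG components are included.

```python
# Sketch of "relative placement" as a quadratic minimization: connected cells
# pull together via a graph Laplacian, a few fixed pins anchor the system, and
# the resulting sparse SPD linear system is solved with conjugate gradients.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 5                                  # movable cells
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]     # two-pin nets
pins = {0: 0.0, 4: 10.0}               # cells attached to fixed pad x-positions

rows, cols, vals = [], [], []
b = np.zeros(n)
diag = np.zeros(n)
for i, j in edges:                     # Laplacian of the netlist graph
    diag[i] += 1; diag[j] += 1
    rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
for i, xpin in pins.items():           # anchor terms make the system nonsingular
    diag[i] += 1
    b[i] += xpin
rows += list(range(n)); cols += list(range(n)); vals += list(diag)
A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

x, info = cg(A, b)                     # optimal x-coordinates of the cells
print(info, x)
```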

  17. Development of SiPM-based scintillator tile detectors for a multi-layer fast neutron tracker

    NASA Astrophysics Data System (ADS)

    Preston, R.; Jakubek, J.; Prokopovich, D.; Uher, J.

    2012-10-01

    We are developing thin tile scintillator detectors with silicon photomultiplier (SiPM) readout for use in a multi-layer fast-neutron tracker. The tracker is based on interleaved Timepix and plastic scintillator layers. The thin 15 × 15 × 2 mm plastic scintillators require suitable optical readout in order to detect and measure the energy lost by energetic protons that have been recoiled by fast neutrons. Our first prototype used dual SiPMs, coupled to opposite edges of the scintillator tile using light-guides. An alternative readout geometry was designed in an effort to increase the fraction of scintillation light detected by the SiPMs. The new prototype uses a larger SiPM array to cover the entire top face of the tile. This paper details the comparative performance of the two prototype designs. A deuterium-tritium (DT) fast-neutron source was used to compare the relative light collection efficiency of the two designs. A collimated UV light source was scanned across the detector face to map the uniformity. The new prototype was found to have 9.5 times better light collection efficiency over the original design. Both prototypes exhibit spatial non-uniformity in their response. Methods of correcting this non-uniformity are discussed.

  18. Fast freeze-drying cycle design and optimization using a PAT based on the measurement of product temperature.

    PubMed

    Bosca, Serena; Barresi, Antonello A; Fissore, Davide

    2013-10-01

    This paper focuses on the use of an innovative Process Analytical Technology for the fast design and optimization of freeze-drying cycles for pharmaceuticals. The tool is based on a soft sensor, a device that uses the experimental measurement of product temperature during freeze-drying, a mathematical model of the process, and the Extended Kalman Filter algorithm to estimate the sublimation flux, the residual amount of ice in the vial, and some model parameters (heat and mass transfer coefficients). The accuracy of the estimates provided by the soft sensor is shown using, as test cases, aqueous solutions containing different excipients (sucrose, polyvinylpyrrolidone) processed at various operating conditions, pointing out that the soft sensor allows fast estimation of model parameters and product dynamics without requiring expensive hardware or time-consuming analysis. The possibility of using the soft sensor to calculate in-line (or off-line) the design space of the primary drying phase is presented and discussed. The results show that, in this way, it is possible to identify the values of the heating-fluid temperature that keep the product temperature below its limit value, as well as the operating conditions that maximize the sublimation flux. Various experiments have been carried out to test the effectiveness of the proposed approach for fast cycle design, showing that drying time can be significantly reduced without impairing product quality.
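    The soft sensor couples a process model with the Extended Kalman Filter; the sketch below shows only a generic, heavily simplified EKF that estimates a temperature and an unknown heat-transfer coefficient from noisy readings, under a toy first-order model that is an assumption of this example, not the freeze-drying model used in the paper.

```python
# Generic, simplified EKF sketch: estimate temperature T and an unknown
# heat-transfer coefficient K from noisy temperature readings, with the toy
# model dT/dt = K*(T_shelf - T). A stand-in for the soft-sensor idea only.
import numpy as np

dt, T_shelf, K_true = 1.0, 263.0, 0.05
rng = np.random.default_rng(2)

x = np.array([230.0, 0.02])            # state estimate [T, K]
P = np.diag([4.0, 1e-3])               # state covariance
Q = np.diag([1e-3, 1e-7])              # process noise
Rm = 0.25                              # measurement noise variance
T_real = 230.0
for _ in range(300):
    # Simulated plant and noisy sensor reading.
    T_real += dt * K_true * (T_shelf - T_real)
    z = T_real + rng.normal(0.0, 0.5)
    # EKF predict: nonlinear state transition and its Jacobian F.
    T, K = x
    x = np.array([T + dt * K * (T_shelf - T), K])
    F = np.array([[1.0 - dt * K, dt * (T_shelf - T)],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # EKF update with measurement z = T + noise (H = [1, 0]).
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + Rm
    Kg = P @ H.T / S
    x = x + (Kg * (z - x[0])).ravel()
    P = (np.eye(2) - Kg @ H) @ P
print(x)    # estimated [T, K]; K should approach ~0.05
```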

  19. SU-F-BRD-07: Fast Monte Carlo-Based Biological Optimization of Proton Therapy Treatment Plans for Thyroid Tumors

    SciTech Connect

    Wan Chan Tseung, H; Ma, J; Ma, D; Beltran, C

    2015-06-15

    Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically-applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically-optimized plans were compared. Results: The mean target physical and biological doses from our biologically-optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created a MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target’s physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
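    The conversion from LETd to biological dose described above uses a linear relation; a toy sketch of such a voxel-wise conversion is shown below, assuming a relation of the form BD = D·(a + b·LETd) with placeholder coefficients rather than the values fitted from cellular irradiation data in the paper.

```python
# Toy illustration of converting physical dose and dose-averaged LET (LETd)
# maps into a biological dose (BD) map via an assumed linear relation
# BD = D * (a + b * LETd). Coefficients a and b are placeholders.
import numpy as np

a, b = 1.0, 0.04                                # hypothetical model coefficients
dose = np.array([[2.0, 1.8], [1.9, 2.1]])       # Gy, per voxel
letd = np.array([[2.5, 3.0], [4.0, 6.5]])       # keV/um, per voxel
bd = dose * (a + b * letd)
print(bd)
```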

  20. Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface

    NASA Astrophysics Data System (ADS)

    Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai

    2016-07-01

    A novel polarization-insensitive metasurface with over 25 dB of monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface is composed of carefully arranged unit cells with spatially varying dimensions, which enables approximately uniform diffusion of the incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method for conventional antenna-array pattern synthesis is applied to find the best arrangement of unit-cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS-reduction behavior predicted by the full-wave simulation software Ansys HFSS, and excellent agreement is observed.
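    As a hedged, one-dimensional illustration of iterative-FFT (alternating projection) phase-only synthesis, the sketch below drives the far-field magnitude of a 1-bit phase array toward a flat, diffusion-like target; the array size, target, and phase constraint are assumptions for this example, not the paper's metasurface model.

```python
# Hedged sketch of iterative-FFT (alternating projection) phase-only synthesis:
# phases of a 1-D array of unit-amplitude elements are adjusted so the far-field
# pattern (computed by FFT) approaches a flat, diffusion-like target, lowering
# the specular peak relative to a uniform-phase aperture.
import numpy as np

N, NFFT, iters = 64, 512, 200
rng = np.random.default_rng(3)
phase = rng.choice([0.0, np.pi], size=N)         # 1-bit unit-cell phases
target = np.full(NFFT, np.sqrt(N))               # flat (diffuse) magnitude target

for _ in range(iters):
    excite = np.exp(1j * phase)
    far = np.fft.fft(excite, NFFT)               # array factor samples
    far = target * np.exp(1j * np.angle(far))    # impose the desired magnitude
    back = np.fft.ifft(far)[:N]                  # back-project to the aperture
    phase = np.where(back.real >= 0, 0.0, np.pi) # re-quantize to 1-bit phases

pattern = 20 * np.log10(np.abs(np.fft.fft(np.exp(1j * phase), NFFT)) + 1e-12)
# Peak suppression vs. a uniform-phase aperture, in dB (roughly 10 dB here).
print(20 * np.log10(N) - pattern.max())
```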