Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems
NASA Astrophysics Data System (ADS)
Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.
2008-08-01
This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.
AutoBayes Program Synthesis System Users Manual
NASA Technical Reports Server (NTRS)
Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd
2008-01-01
Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
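The symbolic, closed-form solutions that AutoBayes favors over numeric approximation can be illustrated with an elementary parameter estimation problem (a hedged sketch of ours in Python, not the C/C++ that AutoBayes actually generates): for a Gaussian model, the parameters maximizing the likelihood have exact formulas, so no iterative optimizer is needed.

```python
def gaussian_mle(data):
    """Closed-form maximum-likelihood estimates for a Gaussian model.

    Illustrates the kind of estimator a synthesis system can derive
    symbolically: the mean and variance that maximize the probability
    of the observed data have exact formulas.
    """
    n = len(data)
    mu = sum(data) / n                           # MLE of the mean
    var = sum((x - mu) ** 2 for x in data) / n   # MLE of the variance
    return mu, var

mu, var = gaussian_mle([1.0, 2.0, 3.0, 4.0])
# mu = 2.5, var = 1.25
```

Because the estimates are closed-form, the generated code is compact and avoids the convergence and accuracy issues of numeric approximation.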
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes's schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible.
This is a major advantage over other statistical data analysis systems, which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
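The mixture-model segmentation mentioned above rests on an EM schema. A minimal illustrative sketch (ours, in Python; AutoBayes itself instantiates schemas into optimized C/C++) of EM for a two-component one-dimensional Gaussian mixture, the simplest form of intensity-based image segmentation:

```python
import math

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM -- the kind of
    segmentation algorithm an EM schema instantiates."""
    mu = [min(xs), max(xs)]          # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: closed-form parameter updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, w
```

On pixel intensities, the two recovered components separate nebula from background; the M-step updates are themselves closed-form, which is exactly what schema-based synthesis exploits.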
Subotin, Michael; Davis, Anthony R
2016-09-01
Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
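The runtime re-scoring step can be sketched as an iterative blend of each code's confidence with co-occurrence evidence from the other currently likely codes. This is a hypothetical toy of ours, not the authors' algorithm: the blending weight `alpha`, the 0.5 likelihood threshold, and the data are illustrative assumptions.

```python
def rescore(scores, cooc, iters=5, alpha=0.3):
    """Iteratively adjust auto-coder confidence scores using estimated
    conditional co-occurrence probabilities P(code | other code).

    scores: {code: confidence in [0, 1]}
    cooc:   {(given_code, code): P(code assigned | given_code assigned)}
    """
    s = dict(scores)
    for _ in range(iters):
        new = {}
        for c in s:
            # codes currently considered likely, other than c
            others = [p for p in s if p != c and s[p] > 0.5]
            if others:
                support = sum(cooc.get((p, c), 0.5) for p in others) / len(others)
            else:
                support = s[c]          # no evidence: keep the score
            new[c] = (1 - alpha) * s[c] + alpha * support
        s = new
    return s

scores = {"A": 0.9, "B": 0.6}
cooc = {("A", "B"): 0.1, ("B", "A"): 0.9}   # A and B rarely co-occur given A
result = rescore(scores, cooc)
```

With these toy numbers, code B's score is pulled below threshold because it rarely co-occurs with the confidently assigned code A, while A itself is unaffected.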
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. 
The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
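The definition/use patterns for initialization safety can be illustrated with a toy checker (ours; AutoCert works on generated C code and formal logical annotations, not on this simplified statement representation):

```python
def check_init_safety(stmts):
    """Toy definition/use analysis for an initialization-safety policy:
    flag any variable read before it is written.

    Each statement is (op, target, sources); an "assign" is a definition
    of target, and every name in sources is a use.
    """
    defined = set()
    violations = []
    for lineno, (op, target, sources) in enumerate(stmts, 1):
        for v in sources:
            if v not in defined:
                violations.append((lineno, v))   # use before definition
        if op == "assign":
            defined.add(target)
    return violations

prog = [
    ("assign", "x", []),       # x := 0
    ("assign", "y", ["x"]),    # y := x  (ok: x is defined)
    ("assign", "z", ["w"]),    # z := w  (violation: w never initialized)
]
```

For array-bounds safety the same skeleton would treat array declarations as definitions and indexing expressions as uses, matching the pattern-parameterized design described above.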
2012-01-01
Background Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660
Retrofitting the AutoBayes Program Synthesis System with Concrete Syntax
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Visser, Eelco
2004-01-01
AutoBayes is a fully automatic, schema-based program synthesis system for statistical data analysis applications. Its core component is a schema library, i.e., a collection of generic code templates with associated applicability constraints which are instantiated in a problem-specific way during synthesis. Currently, AutoBayes is implemented in Prolog; the schemas thus use abstract syntax (i.e., Prolog terms) to formulate the templates. However, the conceptual distance between this abstract representation and the concrete syntax of the generated programs makes the schemas hard to create and maintain. In this paper we describe how AutoBayes is retrofitted with concrete syntax. We show how it is integrated into Prolog and describe how the seamless interaction of concrete syntax fragments with AutoBayes's remaining legacy meta-programming kernel based on abstract syntax is achieved. We apply the approach to gradually migrate individual schemas without forcing a disruptive migration of the entire system to a different meta-programming language. First experiences show that a smooth migration can be achieved. Moreover, it can result in a considerable reduction of the code size and improved readability of the code. In particular, abstracting out fresh-variable generation and second-order term construction allows the formulation of larger continuous fragments.
Edmands, William M B; Barupal, Dinesh K; Scalbert, Augustin
2015-03-01
MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker-MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC-MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. © The Author 2014. Published by Oxford University Press.
Natural Language Interface for Safety Certification of Safety-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2011-01-01
Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.
NASA Astrophysics Data System (ADS)
Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.
2017-11-01
The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2] but focused on automating the calibration of measurements using spherical probes, is outlined. The new code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, without the need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this is a method that can be applied when using viscoelastic fluids if the trap stiffness is previously estimated [4]. The new code can be executed in MatLab and in GNU Octave. Program Files doi:http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3 Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0 Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599 Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem: The original code uses a MatLab-provided user interface, which is not available in GNU Octave and cannot be used outside proprietary software such as MatLab. Moreover, the calibration process for spherical probes requires an automatic method when calibrating large amounts of data for microrheology. Solution method: The new code can be executed in the latest version of MatLab and in GNU Octave, a free and open-source alternative to MatLab.
This code implements an automatic calibration process which requires only writing the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness is previously estimated. Reasons for the new version: This version extends the functionality of PFMCal to the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works in different operating systems, and is compatible with GNU Octave. Summary of revisions: The original MatLab program in the previous version, which is executed by PFMCal.m, is not changed. Here, we have added two additional main scripts named PFMCal_auto.m and PFMCal_histo.m, which implement automatic calculation of the calibration process and calibration through Boltzmann statistics, respectively. The process of calibration using this code for spherical beads is described in the README.pdf file provided in the new code submission. Here, we obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the value of the particle's radius, a, is known beforehand. For comparison, we include in the extended code the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. Moreover, with a prior estimate of the trap stiffness, along with the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors, β, according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2.
This method has been demonstrated to be applicable to the calibration of optical tweezers when using non-Newtonian viscoelastic polymeric liquids [4]. An example of the results using this calibration process is summarized in Table 1. Using the data provided in the new code submission, for water and acetone, we calculate all the calibration factors using the original PFMCal.m and the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows how we obtain the expected viscosity of the two fluids at this temperature and how the different methods provide good agreement between trap stiffnesses and calibration factors. Additional comments including restrictions and unusual features: The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification using MatLab or GNU Octave. The code has been tested in Linux and Windows operating systems.
A Generic Software Safety Document Generator
NASA Technical Reports Server (NTRS)
Denney, Ewen; Venkatesan, Ram Prasad
2004-01-01
Formal certification is based on the idea that a mathematical proof of some property of a piece of software can be regarded as a certificate of correctness which, in principle, can be subjected to external scrutiny. In practice, however, proofs themselves are unlikely to be of much interest to engineers. Nevertheless, it is possible to use the information obtained from a mathematical analysis of software to produce a detailed textual justification of correctness. In this paper, we describe an approach to generating textual explanations from automatically generated proofs of program safety, where the proofs are of compliance with an explicit safety policy that can be varied. Key to this is tracing proof obligations back to the program, and we describe a tool which implements this to certify code auto-generated by AutoBayes and AutoFilter, program synthesis systems under development at the NASA Ames Research Center. Our approach is a step towards combining formal certification with traditional certification methods.
Fast Computation of the Two-Point Correlation Function in the Age of Big Data
NASA Astrophysics Data System (ADS)
Pellegrino, Andrew; Timlin, John
2018-01-01
We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
ERIC Educational Resources Information Center
Hevel, David; Tannehill, Dana, Ed.
This module is the eighth of nine modules in the competency-based Missouri Auto Mechanics Curriculum Guide. Six units cover: introduction to automatic transmission/transaxle; hydraulic control systems; transmission/transaxle diagnosis; automatic transmission/transaxle maintenance and adjustment; in-vehicle transmission repair; and off-car…
Compiler-Driven Performance Optimization and Tuning for Multicore Architectures
2015-04-10
develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of... automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool
Auto identification technology and its impact on patient safety in the Operating Room of the Future.
Egan, Marie T; Sandberg, Warren S
2007-03-01
Automatic identification technologies, such as bar coding and radio frequency identification, are ubiquitous in everyday life but virtually nonexistent in the operating room. User expectations, based on everyday experience with automatic identification technologies, have generated much anticipation that these systems will improve readiness, workflow, and safety in the operating room, with minimal training requirements. We report, in narrative form, a multi-year experience with various automatic identification technologies in the Operating Room of the Future Project at Massachusetts General Hospital. In each case, the additional human labor required to make these 'labor-saving' technologies function in the medical environment has proved to be their undoing. We conclude that while automatic identification technologies show promise, significant barriers to realizing their potential still exist. Nevertheless, overcoming these obstacles is necessary if the vision of an operating room of the future in which all processes are monitored, controlled, and optimized is to be achieved.
NASA Astrophysics Data System (ADS)
Jiang, Jingtao; Sui, Rendong; Shi, Yan; Li, Furong; Hu, Caiqi
In this paper, 3-D models of combined fixture elements are designed, classified by function, and stored in the computer as a supporting elements library, jointing elements library, basic elements library, localization elements library, clamping elements library, adjusting elements library, etc. Automatic assembly of a 3-D combined checking fixture for an auto-body part is then presented based on modularization theory. In the virtual auto-body assembly space, locating-constraint mapping and assembly rule-based reasoning techniques are used to calculate the positions of the modular elements according to the localization points and clamp points of the auto-body part. The auto-body part model is transformed from its own coordinate system into the virtual assembly space by a homogeneous transformation matrix. Automatic assembly of the different functional fixture elements and the auto-body part is implemented with API functions based on secondary development of UG. Practice has shown that the method presented in this paper is feasible and highly efficient.
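The coordinate mapping by a homogeneous transformation matrix can be illustrated briefly (a generic sketch, not the UG API code; the rotation angle and translation are illustrative):

```python
import math

def homogeneous_transform(rz, tx, ty, tz):
    """4x4 homogeneous matrix: rotation by rz about z, then translation
    by (tx, ty, tz) -- the standard form for mapping a part model from
    its own coordinate system into an assembly space."""
    c, s = math.cos(rz), math.sin(rz)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def transform_point(T, p):
    """Apply T to a 3-D point using homogeneous coordinates (w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))
```

A localization point expressed in part coordinates is mapped into the virtual assembly space by one matrix product, which is why chained placements of fixture elements compose by simple matrix multiplication.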
AutoFACT: An Automatic Functional Annotation and Classification Tool
Koski, Liisa B; Gray, Michael W; Lang, B Franz; Burger, Gertraud
2005-01-01
Background Assignment of function to new molecular sequence data is an essential step in genomics projects. The usual process involves similarity searches of a given sequence against one or more databases, an arduous process for large datasets. Results We present AutoFACT, a fully automated and customizable annotation tool that assigns biologically informative functions to a sequence. Key features of this tool are that it (1) analyzes nucleotide and protein sequence data; (2) determines the most informative functional description by combining multiple BLAST reports from several user-selected databases; (3) assigns putative metabolic pathways, functional classes, enzyme classes, GeneOntology terms and locus names; and (4) generates output in HTML, text and GFF formats for the user's convenience. We have compared AutoFACT to four well-established annotation pipelines. The error rate of functional annotation is estimated to be only 1–2%. Comparison of AutoFACT to the traditional top-BLAST-hit annotation method shows that our procedure increases the number of functionally informative annotations by approximately 50%. Conclusion AutoFACT will serve as a useful annotation tool for smaller sequencing groups lacking dedicated bioinformatics staff. It is implemented in PERL and runs on LINUX/UNIX platforms. AutoFACT is available at . PMID:15960857
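The advantage over top-BLAST-hit annotation can be caricatured with a toy selection rule (ours; the real tool weighs e-values, database hierarchies, and multiple reports, and the hit data below are invented):

```python
# Descriptions that carry no functional information (illustrative list)
UNINFORMATIVE = {"hypothetical protein", "unknown", "unnamed protein product"}

def best_annotation(hits):
    """Choose the highest-scoring *informative* description across the
    BLAST reports from several databases; fall back to the raw top hit
    only if every description is uninformative."""
    informative = [h for h in hits
                   if h["description"].lower() not in UNINFORMATIVE]
    pool = informative or hits
    return max(pool, key=lambda h: h["bitscore"])["description"]

hits = [
    {"db": "nr",     "description": "hypothetical protein",  "bitscore": 900},
    {"db": "uniref", "description": "cytochrome c oxidase",  "bitscore": 450},
]
```

A plain top-hit rule would report "hypothetical protein" here; preferring informative descriptions, even at a lower bitscore, is what raises the fraction of functionally useful annotations.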
Exogean: a framework for annotating protein-coding genes in eukaryotic genomic DNA
Djebali, Sarah; Delaplace, Franck; Crollius, Hugues Roest
2006-01-01
Background Accurate and automatic gene identification in eukaryotic genomic DNA is more than ever of crucial importance to efficiently exploit the large volume of assembled genome sequences available to the community. Automatic methods have always been considered less reliable than human expertise. This is illustrated in the EGASP project, where reference annotations against which all automatic methods are measured are generated by human annotators and experimentally verified. We hypothesized that replicating the accuracy of human annotators in an automatic method could be achieved by formalizing the rules and decisions that they use, in a mathematical formalism. Results We have developed Exogean, a flexible framework based on directed acyclic colored multigraphs (DACMs) that can represent biological objects (for example, mRNA, ESTs, protein alignments, exons) and relationships between them. Graphs are analyzed to process the information according to rules that replicate those used by human annotators. Simple individual starting objects given as input to Exogean are thus combined and synthesized into complex objects such as protein coding transcripts. Conclusion We show here, in the context of the EGASP project, that Exogean is currently the method that best reproduces protein coding gene annotations from human experts, in terms of identifying at least one exact coding sequence per gene. We discuss current limitations of the method and several avenues for improvement. PMID:16925841
AUTO_DERIV: Tool for automatic differentiation of a Fortran code
NASA Astrophysics Data System (ADS)
Stamatiadis, S.; Farantos, S. C.
2010-10-01
AUTO_DERIV is a module comprising a set of FORTRAN 95 procedures which can be used to calculate the first and second partial derivatives (mixed or not) of any continuous function with many independent variables. The mathematical function should be expressed as one or more FORTRAN 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the FORTRAN 95 language is extensively used to define the differentiation rules. Proper (standard complying) handling of floating-point exceptions is provided by using the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in FORTRAN 2003). New version program summaryProgram title: AUTO_DERIV Catalogue identifier: ADLS_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2963 No. of bytes in distributed program, including test data, etc.: 10 314 Distribution format: tar.gz Programming language: Fortran 95 + (optionally) TR-15580 (Floating-point exception handling) Computer: all platforms with a Fortran 95 compiler Operating system: Linux, Windows, MacOS Classification: 4.12, 6.2 Catalogue identifier of previous version: ADLS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343 Does the new version supersede the previous version?: Yes Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluate them by a computer, automatically and to machine precision, is via user-defined types and operator overloading. AUTO_DERIV is a Fortran 95 implementation of them, designed to evaluate the first and second derivatives of a function of many variables.
Solution method: The mathematical rules for differentiating sums, products, quotients, and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new variable type is defined, and the function- and operator-overloading mechanism provided by the Fortran 95 language is used extensively to implement the differentiation rules. Reasons for new version: The new version supports Fortran 95, properly handles floating-point exceptions, and is faster due to internal reorganization. All discovered bugs are fixed. Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95, and a major internal reorganization of the code resulted in faster execution. The user interface described in the original paper is unchanged. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes; one important bug was found and fixed: the code did not correctly handle the overloading of ** in a**λ when a = 0. Division by zero and discontinuity of the function at the requested point are indicated by the standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively). If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions; it is up to the compiler (probably through certain flags) to detect them. Restrictions: None imposed by the program. Certain limitations may appear, mostly due to the specific implementation chosen in the user code; they can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1].
The common restrictions of available memory and the capabilities of the compiler are the same as in the original version. Additional comments: The program has been tested with the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95. Running time: The typical running time depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than evaluating hand-coded analytical expressions for the function value and derivatives (when these are available). References: S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
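The operator-overloading technique that AUTO_DERIV implements in Fortran 95 can be sketched in Python for a single independent variable: a user-defined type carries the value together with the first and second derivatives, and overloaded arithmetic propagates them by the product rule and chain rule. This is an illustrative minimal sketch, not the package's actual design (which handles many variables and all elementary functions).

```python
class D2:
    """Carries a value plus first and second derivatives w.r.t. one variable."""

    def __init__(self, val, d1=0.0, d2=0.0):
        self.val, self.d1, self.d2 = val, d1, d2

    @staticmethod
    def var(x):
        # Seed an independent variable: dx/dx = 1, d2x/dx2 = 0.
        return D2(x, 1.0, 0.0)

    @staticmethod
    def _lift(o):
        # Treat plain numbers as constants with zero derivatives.
        return o if isinstance(o, D2) else D2(o)

    def __add__(self, o):
        o = D2._lift(o)
        return D2(self.val + o.val, self.d1 + o.d1, self.d2 + o.d2)
    __radd__ = __add__

    def __mul__(self, o):
        # Product rule for the first derivative, Leibniz rule for the second.
        o = D2._lift(o)
        return D2(self.val * o.val,
                  self.d1 * o.val + self.val * o.d1,
                  self.d2 * o.val + 2.0 * self.d1 * o.d1 + self.val * o.d2)
    __rmul__ = __mul__


x = D2.var(2.0)
f = x * x + 3 * x            # f(x) = x^2 + 3x, evaluated at x = 2
print(f.val, f.d1, f.d2)     # 10.0 7.0 2.0
```

The user's function is written once, in ordinary arithmetic; swapping the argument type from a plain number to the derivative-carrying type is what triggers the differentiation, exactly as with AUTO_DERIV's Fortran derived type.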
NASA Astrophysics Data System (ADS)
Chairunnisa, S.; Setiawan, N.; Irkham; Ekawati, K.; Anwar, A.; Fitri, A. DP
2018-02-01
A Fish Aggregation Device (FAD) is a fishing tool that serves to collect fish at one place to facilitate fishermen in the process of fishing. The use of light has also been proven to help the process of fishing at night. AUTO-LION (Automatic Lighting Rumpon) is a FAD innovation equipped with fish-eating sound and solar-powered lights that are activated automatically when it is dark or nighttime. The purpose of this study was to determine the effect of AUTO-LION use on fishermen's catch. The research method used is experimental fishing. The research was conducted in May 2017 on a stationary lift net in Semarang waters. The results showed a catch of 10.55 kg without the use of AUTO-LION, 15.05 kg with the use of FADs, 19.08 kg with FADs with sound, 27.04 kg with FADs with light, and 40.01 kg with AUTO-LION. Based on these results, the use of AUTO-LION can increase fishermen's catch, especially when the light is activated.
Auto-Regulatory RNA Editing Fine-Tunes mRNA Re-Coding and Complex Behaviour in Drosophila
Savva, Yiannis A.; Jepson, James E.C; Sahin, Asli; Sugden, Arthur U.; Dorsky, Jacquelyn S.; Alpert, Lauren; Lawrence, Charles; Reenan, Robert A.
2014-01-01
Auto-regulatory feedback loops are a common molecular strategy used to optimize protein function. In Drosophila, many mRNAs involved in neurotransmission are re-coded at the RNA level by the RNA editing enzyme dADAR, leading to the incorporation of amino acids that are not directly encoded by the genome. dADAR also re-codes its own transcript, but the consequences of this auto-regulation in vivo are unclear. Here we show that hard-wiring or abolishing endogenous dADAR auto-regulation dramatically remodels the landscape of re-coding events in a site-specific manner. These molecular phenotypes correlate with altered localization of dADAR within the nuclear compartment. Furthermore, auto-editing exhibits sexually dimorphic patterns of spatial regulation and can be modified by abiotic environmental factors. Finally, we demonstrate that modifying dAdar auto-editing affects adaptive complex behaviors. Our results reveal the in vivo relevance of auto-regulatory control over post-transcriptional mRNA re-coding events in fine-tuning brain function and organismal behavior. PMID:22531175
Automatic Scaling of Digisonde Ionograms Test and Evaluation Report.
1982-09-01
[Figure-list residue from the report. Recoverable entries: |Manual MUF − Auto MUF|/Manual MUF for April 1980, July 1980, and September 1980, Goose Bay, Labrador; |Manual M(3000) − Auto M(3000)| for January 1980, Goose Bay, Labrador.]
46 CFR 97.16-1 - Use of auto pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Use of auto pilot. 97.16-1 Section 97.16-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS OPERATIONS Auto Pilot § 97.16-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 97.16-1 - Use of auto pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Use of auto pilot. 97.16-1 Section 97.16-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS OPERATIONS Auto Pilot § 97.16-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 78.19-1 - Use of auto pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 3 2012-10-01 2012-10-01 false Use of auto pilot. 78.19-1 Section 78.19-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS OPERATIONS Auto Pilot § 78.19-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used in— (a...
46 CFR 97.16-1 - Use of auto pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Use of auto pilot. 97.16-1 Section 97.16-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS OPERATIONS Auto Pilot § 97.16-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 78.19-1 - Use of auto pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 3 2014-10-01 2014-10-01 false Use of auto pilot. 78.19-1 Section 78.19-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS OPERATIONS Auto Pilot § 78.19-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used in— (a...
46 CFR 78.19-1 - Use of auto pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 3 2013-10-01 2013-10-01 false Use of auto pilot. 78.19-1 Section 78.19-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS OPERATIONS Auto Pilot § 78.19-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used in— (a...
46 CFR 97.16-1 - Use of auto pilot.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Use of auto pilot. 97.16-1 Section 97.16-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS OPERATIONS Auto Pilot § 97.16-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 78.19-1 - Use of auto pilot.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 3 2010-10-01 2010-10-01 false Use of auto pilot. 78.19-1 Section 78.19-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS OPERATIONS Auto Pilot § 78.19-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used in— (a...
46 CFR 78.19-1 - Use of auto pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 3 2011-10-01 2011-10-01 false Use of auto pilot. 78.19-1 Section 78.19-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS OPERATIONS Auto Pilot § 78.19-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used in— (a...
46 CFR 97.16-1 - Use of auto pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Use of auto pilot. 97.16-1 Section 97.16-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS OPERATIONS Auto Pilot § 97.16-1 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.
Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques
2008-09-08
Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
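The abstract's "automating load balancing" across cluster nodes can be illustrated with a small scheduling sketch. This is a generic longest-processing-time greedy assignment under assumed per-ligand cost estimates, not DOVIS's actual parallelization scheme; the job names and costs are hypothetical.

```python
import heapq

def balance(jobs, n_workers):
    """Greedy longest-processing-time scheduling: assign each docking job
    (name -> estimated cost) to the currently least-loaded worker."""
    heap = [(0.0, w, []) for w in range(n_workers)]   # (load, worker id, jobs)
    heapq.heapify(heap)
    for name, cost in sorted(jobs.items(), key=lambda kv: -kv[1]):
        load, w, assigned = heapq.heappop(heap)       # pop least-loaded worker
        assigned.append(name)
        heapq.heappush(heap, (load + cost, w, assigned))
    return {w: (load, assigned) for load, w, assigned in heap}


# Hypothetical per-ligand cost estimates (e.g. from rotatable-bond counts).
jobs = {"lig_a": 5, "lig_b": 4, "lig_c": 3, "lig_d": 3, "lig_e": 2, "lig_f": 1}
plan = balance(jobs, n_workers=2)
```

With these toy costs both workers end up with a load of 9, so no node sits idle while another still has a long queue; a production screen of millions of compounds would instead hand out work dynamically, but the balancing goal is the same.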
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stukel, Laura; Hoen, Ben; Adomatis, Sandra
Capturing the Sun: A Roadmap for Navigating Data-Access Challenges and Auto-Populating Solar Home Sales Listings supports a vision of solar photovoltaic (PV) advocates and real estate advocates evolving together to make information about solar homes more accessible to home buyers and sellers and to simplify the process when these homes are resold. The Roadmap is based on a concept in the real estate industry known as automatic population of fields. Auto-population (also called auto-pop in the industry) is the technology that allows data aggregated by an outside industry to be matched automatically with home sale listings in a multiple listing service (MLS).
Ohlsén, L; Jungner, I; Peterson, H E
2014-05-22
This paper presents the history of the data system development steps (1964-1986) for the clinical analyzers AutoChemist® and its successor AutoChemist PRISMA® (PRogrammable Individually Selective Modular Analyzer). The paper also partly recounts the development history of the Digital Equipment PDP 8 minicomputer. The first PDP 8 had 4 core memory boards of 1 K each and was as large as a typical oven baking sheet; about 10 years later, the PDP 8 was a one-chip microcomputer with a 32 K memory chip. The fast development of the PDP 8 came to have a strong influence on the development of the data system for AutoChemist. Five major releases of the software were made during this period (1-5 MIACH). The most important aims were not only to calculate the results, but also to monitor their quality, automatically manage the orders, store the results in digital form for later statistical analysis, and distribute the results to the physician in charge of the patient using the same computer as the analyzer. Another result of the data system was the ability to customize AutoChemist to handle sample identification using bar codes and to adapt the presentation of results to different types of laboratories. Digital Equipment launched the PDP 8 just as a new minicomputer was desperately needed; no other alternatives were known at the time. This was to become a key success factor for AutoChemist. That an AutoChemist with such a high capacity required a computer for data collection was obvious already in the early 1960s. That computer development would be so rapid, and that one would be able to accomplish so much with a data system, could hardly have been foreseen at the time. In total, 75 systems were delivered worldwide: 31 AutoChemist and 44 PRISMA. The last PRISMA was delivered in 1987 to the Veterans Hospital, Houston, TX, USA.
Several years of experience with automatic DI-flux systems: theory, validation and results
NASA Astrophysics Data System (ADS)
Poncelet, Antoine; Gonsette, Alexandre; Rasson, Jean
2017-09-01
The previous release of our automatic DI-flux instrument, called AutoDIF mk2.2, has now been running continuously since June 2012 in the absolute house of the Dourbes magnetic observatory, performing a measurement every 30 min. A second one has been operating in the tunnel of the Conrad observatory (Austria) since December 2013. After this proof of concept, we improved the AutoDIF to version mk2.3, which was presented at the 16th IAGA workshop in Hyderabad. As of publication, we have successfully deployed six AutoDIFs in various environments: two in Dourbes (DOU), one in Manhay (MAB), one in Conrad (CON), one in Daejeon (South Korea), and one used for tests. The latter was installed for 10 months in Chambon-la-Forêt (CLF) and, since 2016, has been in Kakioka (KAK). In this paper, we compare the automatic measurements to the manual ones and discuss the advantages and disadvantages of automatic measurements.
Pontes, E R; Matos, L C; da Silva, E A; Xavier, L S; Diaz, B L; Small, I A; Reis, E M; Verjovski-Almeida, S; Barcinski, M A; Gimba, E R P
2006-10-01
Here we evaluate auto-antibody response against two potential antigenic determinants of genes highly expressed in low Gleason Score prostate cancer (PC) tumor samples, namely FLJ23438 and VAMP3. RT-PCR assays were used to analyze mRNA expression profiles of FLJ23438 and VAMP3 transcripts. The auto-antibody response against FLJ23438 and VAMP3 recombinant proteins was tested by immunoblot assays using PC, benign prostate hyperplasia (BPH), healthy donors (HD), and other human cancers plasma samples. Our data showed that 37% (10/27) and 7.4% (2/27) of PC plasma samples presented auto-antibodies against FLJ23438 and VAMP3, respectively. Only 8.3% (1/12) of BPH plasma samples were reactive for both auto-antibodies, while none (0/12) of HD plasma samples tested were reactive. The prevalence of 37% of positive PC plasma samples for anti-FLJ23438 antibodies suggests that humoral immune response against this antigenic determinant could be a potential serum marker for this cancer. (c) 2006 Wiley-Liss, Inc.
OpenSQUID: A Flexible Open-Source Software Framework for the Control of SQUID Electronics
Jaeckel, Felix T.; Lafler, Randy J.; Boyd, S. T. P.
2013-02-06
Commercially available computer-controlled SQUID electronics are usually delivered with software providing a basic user interface for adjustment of SQUID tuning parameters, such as bias current, flux offset, and feedback loop settings. However, in a research context it would often be useful to be able to modify this code and/or to have full control over all these parameters from researcher-written software. In the case of the STAR Cryoelectronics PCI/PFL family of SQUID control electronics, the supplied software contains modules for automatic tuning and noise characterization, but does not provide an interface for user code. On the other hand, the Magnicon SQUIDViewer software package includes a public application programming interface (API), but lacks auto-tuning and noise characterization features. To overcome these and other limitations, we are developing an open-source framework for controlling SQUID electronics which should provide maximal interoperability with user software, a unified user interface for electronics from different manufacturers, and a flexible platform for the rapid development of customized SQUID auto-tuning and other advanced features. Finally, we have completed a first implementation for the STAR Cryoelectronics hardware and have made the source code for this ongoing project available to the research community on SourceForge (http://opensquid.sourceforge.net) under the GNU public license.
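A customized auto-tuning routine of the kind the framework is meant to host can be sketched as a simple parameter sweep: try candidate bias currents, score each by some readout figure of merit, and keep the best. The `measure` callback and the peak-finding criterion below are assumptions for illustration, not the OpenSQUID API.

```python
def auto_tune_bias(measure, biases):
    """Sweep candidate bias currents and keep the one giving the deepest
    flux modulation, as reported by `measure` (a stand-in for the SQUID
    readout; the real criterion and interface are assumptions here)."""
    best_bias, best_depth = None, float("-inf")
    for b in biases:
        depth = measure(b)
        if depth > best_depth:
            best_bias, best_depth = b, depth
    return best_bias, best_depth


# Toy readout model: modulation depth peaks near a bias of 17 uA.
def fake_measure(bias_uA):
    return 1.0 / (1.0 + (bias_uA - 17.0) ** 2)


biases = [5.0 + 0.5 * i for i in range(50)]   # candidate sweep, 5.0..29.5 uA
best, depth = auto_tune_bias(fake_measure, biases)
```

In a real controller the sweep would be refined iteratively (coarse scan, then a fine scan around the maximum), and separate sweeps would handle flux offset and feedback settings.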
46 CFR 185.360 - Use of auto pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 7 2012-10-01 2012-10-01 false Use of auto pilot. 185.360 Section 185.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS (UNDER 100 GROSS TONS) OPERATIONS Miscellaneous Operating Requirements § 185.360 Use of auto pilot. Whenever an automatic pilot is...
46 CFR 131.960 - Use of auto-pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Use of auto-pilot. 131.960 Section 131.960 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS OPERATIONS Miscellaneous § 131.960 Use of auto-pilot. When the automatic pilot is used in areas of high traffic density...
46 CFR 185.360 - Use of auto pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 7 2013-10-01 2013-10-01 false Use of auto pilot. 185.360 Section 185.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS (UNDER 100 GROSS TONS) OPERATIONS Miscellaneous Operating Requirements § 185.360 Use of auto pilot. Whenever an automatic pilot is...
46 CFR 131.960 - Use of auto-pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Use of auto-pilot. 131.960 Section 131.960 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS OPERATIONS Miscellaneous § 131.960 Use of auto-pilot. When the automatic pilot is used in areas of high traffic density...
46 CFR 131.960 - Use of auto-pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Use of auto-pilot. 131.960 Section 131.960 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS OPERATIONS Miscellaneous § 131.960 Use of auto-pilot. When the automatic pilot is used in areas of high traffic density...
46 CFR 185.360 - Use of auto pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 7 2014-10-01 2014-10-01 false Use of auto pilot. 185.360 Section 185.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS (UNDER 100 GROSS TONS) OPERATIONS Miscellaneous Operating Requirements § 185.360 Use of auto pilot. Whenever an automatic pilot is...
46 CFR 131.960 - Use of auto-pilot.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Use of auto-pilot. 131.960 Section 131.960 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS OPERATIONS Miscellaneous § 131.960 Use of auto-pilot. When the automatic pilot is used in areas of high traffic density...
46 CFR 185.360 - Use of auto pilot.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false Use of auto pilot. 185.360 Section 185.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS (UNDER 100 GROSS TONS) OPERATIONS Miscellaneous Operating Requirements § 185.360 Use of auto pilot. Whenever an automatic pilot is...
46 CFR 185.360 - Use of auto pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false Use of auto pilot. 185.360 Section 185.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS (UNDER 100 GROSS TONS) OPERATIONS Miscellaneous Operating Requirements § 185.360 Use of auto pilot. Whenever an automatic pilot is...
46 CFR 131.960 - Use of auto-pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Use of auto-pilot. 131.960 Section 131.960 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS OPERATIONS Miscellaneous § 131.960 Use of auto-pilot. When the automatic pilot is used in areas of high traffic density...
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) for use on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for hardware-in-the-loop testing of satellite components in a convenient manner, with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, the simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from the simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications: the execution order of these models can change as a result. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus the process can be considered a success, since all the output requirements are met. Based on these results, the generated C code can be used effectively on any desired platform as long as it meets the specific memory requirements established in the Simulink model.
46 CFR 109.585 - Use of auto pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Use of auto pilot. 109.585 Section 109.585 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS OPERATIONS Miscellaneous § 109.585 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 109.585 - Use of auto pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Use of auto pilot. 109.585 Section 109.585 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS OPERATIONS Miscellaneous § 109.585 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 109.585 - Use of auto pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Use of auto pilot. 109.585 Section 109.585 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS OPERATIONS Miscellaneous § 109.585 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 109.585 - Use of auto pilot.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Use of auto pilot. 109.585 Section 109.585 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS OPERATIONS Miscellaneous § 109.585 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 109.585 - Use of auto pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Use of auto pilot. 109.585 Section 109.585 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) A-MOBILE OFFSHORE DRILLING UNITS OPERATIONS Miscellaneous § 109.585 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used...
46 CFR 122.360 - Use of auto pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Use of auto pilot. 122.360 Section 122.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS CARRYING MORE THAN 150... Requirements § 122.360 Use of auto pilot. Whenever an automatic pilot is used the master shall ensure that: (a...
46 CFR 122.360 - Use of auto pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Use of auto pilot. 122.360 Section 122.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS CARRYING MORE THAN 150... Requirements § 122.360 Use of auto pilot. Whenever an automatic pilot is used the master shall ensure that: (a...
46 CFR 122.360 - Use of auto pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Use of auto pilot. 122.360 Section 122.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS CARRYING MORE THAN 150... Requirements § 122.360 Use of auto pilot. Whenever an automatic pilot is used the master shall ensure that: (a...
46 CFR 122.360 - Use of auto pilot.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Use of auto pilot. 122.360 Section 122.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS CARRYING MORE THAN 150... Requirements § 122.360 Use of auto pilot. Whenever an automatic pilot is used the master shall ensure that: (a...
Some Thermophysical Properties of Blood Components and Coolants for Frozen Blood Shipping Containers
1989-09-01
[Code-listing residue (BASIC) from the report. Recoverable information: a subroutine AutoControl provides automatic control to set the temperature, and a subroutine Autodisplay reads the thermocouples and updates the screen at a constant interval via ON TIMER(ReportTime) GOSUB Autodisplay.]
46 CFR 122.360 - Use of auto pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Use of auto pilot. 122.360 Section 122.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS CARRYING MORE THAN 150... Requirements § 122.360 Use of auto pilot. Whenever an automatic pilot is used the master shall ensure that: (a...
Reflector automatic acquisition and pointing based on auto-collimation theodolite.
Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu
2018-01-01
An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.
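The pointing mode described above, which drives the laser spot to the CCD zero position, can be sketched as a closed-loop proportional controller: read the spot offset, command a correction, repeat until within tolerance. The gain, tolerance, and one-to-one motor-to-CCD mapping below are illustrative assumptions, not the ACT's actual control law.

```python
def point_to_zero(read_spot, move, gain=0.5, tol=1e-3, max_iter=100):
    """Closed-loop pointing: read the spot position on the CCD and command
    a proportional correction until the spot is within `tol` of (0, 0)."""
    for _ in range(max_iter):
        x, y = read_spot()
        if abs(x) <= tol and abs(y) <= tol:
            return True                      # spot centered on CCD zero
        move(-gain * x, -gain * y)           # proportional correction step
    return False


# Toy plant: motor commands map one-to-one onto CCD spot displacement.
spot = [0.8, -0.4]

def read_spot():
    return tuple(spot)

def move(dx, dy):
    spot[0] += dx
    spot[1] += dy

converged = point_to_zero(read_spot, move)
```

With a gain below 1 the offset shrinks geometrically each iteration, which is why the loop converges; the instrument's wide-range acquisition mode would run first to bring the spot onto the CCD at all.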
NASA Astrophysics Data System (ADS)
Krey, Mike; Schlatter, Ueli
The tasks and objectives of automatic identification (Auto-ID) are to provide information on goods and products. It has been established for years in the areas of logistics and trading and can no longer be ignored by the German healthcare sector. Some German hospitals have already discovered the capabilities of Auto-ID. Improvements in quality and safety and reductions in risk, cost, and time are areas where gains are achievable. Privacy protection, legal restraints, and the personal rights of patients and staff members are just a few of the aspects that make the healthcare sector a sensitive field for the implementation of Auto-ID. Auto-ID in this context comprises the different technologies, methods, and products for the registration, provision, and storage of relevant data. With the help of a quantifiable, science-based evaluation, an answer is sought as to which Auto-ID technology is most suitable for implementation in healthcare.
Generating Code Review Documentation for Auto-Generated Mission-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2009-01-01
Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the autogenerated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.
The performance of two automatic servo-ventilation devices in the treatment of central sleep apnea.
Javaheri, Shahrokh; Goetting, Mark G; Khayat, Rami; Wylie, Paul E; Goodwin, James L; Parthasarathy, Sairam
2011-12-01
This study was conducted to evaluate the therapeutic performance of a new auto servo-ventilation device (Philips Respironics autoSV Advanced) for the treatment of complex central sleep apnea (CompSA). The features of autoSV Advanced include automatic expiratory pressure (EPAP) adjustment, an advanced algorithm for distinguishing open versus obstructed airway apnea, a modified automatic backup rate proportional to the subject's baseline breathing rate, and variable inspiratory support. Our primary aim was to compare the performance of the advanced servo-ventilator (BiPAP autoSV Advanced) with a conventional servo-ventilator (BiPAP autoSV) in treating central sleep apnea (CSA). This was a prospective, multicenter, randomized, controlled trial conducted in five sleep laboratories in the United States. Thirty-seven participants were included. All subjects had full-night polysomnography (PSG) followed by a second-night continuous positive airway pressure (CPAP) titration. All had a central apnea index ≥ 5 per hour of sleep on CPAP. Subjects were randomly assigned to 2 full-night PSGs while treated with either the previously marketed autoSV or the new autoSV Advanced device. The 2 randomized sleep studies were scored centrally by blinded scorers. Across the 4 nights (PSG, CPAP, autoSV, and autoSV Advanced), the mean ± 1 SD apnea-hypopnea indices were 53 ± 23, 35 ± 20, 10 ± 10, and 6 ± 6, respectively; indices for CSA were 16 ± 19, 19 ± 18, 3 ± 4, and 0.6 ± 1. AutoSV Advanced was more effective than the other modes in correcting sleep-related breathing disorders. BiPAP autoSV Advanced was more effective than conventional BiPAP autoSV in the treatment of sleep-disordered breathing in patients with CSA.
46 CFR 167.65-35 - Use of auto pilot.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false Use of auto pilot. 167.65-35 Section 167.65-35 Shipping... Special Operating Requirements § 167.65-35 Use of auto pilot. Except as provided in 33 CFR 164.15, when the automatic pilot is used in— (a) Areas of high traffic density; (b) Conditions of restricted...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y; Liao, Z; Jiang, W
Purpose: To evaluate the feasibility of using an automatic segmentation tool to delineate cardiac substructures from computed tomography (CT) images for cardiac toxicity analysis in non-small cell lung cancer (NSCLC) patients after radiotherapy. Methods: A multi-atlas segmentation tool developed in-house was used to delineate eleven cardiac substructures, including the whole heart, four heart chambers, and six great vessels, automatically from the averaged 4DCT planning images of 49 NSCLC patients. The automatically segmented contours were edited appropriately by two experienced radiation oncologists. The modified contours were compared with the auto-segmented contours using the Dice similarity coefficient (DSC) and mean surface distance (MSD) to evaluate how much modification was needed. In addition, the dose-volume histograms (DVH) of the modified contours were compared with those of the auto-segmented contours to evaluate the dosimetric difference between modified and auto-segmented contours. Results: Of the eleven structures, the averaged DSC values ranged from 0.73 ± 0.08 to 0.95 ± 0.04 and the averaged MSD values ranged from 1.3 ± 0.6 mm to 2.9 ± 5.1 mm for the 49 patients. Overall, the required modification was small. The pulmonary vein (PV) and the inferior vena cava required the most modification. The V30 (volume receiving 30 Gy or above) for the whole heart and the mean dose to the whole heart and four heart chambers did not show a statistically significant difference between modified and auto-segmented contours. The maximum dose to the great vessels did not show a statistically significant difference except for the PV. Conclusion: The automatic segmentation of the cardiac substructures did not require substantial modification. The dosimetric evaluation showed no statistically significant difference between auto-segmented and modified contours except for the PV, which suggests that auto-segmented contours are feasible for cardiac dose-response studies in clinical practice with minor modification of the PV.
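The Dice similarity coefficient used above to compare auto-segmented and physician-edited contours has a compact definition, 2|A∩B|/(|A|+|B|). A minimal sketch on toy binary masks (the masks and their sizes are invented for illustration, not taken from the study):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2*|A & B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy contours: an auto-segmented square and a lightly edited version.
auto = np.zeros((10, 10), bool)
auto[2:8, 2:8] = True          # 36 voxels
edited = np.zeros((10, 10), bool)
edited[3:8, 2:8] = True        # 30 voxels; top row trimmed by the editor

print(round(dice(auto, edited), 3))
```

Values near 1.0 indicate the edited contour barely changed, which is how "the required modification was small" is quantified.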
A New Design Method of Automotive Electronic Real-time Control System
NASA Astrophysics Data System (ADS)
Zuo, Wenying; Li, Yinguo; Wang, Fengjuan; Hou, Xiaobo
The structure and functionality of automotive electronic control systems are becoming more and more complex, and the traditional manual-programming development mode can no longer satisfy development needs. Therefore, to meet the demands for diversity and speed in real-time control system development, this paper proposes a new design method for automotive electronic control systems based on Simulink/RTW that combines model-based design with automatic code generation. First, the algorithms are designed and a control-system model is built in Matlab/Simulink. Embedded code is then generated automatically by RTW, and the automotive real-time control system is developed in an OSEK/VDX operating-system environment. The new development mode can significantly shorten the development cycle of automotive electronic control systems, improve the program's portability, reusability, and scalability, and has practical value for the development of real-time control systems.
Evaluation of auto incident recording system (AIRS).
DOT National Transportation Integrated Search
2005-05-01
The Auto Incident Recording System (AIRS) is a sound-actuated video recording system. It automatically records potential incidents when activated by sound (horns, clashing metal, squealing tires, etc.). The purpose is to detect patterns of crashes at...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-12
... and higher in the Automatic Execution mode of order interaction (``AutoEx'') \\3\\. In addition, the... Rule 11.13(b). \\4\\ Id. AutoEx Take Fee for Securities Priced One Dollar and Higher For orders in securities priced one dollar and above that take liquidity in AutoEx, the proposed rule change lowers the...
2014-01-01
Summary Objectives This paper presents the history of data system development (1964-1986) for the clinical analyzer AutoChemist® and its successor AutoChemist PRISMA® (PRogrammable Individually Selective Modular Analyzer). The paper also partly recounts the development of the PDP 8 minicomputer from Digital Equipment. The first PDP 8 had 4 core memory boards of 1 K each and was as large as a typical oven baking sheet; about 10 years later, the PDP 8 was a one-chip microcomputer with a 32 K memory chip. The rapid development of the PDP 8 came to have a strong influence on the development of the data system for AutoChemist. Five major releases of the software were made during this period (MIACH 1-5). Results The most important aims were not only to calculate the results, but also to monitor their quality, automatically manage the orders, store the results in digital form for later statistical analysis, and distribute the results to the physician in charge of the patient using the same computer as the analyzer. Another result of the data system was the ability to customize AutoChemist to handle sample identification using bar codes and to present results to different types of laboratories. Conclusions Digital Equipment launched the PDP 8 just as a new minicomputer was desperately needed; no other alternatives were known at the time. This was to become a key success factor for AutoChemist. That an analyzer with the capacity of AutoChemist required a computer for data collection was obvious already in the early 1960s. That computer development would be so rapid, and that one would be able to accomplish so much with a data system, could hardly have been suspected at the time. In total, 75 systems were delivered worldwide: 31 AutoChemist and 44 PRISMA. The last PRISMA was delivered in 1987 to the Veterans Hospital, Houston, TX, USA. PMID:24853032
2009-10-01
The F-16D Automatic Collision Avoidance Technology aircraft tests of the Automatic Ground Collision Avoidance System, or Auto-GCAS, included flights in areas of potentially hazardous terrain, including canyons and mountains.
Feedback circuit design of an auto-gating power supply for low-light-level image intensifier
NASA Astrophysics Data System (ADS)
Yang, Ye; Yan, Bo; Zhi, Qiang; Ni, Xiao-bing; Li, Jun-guo; Wang, Yu; Yao, Ze
2015-11-01
This paper introduces the basic principle of an auto-gating power supply that uses a hybrid automatic brightness control scheme. Based on an analysis of the special requirements that the image intensifier places on the auto-gating power supply, a feedback circuit for the supply is analyzed, and the cause of screen flicker after the auto-gating power supply is assembled with the image intensifier is identified. A feedback circuit is then designed that shortens the response time of the auto-gating power supply and reduces the slight screen flicker that the human eye can distinguish under high illumination.
Autoclass: An automatic classification system
NASA Technical Reports Server (NTRS)
Stutz, John; Cheeseman, Peter; Hanson, Robin
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
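AutoClass's core idea, searching for the most probable number of classes under a Bayesian criterion, can be sketched with a one-dimensional Gaussian mixture scored by BIC as a stand-in for the full marginal-likelihood machinery AutoClass actually uses. The EM fit, the synthetic data, and the BIC scoring below are illustrative assumptions, not AutoClass's algorithm:

```python
import numpy as np

def fit_gmm_1d(x, k, iters=100):
    """Plain EM for a 1-D Gaussian mixture; returns the final log-likelihood."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))     # deterministic init
    sig = np.full(k, max(x.std(), 0.5))
    w = np.full(k, 1.0 / k)

    def dens():
        return np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

    for _ in range(iters):
        r = w * dens()
        r /= r.sum(axis=1, keepdims=True)              # E-step: responsibilities
        n = r.sum(axis=0)
        w = n / len(x)                                 # M-step: weights
        mu = (r * x[:, None]).sum(axis=0) / n          # M-step: means
        sig = np.maximum(                              # M-step: stddevs (floored)
            np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n), 0.5)
    return np.log((w * dens()).sum(axis=1)).sum()

def most_probable_k(x, kmax=3):
    """Pick the number of classes by BIC, a rough proxy for the marginal likelihood."""
    n = len(x)
    bic = {k: -2 * fit_gmm_1d(x, k) + (3 * k - 1) * np.log(n)
           for k in range(1, kmax + 1)}
    return min(bic, key=bic.get)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(8.0, 1.0, 200)])
print(most_probable_k(data))   # two well-separated clusters
```

The extra-parameter penalty in BIC plays the role that the Bayesian Occam factor plays in AutoClass: more classes fit better, but only win if the gain outweighs the added complexity.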
Auto-Coding UML Statecharts for Flight Software
NASA Technical Reports Server (NTRS)
Benowitz, Edward G; Clark, Ken; Watney, Garth J.
2006-01-01
Statecharts have been used as a means to communicate behaviors in a precise manner between system engineers and software engineers. Hand-translating a statechart to code, as done on some previous space missions, introduces the possibility of errors in the transformation from chart to code. To improve auto-coding, we have developed a process that generates flight code from UML statecharts. Our process is being used for the flight software on the Space Interferometer Mission (SIM).
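The auto-coding idea, deriving code mechanically from the chart so that no hand-translation step can introduce errors, can be sketched with a toy table-driven generator. This is a hypothetical illustration in Python, not the SIM flight-code toolchain, and the states and events are invented:

```python
# Toy statechart auto-coder: the transition table is the single source of
# truth, and the executable step() function is generated from it.

TRANSITIONS = {  # (state, event) -> next state
    ("IDLE", "arm"): "ARMED",
    ("ARMED", "fire"): "FIRING",
    ("ARMED", "disarm"): "IDLE",
    ("FIRING", "done"): "IDLE",
}

def generate_source(table, initial):
    """Render the chart as Python source, as a code generator would."""
    lines = ["def step(state, event):"]
    for (s, e), nxt in table.items():
        lines.append(f"    if state == {s!r} and event == {e!r}: return {nxt!r}")
    lines.append("    return state  # undefined event: no transition")
    lines.append(f"INITIAL = {initial!r}")
    return "\n".join(lines)

ns = {}
exec(generate_source(TRANSITIONS, "IDLE"), ns)   # 'compile' the chart
state = ns["INITIAL"]
for ev in ["arm", "fire", "done"]:
    state = ns["step"](state, ev)
print(state)
```

Changing the chart and regenerating replaces hand edits, which is exactly the error class that hand translation introduced on earlier missions.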
46 CFR 35.20-45 - Use of Auto Pilot-T/ALL.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 1 2011-10-01 2011-10-01 false Use of Auto Pilot-T/ALL. 35.20-45 Section 35.20-45 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS OPERATIONS Navigation § 35.20-45 Use of Auto Pilot—T/ALL. Except as provided in 33 CFR 164.13, when the automatic pilot is used in: (a) Areas...
46 CFR 35.20-45 - Use of Auto Pilot-T/ALL.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 1 2014-10-01 2014-10-01 false Use of Auto Pilot-T/ALL. 35.20-45 Section 35.20-45 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS OPERATIONS Navigation § 35.20-45 Use of Auto Pilot—T/ALL. Except as provided in 33 CFR 164.13, when the automatic pilot is used in: (a) Areas...
46 CFR 35.20-45 - Use of Auto Pilot-T/ALL.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 1 2013-10-01 2013-10-01 false Use of Auto Pilot-T/ALL. 35.20-45 Section 35.20-45 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS OPERATIONS Navigation § 35.20-45 Use of Auto Pilot—T/ALL. Except as provided in 33 CFR 164.13, when the automatic pilot is used in: (a) Areas...
46 CFR 35.20-45 - Use of Auto Pilot-T/ALL.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 1 2012-10-01 2012-10-01 false Use of Auto Pilot-T/ALL. 35.20-45 Section 35.20-45 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS OPERATIONS Navigation § 35.20-45 Use of Auto Pilot—T/ALL. Except as provided in 33 CFR 164.13, when the automatic pilot is used in: (a) Areas...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-12
... and above that add liquidity in the Automatic Execution mode of order interaction (``AutoEx'').\\3\\ \\3... displayed orders in Tape A and C securities priced one dollar and above that add liquidity in AutoEx, the... that add liquidity in AutoEx if such ETP Holder's Liquidity Adding ADV is less than 20 basis points of...
46 CFR 35.20-45 - Use of Auto Pilot-T/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 1 2010-10-01 2010-10-01 false Use of Auto Pilot-T/ALL. 35.20-45 Section 35.20-45 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS OPERATIONS Navigation § 35.20-45 Use of Auto Pilot—T/ALL. Except as provided in 33 CFR 164.13, when the automatic pilot is used in: (a) Areas...
Wide tracking range, auto ranging, low jitter phase lock loop for swept and fixed frequency systems
Kerner, Thomas M.
2001-01-01
The present invention provides a wide tracking range phase locked loop (PLL) circuit that achieves minimal jitter in a recovered clock signal, regardless of the source of the jitter (i.e. whether it is in the source or the transmission media). The present invention PLL has automatic harmonic lockout detection circuitry via a novel lock and seek control logic in electrical communication with a programmable frequency discriminator and a code balance detector. (The frequency discriminator enables preset of a frequency window of upper and lower frequency limits to derive a programmable range within which signal acquisition is effected. The discriminator works in combination with the code balance detector circuit to minimize the sensitivity of the PLL circuit to random data in the data stream). In addition, the combination of a differential loop integrator with the lock and seek control logic obviates a code preamble and guarantees signal acquisition without harmonic lockup. An adaptive cable equalizer is desirably used in combination with the present invention PLL to recover encoded transmissions containing a clock and/or data. The equalizer automatically adapts to equalize short haul cable lengths of coaxial and twisted pair cables or wires and provides superior jitter performance itself. The combination of the equalizer with the present invention PLL is desirable in that such combination permits the use of short haul wires without significant jitter.
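The programmable frequency window described above, preset upper and lower limits within which signal acquisition is permitted, reduces behaviorally to a simple predicate. A software sketch only (the frequencies are illustrative, and the real invention is mixed-signal circuitry, not code):

```python
# Behavioral model of a programmable frequency discriminator: the seek
# logic is gated so lock can only be acquired inside the preset window,
# which is what rejects lock-up on harmonics.

def make_discriminator(lower_hz, upper_hz):
    """Return a predicate modelling the preset frequency window."""
    def in_window(freq_hz):
        return lower_hz <= freq_hz <= upper_hz
    return in_window

ok_to_lock = make_discriminator(lower_hz=95e6, upper_hz=105e6)
print(ok_to_lock(100e6), ok_to_lock(200e6))  # fundamental passes, 2nd harmonic rejected
```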
Validity of radiographic assessment of the knee joint space using automatic image analysis.
Komatsu, Daigo; Hasegawa, Yukiharu; Kojima, Toshihisa; Seki, Taisuke; Ikeuchi, Kazuma; Takegami, Yasuhiko; Amano, Takafumi; Higuchi, Yoshitoshi; Kasai, Takehiro; Ishiguro, Naoki
2016-09-01
The present study investigated whether there were differences between automatic and manual measurements of the minimum joint space width (mJSW) on knee radiographs. Knee radiographs of 324 participants in a systematic health screening were analyzed using the following three methods: manual measurement of film-based radiographs (Manual), manual measurement of digitized radiographs (Digital), and automatic measurement of digitized radiographs (Auto). The mean mJSWs on the medial and lateral sides of the knees were determined using each method, and measurement reliability was evaluated using intra-class correlation coefficients. Measurement errors were compared between normal knees and knees with radiographic osteoarthritis. All three methods demonstrated good reliability, although the reliability was slightly lower with the Manual method than with the other methods. On the medial and lateral sides of the knees, the mJSWs were the largest in the Manual method and the smallest in the Auto method. The measurement errors of each method were significantly larger for normal knees than for radiographic osteoarthritis knees. The mJSW measurements are more accurate and reliable with the Auto method than with the Manual or Digital method, especially for normal knees. Therefore, the Auto method is ideal for the assessment of the knee joint space.
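The intra-class correlation used above to assess reliability between the Manual, Digital, and Auto measurements can be computed directly from its two-way ANOVA decomposition. Below is a sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement) on invented mJSW values with a purely systematic offset between the two methods; the data are not the study's:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n subjects x k raters) matrix of measurements."""
    n, k = x.shape
    m = x.mean()
    row, col = x.mean(axis=1), x.mean(axis=0)
    ss_rows = k * ((row - m) ** 2).sum()
    ss_cols = n * ((col - m) ** 2).sum()
    ss_err = ((x - m) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                    # between-subjects mean square
    msc = ss_cols / (k - 1)                    # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))         # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

manual = np.array([3.0, 4.2, 2.5, 5.1, 3.8])   # invented mJSW values (mm)
auto = manual - 0.2                            # systematic 0.2 mm offset only
print(round(icc_2_1(np.column_stack([manual, auto])), 3))
```

Even a purely systematic offset lowers ICC(2,1) slightly below 1, since absolute agreement (not just consistency) is being scored.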
A dedicated on-line detecting system for auto air dryers
NASA Astrophysics Data System (ADS)
Shi, Chao-yu; Luo, Zai
2013-10-01
In accordance with the relevant automotive industry standards and the requirements of manufacturers, this dedicated on-line detecting system is designed to address the low automation efficiency and detection precision of domestic auto air dryer testing. Fast automatic detection is achieved by combining computer control, mechatronics, and pneumatic technology. The system can test the performance of the pressure-regulating valve and the sealability of the auto air dryer, with on-line analytical processing of test data as well as data saving and retrieval. Experimental analysis indicates that efficient and accurate detection of auto air dryer performance is realized, with test errors below 3%. Moreover, we carry out a Type A evaluation of the measurement uncertainty based on Bayesian theory, and the results show that the test uncertainties of all performance parameters are less than 0.5 kPa, which fully meets the requirements of the industrial operating site.
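A Type A evaluation of uncertainty is, at its baseline, the experimental standard deviation of the mean of repeated readings. A sketch with invented pressure readings (the Bayesian refinement used in the paper goes beyond this classical baseline):

```python
import math
import statistics

# Invented repeated pressure readings (kPa) for one performance parameter.
readings_kpa = [740.2, 740.5, 739.9, 740.1, 740.3, 740.0]

n = len(readings_kpa)
s = statistics.stdev(readings_kpa)   # sample standard deviation of readings
u_a = s / math.sqrt(n)               # Type A standard uncertainty of the mean
print(round(u_a, 3))
```

For these toy readings the Type A uncertainty comes out well under the 0.5 kPa bound the abstract reports for the real system.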
Pycellerator: an arrow-based reaction-like modelling language for biological simulations.
Shapiro, Bruce E; Mjolsness, Eric
2016-02-15
We introduce Pycellerator, a Python library for reading Cellerator arrow notation from standard text files, converting it to differential equations, generating stand-alone Python solvers, and optionally running and plotting the solutions. All of the original Cellerator arrows, which represent reactions ranging from mass action, Michaelis-Menten-Henri (MMH), and gene regulation (GRN) to Monod-Wyman-Changeux (MWC), user-defined reactions, and enzymatic expansions (KMech), were previously represented with the Mathematica extended character set. These are now typed as reaction-like commands in ASCII text files that are read by Pycellerator, which includes a Python command line interface (CLI), a Python application programming interface (API), and an IPython notebook interface. Cellerator reaction arrows are now input in text files. The arrows are parsed by Pycellerator and translated into differential equations in Python, and Python code is automatically generated to solve the system. Time courses are produced by executing the auto-generated Python code. Users have full freedom to modify the solver and utilize the complete set of standard Python tools. The new libraries are completely independent of the old Cellerator software and do not require Mathematica. All software is available (GPL) from the github repository at https://github.com/biomathman/pycellerator/releases. Details, including installation instructions and a glossary of acronyms and terms, are given in the Supplementary information.
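The arrow-to-ODE translation can be illustrated with a toy parser for mass-action reactions. This is a hedged sketch using a simplified ad-hoc syntax (`"A + B -> C, k1"`), not Pycellerator's actual grammar, and it emits symbolic right-hand sides rather than generating a runnable solver:

```python
# Toy mass-action arrow parser: each reaction contributes -k*reactants to
# every reactant's ODE and +k*reactants to every product's ODE.

from collections import defaultdict

def parse_reaction(text):
    """Parse 'A + B -> C, k' into (reactants, products, rate constant)."""
    lhs_rhs, rate = text.split(",")
    lhs, rhs = lhs_rhs.split("->")
    reactants = [s.strip() for s in lhs.split("+")]
    products = [s.strip() for s in rhs.split("+")]
    return reactants, products, rate.strip()

def odes(reactions):
    """Collect the symbolic right-hand side of dX/dt for each species."""
    terms = defaultdict(list)
    for rx in reactions:
        reactants, products, k = parse_reaction(rx)
        flux = "*".join([k] + reactants)      # mass-action rate law
        for s in reactants:
            terms[s].append(f"-{flux}")
        for s in products:
            terms[s].append(f"+{flux}")
    return {s: " ".join(t) for s, t in terms.items()}

system = odes(["A + B -> C, k1", "C -> A + B, k2"])
for species, rhs in sorted(system.items()):
    print(f"d{species}/dt = {rhs}")
```

A real translator would then emit these right-hand sides as a function handed to an ODE integrator; that code-generation step is the part Pycellerator automates.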
Sharfo, Abdul Wahab M; Breedveld, Sebastiaan; Voet, Peter W J; Heijkoop, Sabrina T; Mens, Jan-Willem M; Hoogeman, Mischa S; Heijmen, Ben J M
2016-01-01
To develop and validate fully automated generation of VMAT plan-libraries for plan-of-the-day adaptive radiotherapy in locally-advanced cervical cancer. Our framework for fully automated treatment plan generation (Erasmus-iCycle) was adapted to create dual-arc VMAT treatment plan libraries for cervical cancer patients. For each of 34 patients, automatically generated VMAT plans (autoVMAT) were compared to manually generated, clinically delivered 9-beam IMRT plans (CLINICAL), and to dual-arc VMAT plans generated manually by an expert planner (manVMAT). Furthermore, all plans were benchmarked against 20-beam equi-angular IMRT plans (autoIMRT). For all plans, a PTV coverage of 99.5% by at least 95% of the prescribed dose (46 Gy) had the highest planning priority, followed by minimization of V45Gy for small bowel (SB). Other OARs considered were bladder, rectum, and sigmoid. All plans had a highly similar PTV coverage, within the clinical constraints (above). After plan normalizations for exactly equal median PTV doses in corresponding plans, all evaluated OAR parameters in autoVMAT plans were on average lower than in the CLINICAL plans with an average reduction in SB V45Gy of 34.6% (p<0.001). For 41/44 autoVMAT plans, SB V45Gy was lower than for manVMAT (p<0.001, average reduction 30.3%), while SB V15Gy increased by 2.3% (p = 0.011). AutoIMRT reduced SB V45Gy by another 2.7% compared to autoVMAT, while also resulting in a 9.0% reduction in SB V15Gy (p<0.001), but with a prolonged delivery time. Differences between manVMAT and autoVMAT in bladder, rectal and sigmoid doses were ≤ 1%. Improvements in SB dose delivery with autoVMAT instead of manVMAT were higher for empty bladder PTVs compared to full bladder PTVs, due to differences in concavity of the PTVs. Quality of automatically generated VMAT plans was superior to manually generated plans. Automatic VMAT plan generation for cervical cancer has been implemented in our clinical routine. 
Due to the achieved workload reduction, extension of plan libraries has become feasible.
Zheng, Liyao; Hua, Ruimao
2018-06-01
Direct transformation of carbon-hydrogen (C-H) bonds has emerged as a trend in the construction of molecules from building blocks with little or no prefunctionalization, leading to high atom and step economy. The directing group (DG) strategy is widely used to achieve higher reactivity and selectivity, but additional steps are usually needed for installation and/or cleavage of DGs, limiting the step economy of the overall transformation. To meet this challenge, we proposed the concept of the automatic DG (DGauto), which is auto-installed and/or auto-cleavable. Multifunctional oxime and hydrazone DGauto groups were designed for C-H activation and alkyne annulation to furnish diverse nitrogen-containing heterocycles. Imidazole was employed as an intrinsic DG (DGin) to synthesize ring-fused and π-extended functional molecules. The alkyne group in the substrates can also serve as a DGin for ortho-C-H activation to afford carbocycles. In this account, we review our progress in this area and give a brief introduction to other related advances in C-H functionalization using DGauto or DGin strategies.
A QR code identification technology in package auto-sorting system
NASA Astrophysics Data System (ADS)
di, Yi-Juan; Shi, Jian-Ping; Mao, Guo-Yong
2017-07-01
Traditional manual sorting cannot keep pace with the development of Chinese logistics. To sort packages more effectively, a QR code recognition technique is proposed to identify the QR code labels on packages in a package auto-sorting system. Experimental results, compared with other algorithms in the literature, demonstrate that the proposed method is valid and that its performance is superior to that of the other algorithms.
Nigro, Carlos Alberto; González, Sergio; Arce, Anabella; Aragone, María Rosario; Nigro, Luciana
2015-05-01
Patients under treatment with continuous positive airway pressure (CPAP) may have residual sleep apnea (RSA). The main objective of our study was to evaluate a novel auto-CPAP for the diagnosis of RSA. All patients referred to the sleep laboratory to undergo CPAP polysomnography were evaluated. Patients treated with oxygen or noninvasive ventilation, and those with split-night polysomnography (PSG), PSG with artifacts, or total sleep time less than 180 min, were excluded. The PSG was manually analyzed before generating the automatic report from the auto-CPAP. PSG variables (respiratory disturbance index (RDI), obstructive apnea index, hypopnea index, and central apnea index) were compared with their counterparts from the auto-CPAP through Bland-Altman plots and the intraclass correlation coefficient. The diagnostic accuracy of autoscoring from the auto-CPAP using different cutoff points of RDI (≥ 5 and ≥ 10) was evaluated by the receiver operating characteristic (ROC) curve. The study included 114 patients (24 women; mean age and BMI, 59 years and 33 kg/m(2); median RDI and apnea/hypopnea index (AHI)-auto, 5 and 2, respectively). The average difference between the AHI-auto and the RDI was -3.5 ± 3.9. The intraclass correlation coefficients (ICC) for the total numbers of central apneas, obstructive apneas, and hypopneas between the PSG and the auto-CPAP were 0.69, 0.16, and 0.15, respectively. An AHI-auto > 2 (RDI ≥ 5) or > 4 (RDI ≥ 10) had an area under the ROC curve, sensitivity, specificity, and positive and negative likelihood ratios for the diagnosis of residual sleep apnea of 0.84/0.89, 84/81%, 82/91%, 4.5/9.5, and 0.22/0.2, respectively. The automatic analysis from the auto-CPAP (S9 Autoset) showed good diagnostic accuracy in identifying residual sleep apnea. The absolute agreement between PSG and auto-CPAP in classifying the respiratory events correctly varied from very low (obstructive apneas, hypopneas) to moderate (central apneas).
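Bland-Altman agreement, as used above to compare the device-reported AHI-auto with the manually scored RDI, reduces to the mean difference (bias) and the 95% limits of agreement. A sketch on invented paired scores (not the study's data):

```python
import statistics

psg_rdi  = [12, 5, 8, 20, 3, 15, 7, 9]   # invented manual PSG scores
auto_ahi = [9,  2, 5, 16, 1, 11, 4, 6]   # invented device-reported indices

# Bland-Altman: bias = mean(auto - manual); LoA = bias +/- 1.96 * SD(diffs)
diffs = [a - p for a, p in zip(auto_ahi, psg_rdi)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(round(bias, 2), [round(v, 2) for v in loa])
```

A negative bias, as in the study's -3.5 figure, means the device systematically under-reports events relative to manual scoring; the LoA width shows how much individual patients can deviate from that average.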
Use of Semi-Autonomous Tools for ISS Commanding and Monitoring
NASA Technical Reports Server (NTRS)
Brzezinski, Amy S.
2014-01-01
As the International Space Station (ISS) has moved into a utilization phase, operations have shifted to become more ground-based, with fewer mission control personnel monitoring and commanding multiple ISS systems. This shift to fewer people monitoring more systems has prompted the use of semi-autonomous console tools in the ISS Mission Control Center (MCC) to help flight controllers command and monitor the ISS. These console tools perform routine operational procedures while keeping the human operator "in the loop" to monitor and intervene when off-nominal events arise. Two such tools, the Pre-positioned Load (PPL) Loader and the Automatic Operations Recorder Manager (AutoORM), are used by the ISS Communications RF Onboard Networks Utilization Specialist (CRONUS) flight control position. CRONUS is responsible for simultaneously commanding and monitoring the ISS Command & Data Handling (C&DH) and Communications and Tracking (C&T) systems. PPL Loader is used to uplink small pieces of frequently changed software data tables, called PPLs, to ISS computers to support different ISS operations. In order to uplink a PPL, a data load command must be built that contains multiple user-input fields. Next, a multi-step commanding and verification procedure must be performed to enable an onboard computer for software uplink, uplink the PPL, verify the PPL has been incorporated correctly, and disable the computer for software uplink. PPL Loader provides different levels of automation in both building and uplinking these commands. In its manual mode, PPL Loader automatically builds the PPL data load commands but allows the flight controller to verify and save the commands for future uplink. In its auto mode, PPL Loader automatically builds the PPL data load commands for flight controller verification and then automatically performs the PPL uplink procedure by sending commands and performing verification checks while notifying CRONUS of procedure step completion.
If an off-nominal condition occurs during procedure execution, PPL Loader notifies CRONUS through popup messages, allowing CRONUS to examine the situation and choose how PPL Loader should proceed with the procedure. The use of PPL Loader to perform frequent, routine PPL uplinks offloads CRONUS to better monitor two ISS systems. It also reduces procedure performance time and decreases the risk of command errors. AutoORM identifies ISS communication outage periods and builds commands to lock, play back, and unlock ISS Operations Recorder files. Operations Recorder files are circular-buffer files of continually recorded ISS telemetry data. Sections of these files can be locked from further writing, played back to capture telemetry data that occurred during an ISS loss-of-signal (LOS) period, and then unlocked for future recording use. Downlinked Operations Recorder files are used by mission support teams for data analysis, especially if failures occur during LOS. The commands to lock, play back, and unlock Operations Recorder files are encompassed in three different operational procedures and contain multiple user-input fields. AutoORM provides different levels of automation for building and uplinking these commands. In its automatic mode, AutoORM automatically detects ISS LOS periods, then generates and uplinks the commands to lock, play back, and unlock Operations Recorder files when MCC regains signal with the ISS. AutoORM also features semi-autonomous and manual modes, which integrate CRONUS more into the command verification and uplink process. AutoORM's ability to automatically detect ISS LOS periods and build the necessary commands to preserve, play back, and release recorded telemetry data greatly offloads CRONUS to perform more high-level cognitive tasks, such as mission planning and anomaly troubleshooting.
Additionally, since Operations Recorder commands contain numerical time input fields which are tedious for a human to manually build, AutoORM's ability to automatically build commands reduces operational command errors. PPL Loader and AutoORM demonstrate principles of semi-autonomous operational tools that will benefit future space mission operations. Both tools employ different levels of automation to perform simple and routine procedures, thereby offloading human operators to perform higher-level cognitive tasks. Because both tools provide procedure execution status and highlight off-nominal indications, the flight controller is able to intervene during procedure execution if needed. Semi-autonomous tools and systems that can perform routine procedures, yet keep human operators informed of execution, will be essential in future long-duration missions where the onboard crew will be solely responsible for spacecraft monitoring and control.
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Botros, Andrew; van Dijk, Bas; Killian, Matthijs
2007-05-01
AutoNRT is an automated system that measures electrically evoked compound action potential (ECAP) thresholds from the auditory nerve with the Nucleus Freedom cochlear implant. ECAP thresholds along the electrode array are useful in objectively fitting cochlear implant systems for individual use. This paper provides the first detailed description of the AutoNRT algorithm and its expert systems, and reports the clinical success of AutoNRT to date. AutoNRT determines thresholds by visual detection, using two decision tree expert systems that automatically recognise ECAPs. The expert systems are guided by a dataset of 5393 neural response measurements. The algorithm approaches threshold from lower stimulus levels, ensuring recipient safety during postoperative measurements. Intraoperative measurements use the same algorithm but proceed faster by beginning at stimulus levels much closer to threshold. When searching for ECAPs, AutoNRT uses a highly specific expert system (specificity of 99% during training, 96% during testing; sensitivity of 91% during training, 89% during testing). Once ECAPs are established, AutoNRT uses an unbiased expert system to determine an accurate threshold. Throughout the execution of the algorithm, recording parameters (such as implant amplifier gain) are automatically optimised when needed. In a study that included 29 intraoperative and 29 postoperative subjects (a total of 418 electrodes), AutoNRT determined a threshold in 93% of cases where a human expert also determined a threshold. When compared to the median threshold of multiple human observers on 77 randomly selected electrodes, AutoNRT performed as accurately as the 'average' clinician. AutoNRT has demonstrated a high success rate and a level of performance that is comparable with human experts. 
It has been used in many clinics worldwide throughout the clinical trial and commercial launch of Nucleus Custom Sound Suite, significantly streamlining the clinical procedures associated with cochlear implant use.
Cannesson, Maxime; Tanabe, Masaki; Suffoletto, Matthew S; McNamara, Dennis M; Madan, Shobhit; Lacomis, Joan M; Gorcsan, John
2007-01-16
We sought to test the hypothesis that a novel 2-dimensional echocardiographic image analysis system using artificial intelligence-learned pattern recognition can rapidly and reproducibly calculate ejection fraction (EF). Echocardiographic EF by manual tracing is time consuming, and visual assessment is inherently subjective. We studied 218 patients (72 female), including 165 with abnormal left ventricular (LV) function. Auto EF incorporated a database trained on >10,000 human EF tracings to automatically locate and track the LV endocardium from routine grayscale digital cineloops and calculate EF in 15 s. Auto EF results were independently compared with manually traced biplane Simpson's rule, visual EF, and magnetic resonance imaging (MRI) in a subset. Auto EF was possible in 200 (92%) of consecutive patients, of which 77% were completely automated and 23% required manual editing. Auto EF correlated well with manual EF (r = 0.98; 6% limits of agreement) and required less time per patient (48 +/- 26 s vs. 102 +/- 21 s; p < 0.01). Auto EF correlated well with visual EF by expert readers (r = 0.96; p < 0.001), but interobserver variability was greater (3.4 +/- 2.9% vs. 9.8 +/- 5.7%, respectively; p < 0.001). Visual EF was less accurate by novice readers (r = 0.82; 19% limits of agreement) and improved with trainee-operated Auto EF (r = 0.96; 7% limits of agreement). Auto EF also correlated with MRI EF (n = 21) (r = 0.95; 12% limits of agreement), but underestimated absolute volumes (r = 0.95; bias of -36 +/- 27 ml overall). Auto EF can automatically calculate EF similarly to results by manual biplane Simpson's rule and MRI, with less variability than visual EF, and has clinical potential.
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems caused by drifting image acquisition conditions, background noise, and high variation in colony features demand a user-friendly, adaptive, and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic, and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
Stitching-aware in-design DPT auto fixing for sub-20nm logic devices
NASA Astrophysics Data System (ADS)
Choi, Soo-Han; Sai Krishna, K. V. V. S.; Pemberton-Smith, David
2017-03-01
As the technology continues to shrink below 20nm, Double Patterning Technology (DPT) becomes one of the mandatory solutions for routing metal layers. From the viewpoint of Place and Route (P&R), the major concern is how to prevent DPT odd-cycles automatically without sacrificing chip area. Even though leading-edge P&R tools have advanced algorithms to prevent DPT odd-cycles, it is very hard to prevent localized DPT odd-cycles, especially in Engineering Change Order (ECO) routing. In the last several years, we developed an In-design DPT Auto Fixing method to significantly reduce localized DPT odd-cycles during ECO, and achieved remarkable design Turn-Around Times (TATs). But as design complexity continued to increase and chip size continued to decrease, we needed a new In-design DPT Auto Fixing approach to improve the auto fixing rate. In this paper, we present the Stitching-Aware In-design DPT Auto Fixing method for better fixing rates and smaller chip designs. The previous In-design DPT Auto Fixing method detected all DPT odd-cycles and tried to remove odd-cycles by increasing the adjacent space. As metal congestion increases in the newer technology nodes, the older Auto Fixing method is limited in how far it can increase the adjacent space between routing metals. Consequently, the auto fixing rate of the older method gets worse with the introduction of the smaller design rules. With DPT stitching enabled in the In-design DRC checking procedure, the new Stitching-Aware DPT Auto Fixing method detects the most critical odd-cycles and resolves them automatically. The accuracy of the new flow ensures better usage of space in congested areas, and helps design smaller chips. By applying the Stitching-Aware DPT Auto Fixing method to sub-20nm logic devices, we confirm that the auto fixing rate is improved by 2X compared with auto fixing without stitching.
Additionally, by developing the better heuristic algorithm and flow for DPT stitching, we can get DPT compliant layout with the acceptable design TATs.
Shenoy, Archana; Blelloch, Robert
2009-09-11
The Microprocessor, containing the RNA binding protein Dgcr8 and RNase III enzyme Drosha, is responsible for processing primary microRNAs to precursor microRNAs. The Microprocessor regulates its own levels by cleaving hairpins in the 5'UTR and coding region of the Dgcr8 mRNA, thereby destabilizing the mature transcript. To determine whether the Microprocessor has a broader role in directly regulating other coding mRNA levels, we integrated results from expression profiling and ultra high-throughput deep sequencing of small RNAs. Expression analysis of mRNAs in wild-type, Dgcr8 knockout, and Dicer knockout mouse embryonic stem (ES) cells uncovered mRNAs that were specifically upregulated in the Dgcr8 null background. A number of these transcripts had evolutionarily conserved predicted hairpin targets for the Microprocessor. However, analysis of deep sequencing data of 18 to 200nt small RNAs in mouse ES, HeLa, and HepG2 cells indicates that exonic sequence reads that map in a pattern consistent with Microprocessor activity are unique to Dgcr8. We conclude that the Microprocessor's role in directly destabilizing coding mRNAs is likely specifically targeted to Dgcr8 itself, suggesting a specialized cellular mechanism for gene auto-regulation.
Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro
2010-04-01
The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only focuses on company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using the data mining approach. The AutoMealRecord data were examined to determine if it could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) with dietary preference was assessed with multiple linear regression analyses, and in the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord System. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool. Copyright 2010 Elsevier Inc. All rights reserved.
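The analysis pipeline described above (principal component analysis of purchase features, then linear regression validated with leave-one-out cross validation) can be sketched with scikit-learn. All data and feature names below are synthetic assumptions, not the study's records.

```python
# Hedged sketch of the PCA + regression + leave-one-out pipeline.
# Synthetic data: 60 "employees" x 8 cafeteria purchase features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
bmi = 22 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=60)

scores = PCA(n_components=5).fit_transform(X)   # 5 "dietary patterns"
pred = cross_val_predict(LinearRegression(), scores, bmi,
                         cv=LeaveOneOut())
acc = np.mean((pred >= 23) == (bmi >= 23))      # "would-be obese" cutoff
print(f"LOO classification accuracy at BMI >= 23: {acc:.2f}")
```

Leave-one-out cross validation fits the model n times, each time predicting the single held-out participant, which matches the validation scheme reported in the abstract.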
Min, Mun Ki; Ryu, Ji Ho; Kim, Yong In; Park, Maeng Real; Park, Yong Myeon; Park, Sung Wook; Yeom, Seok Ran; Han, Sang Kyoon; Kim, Yang Weon
2014-11-01
In an attempt to begin ST-segment elevation myocardial infarction (STEMI) treatment more quickly (referred to as door-to-balloon [DTB] time) by minimizing preventable delays in electrocardiogram (ECG) interpretation, cardiac catheterization laboratory (CCL) activation was changed from activation by the emergency physician (code heart I) to activation by a single page if the ECG is interpreted as STEMI by the ECG machine (ECG machine auto-interpretation) (code heart II). We sought to determine the impact of ECG machine auto-interpretation on CCL activation. The study period was from June 2010 to May 2012 (from June to November 2011, code heart I; from December 2011 to May 2012, code heart II). All patients aged 18 years or older who were diagnosed with STEMI were evaluated for enrollment. Patients who experienced the code heart system were also included. Door-to-balloon times before and after introduction of the code heart system were compared via retrospective chart review. In addition, to determine the appropriateness of the activation, we compared the coronary angiography performance rate and percentage of STEMI between code heart I and II. After the code heart system, the mean DTB time was significantly decreased (before, 96.51 ± 65.60 minutes; after, 65.40 ± 26.40 minutes; P = .043). The STEMI diagnosis and the coronary angiography performance rates were significantly lower in the code heart II group than in the code heart I group without difference in DTB time. Cardiac catheterization laboratory activation by ECG machine auto-interpretation does not reduce DTB time and often unnecessarily activates the code heart system compared with emergency physician-initiated activation. This system therefore decreases the appropriateness of CCL activation. Copyright © 2014 Elsevier Inc. All rights reserved.
Zebra: An advanced PWR lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, L.; Wu, H.; Zheng, Y.
2012-07-01
This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results obtained during validation demonstrate that the code has good precision and high efficiency.
Qin, Wanhai; Wang, Lei; Zhai, Ruidong; Ma, Qiuyue; Liu, Jianfang; Bao, Chuntong; Zhang, Hu; Sun, Changjiang; Feng, Xin; Gu, Jingmin; Du, Chongtao; Han, Wenyu; Langford, P R; Lei, Liancheng
2016-01-01
Actinobacillus pleuropneumoniae is an important pathogen that causes respiratory disease in pigs. Trimeric autotransporter adhesin (TAA) is a recently discovered bacterial virulence factor that mediates bacterial adhesion and colonization. Two TAA coding genes have been found in the genome of A. pleuropneumoniae strain 5b L20, but whether they contribute to bacterial pathogenicity is unclear. In this study, we used homologous recombination to construct a double-gene deletion mutant, ΔTAA, in which both TAA coding genes were deleted and used it in in vivo and in vitro studies to confirm that TAAs participate in bacterial auto-aggregation, biofilm formation, cell adhesion and virulence in mice. A microarray analysis was used to determine whether TAAs can regulate other A. pleuropneumoniae genes during interactions with porcine primary alveolar macrophages. The results showed that deletion of both TAA coding genes up-regulated 36 genes, including ene1514, hofB and tbpB2, and simultaneously down-regulated 36 genes, including lgt, murF and ftsY. These data illustrate that TAAs help to maintain full bacterial virulence both directly, through their bioactivity, and indirectly by regulating the bacterial type II and IV secretion systems and regulating the synthesis or secretion of virulence factors. This study not only enhances our understanding of the role of TAAs but also has significance for those studying A. pleuropneumoniae pathogenesis.
Breedveld, Sebastiaan; Voet, Peter W. J.; Heijkoop, Sabrina T.; Mens, Jan-Willem M.; Hoogeman, Mischa S.; Heijmen, Ben J. M.
2016-01-01
Purpose To develop and validate fully automated generation of VMAT plan-libraries for plan-of-the-day adaptive radiotherapy in locally-advanced cervical cancer. Material and Methods Our framework for fully automated treatment plan generation (Erasmus-iCycle) was adapted to create dual-arc VMAT treatment plan libraries for cervical cancer patients. For each of 34 patients, automatically generated VMAT plans (autoVMAT) were compared to manually generated, clinically delivered 9-beam IMRT plans (CLINICAL), and to dual-arc VMAT plans generated manually by an expert planner (manVMAT). Furthermore, all plans were benchmarked against 20-beam equi-angular IMRT plans (autoIMRT). For all plans, a PTV coverage of 99.5% by at least 95% of the prescribed dose (46 Gy) had the highest planning priority, followed by minimization of V45Gy for small bowel (SB). Other OARs considered were bladder, rectum, and sigmoid. Results All plans had a highly similar PTV coverage, within the clinical constraints (above). After plan normalizations for exactly equal median PTV doses in corresponding plans, all evaluated OAR parameters in autoVMAT plans were on average lower than in the CLINICAL plans with an average reduction in SB V45Gy of 34.6% (p<0.001). For 41/44 autoVMAT plans, SB V45Gy was lower than for manVMAT (p<0.001, average reduction 30.3%), while SB V15Gy increased by 2.3% (p = 0.011). AutoIMRT reduced SB V45Gy by another 2.7% compared to autoVMAT, while also resulting in a 9.0% reduction in SB V15Gy (p<0.001), but with a prolonged delivery time. Differences between manVMAT and autoVMAT in bladder, rectal and sigmoid doses were ≤ 1%. Improvements in SB dose delivery with autoVMAT instead of manVMAT were higher for empty bladder PTVs compared to full bladder PTVs, due to differences in concavity of the PTVs. Conclusions Quality of automatically generated VMAT plans was superior to manually generated plans. 
Automatic VMAT plan generation for cervical cancer has been implemented in our clinical routine. Due to the achieved workload reduction, extension of plan libraries has become feasible. PMID:28033342
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates of bone, kidneys, ABVs, and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs, and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have potential utility in clinical use.
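The 3D region-growing step can be illustrated with a minimal sketch: grow a connected region outward from a seed voxel, accepting neighbors whose intensity is close to the seed's. This is a generic stand-in under stated assumptions, not the authors' implementation; the "auto-adapted" tolerance here is simply one standard deviation of the volume.

```python
# Minimal 3D region growing from a seed voxel with a simple
# auto-adapted intensity tolerance (illustrative only).
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, tol=None):
    """Grow a 6-connected region around `seed`; `tol` defaults to one
    standard deviation of the volume as a crude auto-adapted choice."""
    if tol is None:
        tol = vol.std()
    ref = vol[seed]
    mask = np.zeros(vol.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < vol.shape[i] for i in range(3))
                    and not mask[n] and abs(vol[n] - ref) <= tol):
                mask[n] = True
                q.append(n)
    return mask

# toy volume: a bright 3x3x3 cube standing in for a vessel segment
vol = np.zeros((10, 10, 10)); vol[4:7, 4:7, 4:7] = 100.0
mask = region_grow_3d(vol, (5, 5, 5))
print(mask.sum())  # -> 27 voxels: exactly the bright cube
```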
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... least one dollar. With respect to the rebate for Zero Display Orders that add liquidity in AutoEx that... in the Automatic Execution Mode of order interaction (``AutoEx'') \\3\\ priced at least one dollar. Certain conforming changes are also proposed for rebates for liquidity adding Zero Display Orders \\4\\ in...
Ivanova, E I; Popkova, S M; Dzhioev, Iu P; Rakova, E B; Dolgikh, V V; Savel'kaeva, M V; Nemchenko, U M; Bukharova, E V; Serdiuk, L V
2015-01-01
E. coli is a commensal of the vertebrate intestine. The exchange of genetic material among different types of bacteria, and with other representatives of the family Enterobacteriaceae in the intestinal ecosystem, results in the development of types of normal colibacillus with genetic characteristics of pathogenicity, which can serve as a theoretical basis for attributing such strains to pathobionts. Entero-pathogenic colibacillus continues to be an important cause of diarrhea in children in developing countries. The gene responsible for the formation of pili binding is a necessary condition for the virulence of entero-pathogenic colibacillus. The polymerase chain reaction was applied to examine 316 strains of different types of E. coli (normal, with weak enzyme activity, and with hemolytic activity), isolated from healthy children and children with functional disorders of the gastro-intestinal tract, for the presence of genes coding the capability to form pili binding. The presence of this gene in different biochemical types of E. coli permits establishing the formation of a reservoir of pathogenicity in the indigenous microbiota of the intestinal biocenosis.
Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L
2015-11-18
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
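The volume-agreement check above uses Spearman's rank correlation, which compares rank orderings rather than raw values. A minimal sketch with SciPy, on invented volumes (not the study's measurements):

```python
# Hedged sketch: rank agreement between manually delineated and
# auto-segmented tumor sub-volumes (values in cc are illustrative).
from scipy.stats import spearmanr

manual_cc = [12.1, 30.5, 8.2, 55.0, 21.7, 40.3, 15.9, 27.4]
auto_cc   = [13.0, 28.9, 9.5, 57.2, 20.1, 43.8, 14.7, 29.0]

r, p = spearmanr(manual_cc, auto_cc)
print(f"Spearman r = {r:.2f} (p = {p:.3f})")
```

Because Spearman's r operates on ranks, it is robust to a systematic over- or under-segmentation bias as long as the ordering of patients by volume is preserved.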
iSS-PC: Identifying Splicing Sites via Physical-Chemical Properties Using Deep Sparse Auto-Encoder.
Xu, Zhao-Chun; Wang, Peng; Qiu, Wang-Ren; Xiao, Xuan
2017-08-15
Gene splicing is one of the most significant biological processes in eukaryotic gene expression, such as RNA splicing, which can cause a pre-mRNA to produce one or more mature messenger RNAs containing the coded information with multiple biological functions. Thus, identifying splicing sites in DNA/RNA sequences is significant for both biomedical research and the discovery of new drugs. However, it is expensive and time consuming based only on experimental techniques, so new computational methods are needed. To identify the splice donor sites and splice acceptor sites accurately and quickly, a deep sparse auto-encoder model with two hidden layers, called iSS-PC, was constructed based on the minimum error law, in which we incorporated twelve physical-chemical properties of the dinucleotides within DNA into PseDNC to formulate given sequence samples via a battery of cross-covariance and auto-covariance transformations. In this paper, five-fold cross-validation test results based on the same benchmark datasets indicated that the new predictor remarkably outperformed the existing prediction methods in this field. Furthermore, it is expected that many other related problems can also be studied by this approach. To implement classification accurately and quickly, an easy-to-use web-server for identifying splicing sites has been established for free access at: http://www.jci-bioinfo.cn/iSS-PC.
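The auto-covariance transformation mentioned above turns a variable-length sequence of per-position property values into a fixed-length feature vector. The sketch below shows the idea under assumptions: the property values per dinucleotide are invented, and the exact normalization the authors use may differ.

```python
# Illustrative auto-covariance feature transformation: for each lag g,
# compute the mean covariance of property values g positions apart.
import numpy as np

def auto_covariance(prop_values, max_lag=3):
    v = np.asarray(prop_values, dtype=float)
    mu = v.mean()
    return [float(np.mean((v[:-g] - mu) * (v[g:] - mu)))
            for g in range( 1, max_lag + 1)]

# hypothetical physical-chemical property value per dinucleotide
seq_props = [0.2, 0.5, 0.1, 0.8, 0.4, 0.9, 0.3, 0.6]
features = auto_covariance(seq_props)
print([round(f, 4) for f in features])  # 3 fixed-length features
```

Regardless of sequence length, the output always has `max_lag` entries, which is what makes such features usable as inputs to a fixed-size model like an auto-encoder.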
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T; Lockamy, V; Anne, P
2016-06-15
Purpose: Recently an ultrafast automatic planning system for breast irradiation using tangential beams was developed by modeling relationships between patient anatomy and achieved dose distribution. This study evaluates the performance of this system when applied to a different patient population and dose calculation algorithm. Methods: The system and its anatomy-to-dose models were developed at institution A based on 20 cases, which were planned using a manual fluence painting technique and calculated WITH heterogeneity correction. Institution B uses a field-in-field planning technique and dose calculation WITHOUT heterogeneity correction. 11 breast cases treated at Institution B were randomly selected for retrospective study, including left and right sides, and different breast sizes (irradiated volumes defined by Jaw/MLC opening range from 875cc to 3516cc). Comparisons between plans generated automatically (Auto-Plans) and those used for treatment (Clinical-Plans) included: energy choice (single/mixed), volumes receiving 95%/100%/105%/110% Rx dose (V95%/V100%/V105%/V110%) relative to irradiated volume, D1cc, and LungV20Gy. Results: In 9 out of 11 cases the single/mixed energy choice made by the software agreed with the Clinical-Plans. For the remaining 2 cases the software recommended using mixed energy and dosimetric improvements were observed. V100% was similar (p=0.223, Wilcoxon Signed-Rank test) between Auto-Plans and Clinical-Plans (57.6±8.9% vs. 54.8±9.5%). V95% was 2.3±3.0% higher for Auto-Plans (p=0.027), indicating reduced cold areas. Hot spot volume V105% was significantly reduced in Auto-Plans by 14.4±7.2% (p=0.004). Absolute V105% was reduced from 395.6±359.9cc for Clinical-Plans to 108.7±163cc for Auto-Plans. D1cc was 107.4±2.8% for Auto-Plans, and 109.2±2.4% for Clinical-Plans (p=0.056). LungV20Gy was 13.6±4.0% for Auto-Plans vs. 14.0±4.1% for Clinical-Plans (p=0.043). All optimizations were finished within 1.5min.
Conclusion: The performance of this breast auto-planning system remained stable and satisfactory when applied to a different patient population and dose calculation algorithm. The auto-planning system was able to produce clinically similar Rx dose coverage with significantly improved homogeneity inside breast tissue, in less than 1.5min.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B
Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2cm. Using a logit model the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5cm (brain), 0.75cm (mandible, cord), 1cm (brainstem, cochlea), or 1.25cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, detectable shifts of mandible and brainstem were reduced by 0.25cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified.
This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
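A logit model of the kind described, with mean distance to agreement and true positive rate as covariates, can be sketched with scikit-learn's logistic regression. The training data below are synthetic and the decision threshold is the default 0.5, so this is an illustration of the modeling approach, not the authors' fitted model.

```python
# Hedged sketch: logistic-regression model that flags a contour as
# "shifted" from two agreement covariates. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# covariates: mean distance to agreement (mm), true positive rate
mda_ok,  tpr_ok  = rng.normal(2.0, 0.5, n), rng.normal(0.95, 0.02, n)
mda_bad, tpr_bad = rng.normal(9.0, 1.5, n), rng.normal(0.60, 0.10, n)
X = np.column_stack([np.concatenate([mda_ok, mda_bad]),
                     np.concatenate([tpr_ok, tpr_bad])])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = shifted contour

model = LogisticRegression().fit(X, y)
flagged = model.predict([[10.0, 0.55]])[0]      # a clearly shifted case
print("flagged as shifted:", bool(flagged))
```

Sensitivity and specificity at a chosen operating point would then determine the minimum detectable shift per structure, as in the abstract.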
Song, Dandan; Li, Ning; Liao, Lejian
2015-01-01
Because whole-exome sequencing technologies generate enormous amounts of data at lower cost and in shorter time, they provide dramatic opportunities for identifying disease genes implicated in Mendelian disorders. Since thousands of genomic variants can be sequenced in each exome, it is challenging to filter pathogenic variants in protein-coding regions while keeping the number of missed true variants low. Therefore, an automatic and efficient pipeline for finding disease variants in Mendelian disorders is designed by exploiting a combination of variant filtering steps to analyze family-based exome sequencing data. Recent studies on Freeman-Sheldon disease are revisited and show that the proposed method outperforms other existing candidate gene identification methods.
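One typical filtering step in such a family-based pipeline can be sketched as a predicate over genotypes: under a recessive inheritance model, keep coding variants homozygous in the affected child and heterozygous in both parents. The field names, genotype encoding, and gene names below are assumptions for illustration, not the paper's pipeline.

```python
# Hedged sketch of one family-based variant filtering step
# (recessive model). Record fields are hypothetical.
def filter_recessive(variants):
    """Keep coding variants consistent with a recessive model:
    child homozygous alt, both parents heterozygous carriers."""
    return [v for v in variants
            if v["region"] == "coding"
            and v["child_gt"] == "1/1"
            and v["father_gt"] == "0/1"
            and v["mother_gt"] == "0/1"]

variants = [
    {"gene": "GENE_A", "region": "coding",   "child_gt": "1/1",
     "father_gt": "0/1", "mother_gt": "0/1"},
    {"gene": "GENE_B", "region": "intronic", "child_gt": "1/1",
     "father_gt": "0/1", "mother_gt": "0/1"},
    {"gene": "GENE_C", "region": "coding",   "child_gt": "0/1",
     "father_gt": "0/1", "mother_gt": "0/0"},
]
print([v["gene"] for v in filter_recessive(variants)])  # -> ['GENE_A']
```

A real pipeline would chain several such predicates (quality, population frequency, predicted impact, inheritance model) to shrink thousands of variants down to a handful of candidates.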
A study experiment of auto idle application in the excavator engine performance
NASA Astrophysics Data System (ADS)
Purwanto, Wawan; Maksum, Hasan; Putra, Dwi Sudarno; Azmi, Meri; Wahyudi, Retno
2016-03-01
The purpose of this study was to analyze the effect of applying auto idle on excavator engine performance, such as machine utilization and fuel consumption. The approach is to modify systems JA 44 and 67 in the Vehicle Electronic Control Unit (V-ECU). The effect of the modifications is observed in the pattern of the engine speed: if the excavator attachment is not operated, the engine speed returns to the idle speed automatically. The experimental results show that auto idle reduces fuel consumption in the excavator engine.
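The auto-idle behaviour described can be reduced to a tiny control rule: if no attachment input has been seen for a hold time, command idle speed; otherwise command the working speed. The RPM values and hold time below are invented for illustration, not the excavator's actual calibration.

```python
# Toy sketch of auto-idle control logic (all constants hypothetical).
IDLE_RPM, WORK_RPM, HOLD_S = 800, 1800, 5

def target_rpm(seconds_since_input):
    """Drop to idle once the attachment has been unused for HOLD_S."""
    return IDLE_RPM if seconds_since_input >= HOLD_S else WORK_RPM

print(target_rpm(1))   # operator active -> 1800
print(target_rpm(10))  # attachment unused -> 800
```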
AutoLock: a semiautomated system for radiotherapy treatment plan quality control
Lowe, Matthew; Hardy, Mark J.; Boylan, Christopher J.; Whitehurst, Philip; Rowbottom, Carl G.
2015-01-01
A semiautomated system for radiotherapy treatment plan quality control (QC), named AutoLock, is presented. AutoLock is designed to augment treatment plan QC by automatically checking aspects of treatment plans that are well suited to computational evaluation, whilst summarizing more subjective aspects in the form of a checklist. The treatment plan must pass all automated checks and all checklist items must be acknowledged by the planner as correct before the plan is finalized. Thus AutoLock uniquely integrates automated treatment plan QC, an electronic checklist, and plan finalization. In addition to reducing the potential for the propagation of errors, the integration of AutoLock into the plan finalization workflow has improved efficiency at our center. Detailed audit data are presented, demonstrating that the treatment plan QC rejection rate fell by around a third following the clinical introduction of AutoLock. PACS number: 87.55.Qr PMID:26103498
AutoLock: a semiautomated system for radiotherapy treatment plan quality control.
Dewhurst, Joseph M; Lowe, Matthew; Hardy, Mark J; Boylan, Christopher J; Whitehurst, Philip; Rowbottom, Carl G
2015-05-08
A semiautomated system for radiotherapy treatment plan quality control (QC), named AutoLock, is presented. AutoLock is designed to augment treatment plan QC by automatically checking aspects of treatment plans that are well suited to computational evaluation, whilst summarizing more subjective aspects in the form of a checklist. The treatment plan must pass all automated checks and all checklist items must be acknowledged by the planner as correct before the plan is finalized. Thus AutoLock uniquely integrates automated treatment plan QC, an electronic checklist, and plan finalization. In addition to reducing the potential for the propagation of errors, the integration of AutoLock into the plan finalization workflow has improved efficiency at our center. Detailed audit data are presented, demonstrating that the treatment plan QC rejection rate fell by around a third following the clinical introduction of AutoLock.
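The integration AutoLock describes, automated checks plus a checklist that must be fully acknowledged before finalization, can be sketched as a small gate function. This is an illustrative sketch, not the AutoLock implementation; the check names, limits, and checklist items are assumptions.

```python
# Hedged sketch: a plan may only be finalized when every automated
# check passes AND every checklist item is acknowledged by the planner.
AUTO_CHECKS = {
    "dose_within_tolerance": lambda p: p["max_dose"] <= 1.07 * p["rx_dose"],
    "correct_machine":       lambda p: p["machine"] == "LinacA",
}
CHECKLIST = ["patient orientation verified", "imaging protocol attached"]

def can_finalize(plan, acknowledged):
    auto_ok = all(check(plan) for check in AUTO_CHECKS.values())
    checklist_ok = all(item in acknowledged for item in CHECKLIST)
    return auto_ok and checklist_ok

plan = {"max_dose": 52.0, "rx_dose": 50.0, "machine": "LinacA"}
print(can_finalize(plan, acknowledged=CHECKLIST))  # -> True
print(can_finalize(plan, acknowledged=[]))         # -> False
```

Coupling the two gates in one finalization step is what prevents a plan from leaving the planning system with either an objective failure or an unreviewed subjective item.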
Faller, Josef; Scherer, Reinhold; Friedrich, Elisabeth V. C.; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.
2014-01-01
Individuals with severe motor impairment can use event-related desynchronization (ERD) based BCIs as assistive technology. Auto-calibrating and adaptive ERD-based BCIs that users control with motor imagery tasks (“SMR-AdBCI”) have proven effective for healthy users. We aim to find an improved configuration of such an adaptive ERD-based BCI for individuals with severe motor impairment as a result of spinal cord injury (SCI) or stroke. We hypothesized that an adaptive ERD-based BCI that automatically selects a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (“Auto-AdBCI”) could allow for higher control performance than a conventional SMR-AdBCI. To answer this question we performed offline analyses on two sessions (21 data sets total) of cue-guided, five-class electroencephalography (EEG) data recorded from individuals with SCI or stroke. On data from the twelve individuals in Session 1, we first identified three bipolar derivations for the SMR-AdBCI. In a similar way, we determined three bipolar derivations and four mental tasks for the Auto-AdBCI. We then simulated both the SMR-AdBCI and the Auto-AdBCI configurations on the unseen data from the nine participants in Session 2 and compared the results. On the unseen data of Session 2 from individuals with SCI or stroke, we found that automatically selecting a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (Auto-AdBCI) significantly (p < 0.01) improved classification performance compared to an adaptive ERD-based BCI that only used motor imagery tasks (SMR-AdBCI; average accuracy of 75.7% vs. 66.3%). PMID:25368546
Small UAV Automatic Ground Collision Avoidance System Design Considerations and Flight Test Results
NASA Technical Reports Server (NTRS)
Sorokowski, Paul; Skoog, Mark; Burrows, Scott; Thomas, SaraKatie
2015-01-01
The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center Small Unmanned Aerial Vehicle (SUAV) Automatic Ground Collision Avoidance System (Auto GCAS) project demonstrated several important collision avoidance technologies. First, the SUAV Auto GCAS design included capabilities to take advantage of terrain avoidance maneuvers flying turns to either side as well as straight over terrain. Second, the design also included innovative digital elevation model (DEM) scanning methods. The combination of multi-trajectory options and new scanning methods demonstrated the ability to reduce the nuisance potential of the SUAV while maintaining robust terrain avoidance. Third, the Auto GCAS algorithms were hosted on the processor inside a smartphone, providing a lightweight hardware configuration for use in either the ground control station or on board the test aircraft. Finally, compression of DEM data for the entire Earth and successful hosting of that data on the smartphone was demonstrated. The SUAV Auto GCAS project demonstrated that together these methods and technologies have the potential to dramatically reduce the number of controlled flight into terrain mishaps across a wide range of aviation platforms with similar capabilities including UAVs, general aviation aircraft, helicopters, and model aircraft.
Uzun, O; Topuz, O; Tinaz, C; Nekoofar, M H; Dummer, P M H
2008-09-01
To evaluate ex vivo the accuracy of the integrated electronic root canal length measurement devices within the TCM Endo V and Tri Auto ZX motors whilst removing gutta-percha and sealer from filled root canals. Forty freshly extracted maxillary and mandibular incisor teeth with mature apices were selected. Following access cavity preparation, the length of the root canals was measured visually 0.5 mm short of the major foramen (TL). The canals were prepared using the HERO 642 system and then filled with gutta-percha and AH26 sealer using a lateral compaction technique. After 7 days the coronal temporary filling was removed and the roots mounted in an alginate experimental model. The roots were then randomly divided into two groups. The access cavities were filled with chloroform to soften the gutta-percha and allow its penetration using the Tri Auto ZX and the TCM Endo V devices in groups 1 and 2, respectively. The 'automatic apical reverse function' (ARL) of both devices was set to start at the 0.5 setting and the rotary instrument inserted inside the root canal until a beeping sound was heard and the rotation of the file stopped automatically. Once the auto reverse function had been initiated, the foot pedal of the motor was inactivated and the rubber stop placed against the reference point. The distance between the file tip and rubber stop was measured using a digital calliper to 0.01 mm accuracy (ARL). Then, a size 20, 0.02 taper instrument was attached to each device and inserted into the root canals without rotary motion until the integrated ERCLMDs positioned the instrument tips at the 0.5 setting as suggested by the devices. This length was again measured using a digital calliper (EL). The Mann-Whitney U-test was used to investigate statistical differences between the true canal length and the lengths indicated by the two devices when used in 'automatic' apical reverse mode (ARL) and when inserted passively (EL).
In the presence of gutta-percha, sealer and chloroform, the auto-reverse function of the Tri Auto ZX and TCM Endo V, set to start at the 0.5 level, was initiated beyond the foramen in 60% and 95% of the samples, respectively, during active (rotary) penetration of the instruments. There was a statistically significant difference between the devices for the mean discrepancies between the length at which the auto-reverse function was initiated and the true length (P < 0.001). Electronic detection of the apical terminus when the instruments were introduced passively (not rotating) was beyond the foramen in 20% and 37% of cases in the Tri Auto ZX group and the TCM Endo V group, respectively. There was a statistically significant difference between the devices for the mean discrepancies between the electronically determined (passive) length and the true length (P < 0.01). The auto-reverse function of the Tri Auto ZX and TCM Endo V devices, set to start at the 0.5 level, was initiated beyond the foramen in the majority of root-filled teeth during active (rotating) penetration of root filling material. Thus, this automatic function must be used with caution when removing gutta-percha root fillings. There were significant differences between the accuracy of measurements in active (rotating) and passive (not-rotating) modes; both devices were more accurate when used in passive mode. However, the Tri Auto ZX was significantly more accurate in a greater proportion of cases.
Testing First-Order Logic Axioms in AutoCert
NASA Technical Reports Server (NTRS)
Ahn, Ki Yung; Denney, Ewen
2009-01-01
AutoCert [2] is a formal verification tool for machine-generated code in safety-critical domains, such as aerospace control code generated from MathWorks Real-Time Workshop. AutoCert uses Automated Theorem Provers (ATPs) [5] based on First-Order Logic (FOL) to formally verify safety and functional correctness properties of the code. These ATPs try to build proofs based on user-provided domain-specific axioms, which can be arbitrary First-Order Formulas (FOFs). These axioms are the most crucial part of the trusted base, since proofs can be submitted to a proof checker, removing the need to trust the prover, and AutoCert itself plays the part of checking the code generator. However, formulating axioms correctly (i.e. precisely as the user had really intended) is non-trivial in practice. The challenge of axiomatization arises along several dimensions. First, the domain knowledge has its own complexity. AutoCert has been used to verify mathematical requirements on navigation software that carries out various geometric coordinate transformations involving matrices and quaternions. Axiomatic theories for such constructs are complex enough that mistakes are not uncommon. Second, adjusting axioms for ATPs can add even more complexity. The axioms frequently need to be modified in order to have them in a form suitable for use with ATPs. Such modifications tend to obscure the axioms further. Third, judging the validity of the axioms from the output of existing ATPs is very hard, since theorem provers typically do not give any examples or counterexamples.
Investigating the Simulink Auto-Coding Process
NASA Technical Reports Server (NTRS)
Gualdoni, Matthew J.
2016-01-01
Model-based program design is the clearest and most direct way to develop algorithms and programs for interfacing with hardware. While coding "by hand" results in a more tailored product, the ever-growing size and complexity of modern-day applications can cause the project work load to quickly become unreasonable for one programmer. This has generally been addressed by splitting the product into separate modules to allow multiple developers to work in parallel on the same project; however, this introduces new potential for errors in the process. The fluidity, reliability and robustness of the code rely on the abilities of the programmers to communicate their methods to one another; furthermore, multiple programmers invite multiple potentially differing coding styles into the same product, which can cause a loss of readability or even module incompatibility. Fortunately, MathWorks has implemented an auto-coding feature that allows programmers to design their algorithms through the use of models and diagrams in the graphical programming environment Simulink, allowing the designer to visually determine what the hardware is to do. From here, the auto-coding feature handles converting the project into another programming language. This type of approach allows the designer to clearly see how the software will be directing the hardware without the need to try to interpret large amounts of code. In addition, it speeds up the programming process, minimizing the amount of man-hours spent on a single project, thus reducing the chance of human error as well as project turnover time. One such project that has benefited from the auto-coding procedure is Ramses, a portion of the GNC flight software on board Orion that has been implemented primarily in Simulink. Currently, however, auto-coding Ramses into C++ requires 5 hours of code generation time.
This causes issues if the tool ever needs to be debugged, as this code generation will need to occur with each edit to any part of the program; additionally, this is lost time that could be spent testing and analyzing the code. This is one of the more prominent issues with the auto-coding process, and while much information is available with regard to optimizing Simulink designs to produce efficient and reliable C++ code, not much research has been made public on how to reduce the code generation time. It is of interest to develop some insight into what causes code generation times to be so significant, and to determine if there are architecture guidelines or a desirable auto-coding configuration set to assist in streamlining this step of the design process for particular applications. To address the issue at hand, the Simulink coder was studied at a foundational level. For each different component type made available by the software, the features, auto-code generation time, and the format of the generated code were analyzed and documented. Tools were developed and documented to expedite these studies, particularly in the area of automating sequential builds to ensure accurate data were obtained. Next, the Ramses model was examined in an attempt to determine the composition and the types of technologies used in the model. This enabled the development of a model that uses similar technologies but takes a fraction of the time to auto-code, reducing the turnaround time for experimentation. Lastly, the model was used to run a wide array of experiments and collect data to obtain knowledge about where to search for bottlenecks in the Ramses model.
The resulting contributions of the overall effort consist of an experimental model for further investigation into the subject, as well as several automation tools to assist in analyzing the model, and a reference document offering insight into the auto-coding process, including documentation of the tools used in the model analysis, data illustrating some potential problem areas in the auto-coding process, and recommendations on areas or practices in the current Ramses model that should be further investigated. Several skills had to be built up over the course of the internship project. First and foremost, my Simulink skills have improved drastically, as much of my experience had been modeling electronic circuits as opposed to software models. Furthermore, I am now comfortable working with the Simulink Auto-coder, a tool I had never used until this summer; this tool also tested my critical thinking and C++ knowledge, as I had to interpret the C++ code it was generating and attempt to understand how the Simulink model affected the generated code. I had come into the internship with a solid understanding of Matlab code, but had done very little in using it to automate tasks, particularly Simulink tasks; along the same lines, I had rarely used shell scripts to automate and interface with programs, which I gained a fair amount of experience with this summer, including how to use regular expressions. Lastly, soft skills are an area everyone can continuously improve on; having never worked with NASA engineers, who to me seem to be a completely different breed than what I am used to (commercial electronics engineers), I learned to utilize the wealth of knowledge present at JSC. I wish I had come into the internship knowing exactly how helpful everyone in my branch would be, as I would have picked up on this sooner.
I hope that having gained such a strong foundation in Simulink over this summer will open the opportunity to return to work on this project, or potentially other opportunities within the division. The idea of leaving a project I devoted ten weeks to is a hard one to cope with, so having the chance to pick up where I left off sounds appealing; alternatively, I am interested to see if there are any openings in the future that would allow me to work on a project that is more in line with my research in estimation algorithms. Regardless, this summer has been a milestone in my professional career, and I hope this has started a long-term relationship between JSC and myself. I really enjoy the thought of building on my experience here over future summers while I work to complete my PhD at Missouri University of Science and Technology.
Zhu, Jingbo; Liu, Baoyue; Shan, Shibo; Ding, Yanl; Kou, Zinong; Xiao, Wei
2015-08-01
In order to meet the needs of efficient purification of products from natural resources, this paper developed an automatic vacuum liquid chromatographic device (AUTO-VLC) and applied it to the component separation of petroleum ether extracts of Schisandra chinensis (Turcz.) Baill. The device was comprised of a solvent system, a 10-position distribution valve, a 3-position change valve, dynamic axial compression chromatographic columns with three diameters, and a 10-position fraction valve. The programmable logic controller (PLC) S7-200 was adopted to realize the automatic control and monitoring of mobile phase changing, column selection, separation time setting and fraction collection. The separation results showed that six fractions (S1-S6) of different chemical components from 100 g of Schisandra chinensis (Turcz.) Baill. petroleum ether phase were obtained by the AUTO-VLC with a 150 mm diameter dynamic axial compression chromatographic column. A new method for screening the VLC separation parameters by multiple-development TLC was developed and confirmed. The initial mobile phase of the AUTO-VLC was selected by taking the Rf of all the target compounds ranging from 0 to 0.45 for the first development on the TLC; the gradient elution ratio was selected according to the k value (the slope of the linear function of the Rf value and the number of developments on the TLC) and the resolution of the target compounds; the number of elution steps (n) was calculated by the formula n ≈ ΔRf/k. A total of four compounds with purity greater than 85% and 13 other components were separated from S5 under the selected conditions in only 17 h. Therefore, the development of the automatic VLC and its method are significant for the automatic and systematic separation of traditional Chinese medicines.
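The elution-step estimate n ≈ ΔRf/k can be illustrated with a small helper; the Rf values and slope below are hypothetical illustration values, not data from the study.

```python
# Sketch of the elution-step estimate n ≈ ΔRf / k described above.
# rf_start, rf_target and k below are hypothetical, not study data.
import math

def elution_steps(rf_start: float, rf_target: float, k: float) -> int:
    """Estimate the number of TLC development steps needed to move a
    compound from rf_start to rf_target, where k is the slope of the
    linear relation between Rf and the number of developments."""
    if k <= 0:
        raise ValueError("k must be positive")
    return math.ceil((rf_target - rf_start) / k)

# Assumed values: moving a target compound from Rf 0.10 to Rf 0.45
# when each development raises Rf by roughly 0.08.
steps = elution_steps(0.10, 0.45, 0.08)
```

Rounding up reflects that a fractional development is not possible, so the estimate is a minimum whole number of runs.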
Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables
NASA Astrophysics Data System (ADS)
Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.
2015-12-01
We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June 2010 to December 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment.
Ulrich, R.K., Parker, D., Bertello, L. and Boyden, J. 2010, Solar Phys., 261, 11.
Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.
1984-07-01
the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically ...the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to...glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system
A deep auto-encoder model for gene expression prediction.
Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua
2017-11-17
Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors including genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we build a deep auto-encoder model to assess how genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models including Lasso, Random Forests and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely based on genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics.
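The MLP-SAE pipeline can be sketched in miniature: a denoising auto-encoder learns features from genotype vectors, and a regressor predicts expression from the encoded features. The network sizes, learning rate, noise level, and the synthetic genotype data below are illustrative assumptions, not the authors' configuration, and a ridge regression stands in for the multilayer-perceptron head.

```python
# Hedged sketch of a denoising auto-encoder feeding a regression head.
# Sizes, rates, and the toy data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_denoising_autoencoder(X, hidden, epochs=200, lr=0.5, noise=0.2):
    """Train one denoising auto-encoder layer with tied weights."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden))
    for _ in range(epochs):
        X_noisy = X * (rng.random(X.shape) > noise)   # masking noise
        H = sigmoid(X_noisy @ W)                      # encode
        R = sigmoid(H @ W.T)                          # decode (tied)
        err = R - X
        dR = err * R * (1 - R)                        # decoder pre-act grad
        dH = (dR @ W) * H * (1 - H)                   # encoder pre-act grad
        W -= lr * (X_noisy.T @ dH + dR.T @ H) / n     # tied-weight update
    return W

# Toy data: genotype codes 0/1/2 scaled to [0, 1]; expression is a
# noisy linear function of the genotypes.
X = rng.integers(0, 3, (200, 30)).astype(float) / 2.0
true_w = rng.normal(0, 1, 30)
y = X @ true_w + rng.normal(0, 0.1, 200)

W = train_denoising_autoencoder(X, hidden=10)
H = sigmoid(X @ W)                                    # encoded features
# Ridge regression on encoded features stands in for the MLP head.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(10), H.T @ y)
pred = H @ beta
corr = np.corrcoef(pred, y)[0, 1]
```

The real model adds dropout and multiple stacked layers; this sketch only shows the encode/decode/regress structure.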
Auto-recognition of surfaces and auto-generation of material removal volume for finishing process
NASA Astrophysics Data System (ADS)
Kataraki, Pramod S.; Salman Abu Mansor, Mohd
2018-03-01
Auto-recognition of surfaces and auto-generation of material removal volumes for the recognised surfaces have become a need for achieving successful downstream manufacturing activities like automated process planning and scheduling. A few researchers have contributed to the generation of material removal volume for a product, but their methods result in discontinuity between two adjacent material removal volumes generated from two adjacent faces that form a convex geometry. To generate material removal volumes free of this limitation, an algorithm was developed that automatically recognises a computer aided design (CAD) model's surfaces and also auto-generates the material removal volume for the finishing process of the recognised surfaces. The surfaces of a CAD model are successfully recognised by the developed algorithm and the required material removal volume is obtained. The material removal volume discontinuity limitation that occurred in previous studies is eliminated.
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
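The contrast between scalar stream-operator code and a pre-vectorized library primitive can be illustrated with a generic example (here in Python/NumPy rather than Spade's C++ library; the windowed dot product below is an invented stand-in, not an operator from the paper).

```python
# Illustration of the idea behind pre-vectorized operations: a scalar
# per-element loop is replaced by a single call to a library primitive
# that the platform can map to SIMD instructions.
import numpy as np

def dot_scalar(a, b):
    """Scalar reference implementation: one multiply-add per element."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def dot_vectorized(a, b):
    """'Pre-vectorized' primitive: one SIMD-friendly library call."""
    return float(np.dot(a, b))

a = np.arange(8, dtype=float)
b = np.ones(8)
assert abs(dot_scalar(a, b) - dot_vectorized(a, b)) < 1e-12
```

Exposing vectors as a primitive type lets the compiler route such operations to the pre-vectorized implementation instead of emitting the scalar loop.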
NASA Astrophysics Data System (ADS)
An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao
2017-12-01
Static correction is a crucial step of seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for those measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop an extended SVI with (1) the application of reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the usage of a multi-domain method to partially overcome the limitation of the current method when insufficient source-receiver combinations are available. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure, which is based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct mispicks, which might be spurious events generated by the SVI. This procedure is very robust and highly automatic, and it can accommodate large data volumes in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and the field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained from this method is much better than that obtained from an auto-picking method commonly employed by commercial software.
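The statistical quality-control idea can be sketched as follows: a pick that deviates strongly from the locally smooth moveout of its neighboring channels is treated as a mispick and replaced. The window size, the MAD threshold, and the toy moveout are illustrative assumptions, not the authors' actual procedure or parameters.

```python
# Hedged sketch of multichannel pick quality control: flag picks that
# deviate from the running median of neighbors by more than a robust
# threshold, and replace them with that median.
import numpy as np

def correct_mispicks(picks, window=5, threshold=3.0):
    """Replace picks deviating from the median of their neighbors by
    more than `threshold` times the MAD with that median value."""
    picks = np.asarray(picks, dtype=float)
    out = picks.copy()
    half = window // 2
    for i in range(len(picks)):
        lo, hi = max(0, i - half), min(len(picks), i + half + 1)
        nbrs = np.delete(picks[lo:hi], i - lo)       # exclude pick i
        med = np.median(nbrs)
        mad = np.median(np.abs(nbrs - med)) + 1e-9   # robust spread
        if abs(picks[i] - med) > threshold * mad:
            out[i] = med                             # correct mispick
    return out

# Smooth moveout with one spurious pick at channel 10.
t = np.linspace(1.0, 2.0, 21)
t[10] += 0.5                      # simulated mispick
fixed = correct_mispicks(t)
```

The median/MAD pair keeps the test robust: a single outlier barely shifts either statistic, so only the outlier itself is corrected.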
Sun, Ming-Shen; Zhang, Li; Guo, Ning; Song, Yan-Zheng; Zhang, Feng-Ju
2018-01-01
To evaluate and compare the uniformity of angle Kappa adjustment between Oculyzer and Topolyzer Vario topography-guided ablation of laser in situ keratomileusis (LASIK) by the EX500 excimer laser for myopia. A total of 145 cases (290 consecutive eyes) with myopia received LASIK with a target of emmetropia. The ablation for 86 cases (172 eyes) was guided manually based on Oculyzer topography (study group), while the ablation for 59 cases (118 eyes) was guided automatically by Topolyzer Vario topography (control group). Adjustment values were measured in the horizontal and vertical directions of the cornea. Horizontally, synclastic adjustment between the manually applied actual values (dx_manu) and the Oculyzer topography-guided data (dx_ocu) accounted for 35.5% in the study group, with a mean dx_manu/dx_ocu of 0.78±0.48; in the control group, synclastic adjustment between the automatically applied actual values (dx_auto) and the Oculyzer topography data (dx_ocu) accounted for 54.2%, with a mean dx_auto/dx_ocu of 0.79±0.66. Vertically, synclastic adjustment between dy_manu and dy_ocu accounted for 55.2% in the study group, with a mean dy_manu/dy_ocu of 0.61±0.42; in the control group, synclastic adjustment between dy_auto and dy_ocu accounted for 66.1%, with a mean dy_auto/dy_ocu of 0.66±0.65. There was no statistically significant difference in the ratio of actual values to Oculyzer topography-guided data in the horizontal and vertical directions between the two groups (P = 0.951, 0.621). There is high consistency in angle Kappa adjustment guided manually by Oculyzer and guided automatically by Topolyzer Vario topography during corneal refractive surgery with the WaveLight EX500 excimer laser.
Keyphrase based Evaluation of Automatic Text Summarization
NASA Astrophysics Data System (ADS)
Elghannam, Fatma; El-Shishtawy, Tarek
2015-05-01
The development of methods to deal with the informative contents of the text units in the matching process is a major challenge in automatic summary evaluation systems that use fixed n-gram matching. This limitation causes inaccurate matching between units in peer and reference summaries. The present study introduces a new keyphrase-based summary evaluator, KpEval, for evaluating automatic summaries. KpEval relies on keyphrases since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of an Arabic multi-document data set presented at TAC2011. The results showed that the new evaluation technique correlates well with the known evaluation systems: Rouge1, Rouge2, RougeSU4, and AutoSummENG MeMoG. KpEval has the strongest correlation with AutoSummENG MeMoG; the Pearson and Spearman correlation coefficients are 0.8840 and 0.9667, respectively.
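The lemma-level keyphrase matching described above can be sketched as follows; the toy lemmatizer and the example phrases are invented for illustration and are far cruder than a real (Arabic) lemmatizer.

```python
# Hedged sketch of keyphrase-based summary scoring: peer and reference
# keyphrases are compared in lemma form and scored with F1.
# The lemmatizer below is a toy stand-in, not the KpEval component.

def lemmatize(phrase):
    """Toy lemmatizer: lowercase and strip a plural 's' per word."""
    words = phrase.lower().split()
    return " ".join(w[:-1] if w.endswith("s") and len(w) > 3 else w
                    for w in words)

def keyphrase_f1(peer, reference):
    """F1 over the overlap of lemmatized peer/reference keyphrases."""
    peer_set = {lemmatize(p) for p in peer}
    ref_set = {lemmatize(p) for p in reference}
    overlap = len(peer_set & ref_set)
    if not overlap:
        return 0.0
    precision = overlap / len(peer_set)
    recall = overlap / len(ref_set)
    return 2 * precision * recall / (precision + recall)

peer = ["text summarizations", "evaluation system"]
ref = ["text summarization", "evaluation systems", "keyphrase extraction"]
score = keyphrase_f1(peer, ref)
```

Matching on lemmas lets "summarizations" and "summarization" count as the same concept, which fixed n-gram matching would miss.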
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
Automatic license plate reader: a solution to avoiding vehicle pursuit
NASA Astrophysics Data System (ADS)
Jordan, Stanley K.
1997-01-01
The Massachusetts Governor's Auto Theft Strike Force has tested an automatic license plate reader (LPR) to recover stolen cars and catch car thieves, without vehicle pursuit. Experiments were conducted at the Sumner Tunnel in Boston, and proved the feasibility of a LPR for identifying stolen cars instantly. The same technology can be applied to other law-enforcement objectives.
Criteria for Operational Approval of Auto Guidance Systems
DOT National Transportation Integrated Search
1997-03-18
This advisory circular (AC) states an acceptable means, but not the only means, : for obtaining operational approval of the initial engagement or use of an Auto : Flight Guidance System (AFGS) under Title 14 of the Code of Federal Regulations : (14 C...
Code of Federal Regulations, 2010 CFR
2010-10-01
... of the inflation mechanism approved for use on the PFD. (2) [Reserved] (e) Inflation mechanisms. Each manual, automatic, or manual-auto inflation mechanism must be permanently marked with its unique model...
MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.
2011-01-01
MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.
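The core of automatic co-registration can be illustrated for the simplest case: a pure translation recovered by FFT-based cross-correlation. MatchGUI's actual algorithm also handles scale and in-plane rotation, which this sketch omits, and the test images below are synthetic.

```python
# Hedged sketch: recover the integer pixel shift aligning two images
# via FFT cross-correlation (translation only; not MatchGUI's code).
import numpy as np

def find_shift(ref, moving):
    """Return the (row, col) shift that aligns `moving` onto `ref`."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(F).real            # circular cross-correlation
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map indices past the midpoint to negative shifts.
    return tuple(i if i <= s // 2 else i - s
                 for i, s in zip(idx, corr.shape))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moving = np.roll(ref, shift=(-3, 5), axis=(0, 1))  # simulated offset
dy, dx = find_shift(ref, moving)
```

Applying the recovered shift with `np.roll(moving, (dy, dx), axis=(0, 1))` reproduces the reference image, which is the alignment criterion a registration tool works toward.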
SU-F-T-94: Plan2pdf - a Software Tool for Automatic Plan Report for Philips Pinnacle TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, C
Purpose: To implement an automatic electronic PDF plan reporting tool for the Philips Pinnacle treatment planning system (TPS). Methods: An electronic treatment plan reporting software tool, named "plan2pdf", was developed by us to enable fully automatic PDF reporting from Pinnacle TPS to external EMR programs such as MOSAIQ. plan2pdf is implemented using Pinnacle scripts, Java and UNIX shell scripts, without any external program needed. plan2pdf supports a full auto mode and a manual mode of reporting. In full auto mode, with a single mouse click, plan2pdf will generate a detailed Pinnacle plan report in PDF format, which includes a customizable cover page, the Pinnacle plan summary, orthogonal views through each plan POI and the maximum dose point, a DRR for each beam, serial transverse views captured throughout the dose grid at a user-specified interval, and the DVH and scorecard windows. The final PDF report is also automatically bookmarked for each section above for convenient plan review. The final PDF report can either be saved in a user-specified folder on Pinnacle, or it can be automatically exported to an EMR import folder via a user-configured FTP service. In manual capture mode, plan2pdf allows users to capture any Pinnacle plan by full screen, individual window or a rectangular ROI drawn on screen. Furthermore, to avoid possible mix-up of patients' plans during auto-mode reporting, a user conflict check feature is included in plan2pdf: it prompts the user to wait if another patient is being exported by plan2pdf by another user. Results: plan2pdf has been tested extensively and successfully at our institution, which consists of 5 centers, 15 dosimetrists and 10 physicists, running Pinnacle version 9.10 on Enterprise servers. Conclusion: plan2pdf provides a highly efficient, user-friendly and clinically proven platform for all Philips Pinnacle users to generate a detailed plan report in PDF format for external EMR systems.
A New Tool For The Hospital Lab
NASA Technical Reports Server (NTRS)
1979-01-01
The multi-module AutoMicrobic System (AMS), whose development stemmed from space-biomedical research, is an automatic, time-saving system for detecting and identifying disease-producing microorganisms in the human body.
Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans; Swertz, Morris A
2016-07-15
While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
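As a concrete illustration of the three algorithm families the abstract names (unit conversion, categorical value matching, and complex patterns such as BMI calculation), here is a minimal sketch; the attribute names and codings are hypothetical, not MOLGENIS/connect's actual generated output:

```python
# Sketch of the kinds of transformation algorithms MOLGENIS/connect generates
# (hypothetical attribute names; the real system emits these per target DataSchema).

def transform_record(src):
    """Map one source record to a common target schema."""
    # Unit conversion: source stores weight in pounds, target expects kg.
    weight_kg = src["weight_lb"] * 0.453592
    # Categorical value matching: source coding differs from the target's.
    smoking_map = {"current smoker": "SMOKER", "ex-smoker": "FORMER", "never": "NEVER"}
    smoking = smoking_map[src["smoking_status"]]
    # Complex conversion pattern: derive BMI from weight and height.
    height_m = src["height_cm"] / 100.0
    bmi = weight_kg / (height_m ** 2)
    return {"weight": round(weight_kg, 1), "smoking": smoking, "bmi": round(bmi, 1)}

rec = transform_record({"weight_lb": 154.0, "height_cm": 175.0, "smoking_status": "never"})
print(rec)
```

The point of auto-generating such mappings is that each one is mechanical once the attribute match is known, which is why 27% needed no human editing at all.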
[DNA extraction from bones and teeth using AutoMate Express forensic DNA extraction system].
Gao, Lin-Lin; Xu, Nian-Lai; Xie, Wei; Ding, Shao-Cheng; Wang, Dong-Jing; Ma, Li-Qin; Li, You-Ying
2013-04-01
To explore a new method for extracting DNA from bones and teeth automatically. Samples of 33 bones and 15 teeth were prepared by the freeze-mill method and the manual method. DNA was extracted and quantified from the triturated samples with the AutoMate Express forensic DNA extraction system. DNA extraction from bones and teeth was completed within 3 hours using the AutoMate Express forensic DNA extraction system. There was no statistically significant difference between the two methods in the DNA concentration obtained from bones. Both bones and teeth yielded good STR typing with the freeze-mill method, and the DNA concentration obtained from teeth was higher than that obtained by the manual method. The AutoMate Express forensic DNA extraction system is a new method for extracting DNA from bones and teeth that can be applied in forensic practice.
NASA Astrophysics Data System (ADS)
Parker, D. G.; Ulrich, R. K.; Beck, J.
2014-12-01
We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process: (1) it finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) it applies the class definitions thus found to new data sets to automatically identify in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May 2010 to June 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to automatically categorize surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been shown to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D., Bertello, L.
and Boyden, J. 2010, Solar Phys., 261, 11.
Marien, Koen M.; Andries, Luc; De Schepper, Stefanie; Kockx, Mark M.; De Meyer, Guido R.Y.
2015-01-01
Tumor angiogenesis is measured by counting microvessels in tissue sections at high power magnification as a potential prognostic or predictive biomarker. Until now, regions of interest (ROIs) were selected by manual operations within a tumor using a systematic uniform random sampling (SURS) approach. Although SURS is the most reliable sampling method, it implies a high workload. However, SURS can be semi-automated and in this way contribute to the development of a validated quantification method for microvessel counting in the clinical setting. Here, we report a method to use semi-automated SURS for microvessel counting: • Whole slide imaging with Pannoramic SCAN (3DHISTECH) • Computer-assisted sampling in Pannoramic Viewer (3DHISTECH) extended by two self-written AutoHotkey applications (AutoTag and AutoSnap) • The use of digital grids in Photoshop® and Bridge® (Adobe Systems) This rapid procedure allows the traceability essential for high-throughput protein analysis of immunohistochemically stained tissue. PMID:26150998
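SURS places a regular grid of ROIs over the section with a single uniformly random offset per axis, which is what makes it both systematic and unbiased. A minimal sketch (hypothetical slide dimensions; the actual workflow positions these grids via the AutoTag/AutoSnap applications):

```python
import random

def surs_positions(width, height, step, seed=None):
    """Systematic uniform random sampling: a regular grid of ROI positions
    whose origin is shifted by one uniformly random offset per axis, so every
    location in the section has equal probability of being sampled."""
    rng = random.Random(seed)
    x0 = rng.uniform(0, step)  # random offset in [0, step)
    y0 = rng.uniform(0, step)
    return [(x, y)
            for y in frange(y0, height, step)
            for x in frange(x0, width, step)]

def frange(start, stop, step):
    v = start
    while v < stop:
        yield v
        v += step

rois = surs_positions(width=2000, height=1200, step=400, seed=7)
print(len(rois))  # 5 columns x 3 rows = 15 ROI positions
```

Because only the offset is random, the workload is fixed in advance (grid spacing controls the number of ROIs), which is what makes the semi-automation practical.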
Development of an Automatic Differentiation Version of the FPX Rotor Code
NASA Technical Reports Server (NTRS)
Hu, Hong
1996-01-01
The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. An automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference-generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
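The dual-number idea behind forward-mode tools such as ADIFOR can be sketched in a few lines and compared against a divided difference, mirroring the comparison in the abstract; the quadratic test function below is illustrative only, not the FPX flow solver:

```python
class Dual:
    """Forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagates the derivative exactly.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = 2.0
ad = f(Dual(x, 1.0)).dot                 # exact derivative via AD: 14.0
h = 1e-6
dd = (f(x + h) - f(x - h)) / (2 * h)     # divided difference: approximate
print(ad, dd)
```

The AD value is exact to machine precision, while the divided difference trades truncation error against round-off error in the choice of h, which is why the paper's comparison favors AD for both accuracy and efficiency.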
A new approach to the form and position error measurement of the auto frame surface based on laser
NASA Astrophysics Data System (ADS)
Wang, Hua; Li, Wei
2013-03-01
An auto frame is a very large workpiece, up to 12 meters long and 2 meters wide, so measuring it by independent manual operation is inconvenient and far from automatic. In this paper we propose a new approach to reconstruct the 3D model of a large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring form and position errors. For each area of interest, the approach needs just one high-speed camera and two lasers. It is a fast, high-precision and economical approach.
Bayesian classification theory
NASA Technical Reports Server (NTRS)
Hanson, Robin; Stutz, John; Cheeseman, Peter
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and the complexity of the class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
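AutoClass's automatic choice of the number of classes rests on comparing (approximate) Bayesian evidence across models with different class counts. The sketch below substitutes the simpler BIC criterion for AutoClass's marginal-likelihood approximations, applied to a toy 1-D mixture; it illustrates the idea of penalized model comparison, not the AutoClass algorithm itself:

```python
import math, random

def em_gmm_1d(xs, k, iters=100):
    """Tiny 1-D Gaussian-mixture EM; returns the final log-likelihood."""
    data = sorted(xs)
    n = len(data)
    mu = [data[int((j + 0.5) * n / k)] for j in range(k)]  # quantile init
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        resp = []          # E-step: per-point class responsibilities
        for x in xs:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        for j in range(k):  # M-step: re-estimate weights, means, variances
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-3)
    return sum(math.log(sum(w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                            / math.sqrt(2 * math.pi * var[j]) for j in range(k)))
               for x in xs)

# Model selection: BIC stands in for the Bayesian evidence AutoClass compares
# when it "automatically chooses the number of classes".
random.seed(1)
xs = [random.gauss(0, 1) for _ in range(150)] + [random.gauss(8, 1) for _ in range(150)]
def bic(k):                 # 3k - 1 free parameters: k means, k variances, k-1 weights
    return (3 * k - 1) * math.log(len(xs)) - 2 * em_gmm_1d(xs, k)
best_k = min(range(1, 4), key=bic)
print(best_k)  # the two-cluster model should win
```

Adding classes always raises the likelihood, so the complexity penalty is what lets the search stop at the right number, the same trade-off the Bayesian evidence makes automatically.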
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
Note: Digital laser frequency auto-locking for inter-satellite laser ranging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yingxin; Yeh, Hsien-Chi, E-mail: yexianji@mail.hust.edu.cn; Li, Hongyin
2016-05-15
We present a prototype of a laser frequency auto-locking and re-locking control system designed for laser frequency stabilization in an inter-satellite laser ranging system. The controller is implemented on a field-programmable gate array and programmed with LabVIEW software. It performs initial frequency calibration and lock-in of a free-running laser to a Fabry-Pérot cavity. Because it recovers automatically from unlocked conditions, it benefits automated in-orbit operations. The program design and experimental results are presented.
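The auto-locking/re-locking behaviour described above amounts to a small state machine: sweep the laser frequency until a usable error signal appears, close a proportional-integral loop, and fall back to sweeping on loss of lock. The following simulation is a hedged sketch of that logic only (toy error-signal model and gains; the actual controller is FPGA/LabVIEW code):

```python
# A hedged sketch (not the paper's controller) of auto-lock/re-lock logic.

def resonance_error(f, f0=5.0):
    """Toy discriminator: linear near the resonance f0, unusable far away."""
    d = f - f0
    return d if abs(d) < 0.5 else None   # None means "no lock signal"

def autolock(f=0.0, steps=400):
    state, integ, history = "SCAN", 0.0, []
    for t in range(steps):
        if t == 250:
            f -= 3.0                     # disturbance kicks the laser off resonance
        err = resonance_error(f)
        if state == "SCAN":
            if err is None:
                f += 0.05                # keep sweeping upward
            else:
                state, integ = "LOCK", 0.0   # resonance found: engage servo
        if state == "LOCK":
            if err is None:
                state = "SCAN"           # lock lost: automatic re-lock begins
            else:
                integ += 0.02 * err
                f -= 0.4 * err + integ   # PI correction toward resonance
        history.append((state, f))
    return history

hist = autolock()
```

The key property for in-orbit use is visible in the trace: after the injected disturbance the controller drops back to SCAN on its own and re-acquires lock without any operator action.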
Clinical utility of anti-p53 auto-antibody: systematic review and focus on colorectal cancer.
Suppiah, Aravind; Greenman, John
2013-08-07
Mutation of the p53 gene is a key event in the carcinogenesis of many different types of tumours. These mutations can occur throughout the length of the p53 gene. Anti-p53 auto-antibodies are commonly produced in response to these p53 mutations. This review first describes the various mechanisms of p53 dysfunction and their association with subsequent carcinogenesis. Following this, the mechanisms of induction of anti-p53 auto-antibody production are shown, with various hypotheses for the discrepancies between the presence of p53 mutation and the presence/absence of anti-p53 auto-antibodies. A systematic review was performed with a descriptive summary of the key findings of each anti-p53 auto-antibody study in all cancers published in the last 30 years. From this, the cumulative frequency of anti-p53 auto-antibodies in each cancer type is calculated and then compared with the incidence of p53 mutation in each cancer, providing the largest sample calculation and correlation between mutation and anti-p53 auto-antibody published to date. Finally, the review focuses on data for anti-p53 auto-antibodies in colorectal cancer studies, and discusses future strategies, including the potentially promising role of anti-p53 auto-antibody detection in screening and surveillance.
Sommermeyer, Dirk; Zou, Ding; Grote, Ludger; Hedner, Jan
2012-10-15
To assess the accuracy of novel algorithms using an oximeter-based finger plethysmographic signal in combination with a nasal cannula for the detection and differentiation of central and obstructive apneas. The validity of single pulse oximetry to detect respiratory disturbance events was also studied. Patients recruited from four sleep laboratories underwent an ambulatory overnight cardiorespiratory polygraphy recording. The nasal flow and photoplethysmographic signals of the recording were analyzed by automated algorithms. The apnea-hypopnea index (AHI(auto)) was calculated using both signals, and a respiratory disturbance index (RDI(auto)) was calculated from photoplethysmography alone. Apnea events were classified into obstructive and central types using the oximeter-derived pulse wave signal and compared with manual scoring. Sixty-six subjects (42 males, age 54 ± 14 yrs, body mass index 28.5 ± 5.9 kg/m²) were included in the analysis. AHI(manual) (19.4 ± 18.5 events/h) correlated highly significantly with AHI(auto) (19.9 ± 16.5 events/h) and RDI(auto) (20.4 ± 17.2 events/h); the correlation coefficients were r = 0.94 and 0.95, respectively (p < 0.001), with mean differences of -0.5 ± 6.6 and -1.0 ± 6.1 events/h. The automatic analysis of AHI(auto) and RDI(auto) detected sleep apnea (cutoff AHI(manual) ≥ 15 events/h) with a sensitivity/specificity of 0.90/0.97 and 0.86/0.94, respectively. The automated obstructive/central apnea indices correlated closely with manual scoring (r = 0.87 and 0.95, p < 0.001), with mean differences of -4.3 ± 7.9 and 0.3 ± 1.5 events/h, respectively. Automatic analysis based on routine pulse oximetry alone may be used to detect sleep-disordered breathing accurately. In addition, the combination of photoplethysmographic signals with a nasal flow signal provides an accurate distinction between obstructive and central apneic events during sleep.
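The validation arithmetic in this abstract (sensitivity/specificity at an AHI cutoff, plus a mean difference) is easy to make concrete; the paired manual/automatic values below are invented for illustration, not the study's data:

```python
# Hedged illustration of the validation arithmetic: sensitivity/specificity of
# an automatic index against manual scoring at the AHI >= 15 events/h cutoff.
# The paired (manual, automatic) values are made up for the example.

pairs = [(4, 5), (10, 12), (16, 14), (22, 25), (35, 33), (48, 50), (9, 8), (18, 17)]
cutoff = 15

tp = sum(1 for man, auto in pairs if man >= cutoff and auto >= cutoff)
tn = sum(1 for man, auto in pairs if man < cutoff and auto < cutoff)
fp = sum(1 for man, auto in pairs if man < cutoff and auto >= cutoff)
fn = sum(1 for man, auto in pairs if man >= cutoff and auto < cutoff)

sensitivity = tp / (tp + fn)    # fraction of true sleep-apnea cases detected
specificity = tn / (tn + fp)    # fraction of non-cases correctly ruled out
bias = sum(man - auto for man, auto in pairs) / len(pairs)  # mean difference
print(sensitivity, specificity, round(bias, 2))
```

Reporting both the agreement (correlation, mean difference) and the dichotomized accuracy (sensitivity/specificity at a clinical cutoff) is what lets the automatic index be judged for screening use.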
Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer
2014-01-01
Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using data assimilation techniques: the Shuffled Complex Evolution algorithm and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality supports multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land-cover-specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn–wheat rotation and to a large ecological region (Level II), the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land-cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can readily be expanded to incorporate other model-inversion algorithms and potential R packages, and can also be applied to other ecological models.
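The core of any auto-calibration loop is inverting a model against observations by minimizing an objective function. The sketch below uses a toy decay model and plain greedy random search in place of the Shuffled Complex Evolution algorithm and the FME package, purely to illustrate the loop structure:

```python
import math, random

# Minimal stand-in for an auto-calibration loop: fit toy model parameters by
# minimizing the sum of squared errors against "observations". The real system
# uses the Shuffled Complex Evolution algorithm via the FME R package; this
# greedy random-search sketch only illustrates the idea.

def model(a, b, t):
    return a * math.exp(-b * t)          # toy decay curve (hypothetical process)

ts = list(range(10))
obs = [model(50.0, 0.3, t) for t in ts]  # synthetic truth: a=50, b=0.3

def sse(a, b):                           # calibration objective
    return sum((model(a, b, t) - o) ** 2 for t, o in zip(ts, obs))

rng = random.Random(0)
best = (rng.uniform(1, 100), rng.uniform(0.01, 1.0))   # random initial guess
for _ in range(20000):
    # propose a perturbed parameter set; keep it only if the fit improves
    cand = (best[0] + rng.gauss(0, 1.0), best[1] + rng.gauss(0, 0.01))
    if cand[0] > 0 and cand[1] > 0 and sse(*cand) < sse(*best):
        best = cand
print(best)
```

Swapping the proposal/acceptance step for SCE's complex-shuffling, or for another optimizer, leaves the surrounding structure (model run, objective, accept/reject) unchanged, which is why the paper's platform can "readily be expanded" to other inversion algorithms.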
Ballante, Flavio; Marshall, Garland R
2016-01-25
Molecular docking is a widely used technique in drug design to predict the binding pose of a candidate compound in a defined therapeutic target. Numerous docking protocols are available, each characterized by different search methods and scoring functions, thus providing variable predictive capability on the same ligand-protein system. To validate a docking protocol, it is necessary to determine a priori its ability to reproduce the experimental binding pose (i.e., by determining the docking accuracy (DA)) in order to select the most appropriate docking procedure and thus estimate the rate of success in docking novel compounds. As common docking programs generally use different root-mean-square deviation (RMSD) formulas, scoring functions, and result formats, it is both difficult and time-consuming to consistently determine and compare their predictive capabilities in order to identify the best protocol for the target of interest and to extract the binding poses (i.e., best-docked (BD), best-cluster (BC), and best-fit (BF) poses) when applying a given docking program over thousands/millions of molecules during virtual screening. To reduce this difficulty, two new procedures called Clusterizer and DockAccessor have been developed and implemented for use with common "free-for-academics" programs such as AutoDock4, AutoDock4(Zn), AutoDock Vina, DOCK, MpSDockZn, PLANTS, and Surflex-Dock to automatically extract BD, BC, and BF poses and to perform consistent cluster and DA analyses. Clusterizer and DockAccessor (code available over the Internet) represent two novel tools to collect computationally determined poses and detect the most predictive docking approach. Herein an application to human lysine deacetylase (hKDAC) inhibitors is illustrated.
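The quantities these tools standardize can be made concrete: a plain heavy-atom RMSD between a docked and a crystallographic pose, and docking accuracy (DA) as the fraction of ligands reproduced within the conventional 2.0 Å threshold. The coordinates below are invented, and a production implementation must also handle ligand symmetry, which is precisely where the programs' differing RMSD formulas diverge:

```python
import math

def rmsd(pose_a, pose_b):
    """Plain heavy-atom RMSD between two poses given as (x, y, z) lists."""
    assert len(pose_a) == len(pose_b)
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b))
                     / len(pose_a))

# Illustrative 3-atom "ligand": crystal pose vs. two docked poses.
crystal = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]
docked_good = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (1.6, 1.5, 0.1)]
docked_bad = [(3.0, 2.0, 1.0), (4.5, 2.0, 1.0), (4.5, 3.5, 1.0)]

results = [rmsd(crystal, p) for p in (docked_good, docked_bad)]
# Docking accuracy: fraction of poses within the 2.0 A success threshold.
da = sum(1 for r in results if r <= 2.0) / len(results)
print(results, da)
```

Computing DA the same way across AutoDock, Vina, DOCK, and the rest is exactly the consistency problem Clusterizer and DockAccessor address.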
A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu
2018-03-01
Automatic and accurate identification of rolling bearing fault categories, especially fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as the hidden functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed from the various auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze experimental bearing vibration signals. The results confirm that the proposed method gets rid of the dependence on manual feature extraction and overcomes the limitations of individual deep learning models, and is more effective than existing intelligent diagnosis methods.
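The ensemble step can be made concrete with a generic combiner: weight each base model (e.g., one AE-based classifier per activation function) by its validation accuracy and take a weighted vote. This is a hedged stand-in; the paper defines its own combination strategy:

```python
# Generic weighted-vote combiner for an ensemble of base classifiers.
# (Hypothetical weights and predictions; the EDAE paper's actual combination
# rule is its own and may differ from this sketch.)

def combine(predictions, weights, n_classes):
    """predictions: one predicted class label per base model."""
    scores = [0.0] * n_classes
    for label, w in zip(predictions, weights):
        scores[label] += w          # each model votes with its weight
    return max(range(n_classes), key=lambda c: scores[c])

# Three base models (e.g., AEs with sigmoid/tanh/ReLU activations) disagree:
val_accuracy = [0.90, 0.75, 0.60]   # weights from held-out validation accuracy
predictions = [2, 1, 1]             # fault categories predicted for one sample
label = combine(predictions, val_accuracy, n_classes=4)
print(label)
```

The motivation is the one the abstract states: no single deep model is best across all fault severities and orientations, so a combiner stabilizes the diagnosis.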
46 CFR 167.65-35 - Use of auto pilot.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the automatic pilot is used in— (a) Areas of high traffic density; (b) Conditions of restricted... possible to immediately establish human control of the ship's steering: (2) A competent person is ready at...
46 CFR 167.65-35 - Use of auto pilot.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the automatic pilot is used in— (a) Areas of high traffic density; (b) Conditions of restricted... possible to immediately establish human control of the ship's steering: (2) A competent person is ready at...
46 CFR 167.65-35 - Use of auto pilot.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the automatic pilot is used in— (a) Areas of high traffic density; (b) Conditions of restricted... possible to immediately establish human control of the ship's steering: (2) A competent person is ready at...
Roadway data representation and application development : final report, December 2009.
DOT National Transportation Integrated Search
2009-08-06
The Straight-line Diagrammer, a web-based application that produces Straight-line Diagrams (SLDs) automatically, was developed in this project to replace an old application (AutoSLD) that had an outdated structure and limited capabilities.
Automatic planning on hippocampal avoidance whole-brain radiotherapy.
Wang, Shuo; Zheng, Dandan; Zhang, Chi; Ma, Rongtao; Bennion, Nathan R; Lei, Yu; Zhu, Xiaofeng; Enke, Charles A; Zhou, Sumin
2017-01-01
Mounting evidence suggests that radiation-induced damage to the hippocampus plays a role in neurocognitive decline for patients receiving whole-brain radiotherapy (WBRT). Hippocampal avoidance whole-brain radiotherapy (HA-WBRT) has been proposed to reduce the putative neurocognitive deficits by limiting the dose to the hippocampus. However, the urgency of palliation for these patients as well as the complexity of the treatment planning may be barriers to protocol enrollment to accumulate further clinical evidence. This warrants expedited quality planning of HA-WBRT. Pinnacle³ automatic treatment planning was designed to increase planning efficiency while maintaining or improving plan quality and consistency. The aim of the present study is to evaluate the performance of Pinnacle³ Auto-Planning on HA-WBRT treatment planning. Ten patients previously treated for brain metastases were selected. Hippocampal volumes were contoured on T1 magnetic resonance (MR) images, and planning target volumes (PTVs) were generated based on RTOG 0933. Two types of plans were generated by Pinnacle³ Auto-Planning: one with 2 coplanar volumetric modulated arc therapy (VMAT) arcs and the other with 9-field noncoplanar intensity-modulated radiation therapy (IMRT). D2% and D98% of the PTV were used to calculate the homogeneity index (HI). The HI and Paddick conformity index (CI) of the PTV, as well as D100% and Dmax of the hippocampus, were used to evaluate plan quality. All the auto-plans met the dose coverage and constraint objectives of RTOG 0933. The auto-plans eliminated the need for planners to generate pseudostructures and required little manual intervention, which expedited the planning process. IMRT quality assurance (QA) results also suggest that all the auto-plans are acceptable for delivery. Pinnacle³ Auto-Planning generates plans acceptable by RTOG 0933 criteria without a time-consuming planning process. The expedited quality planning achieved by Auto-Planning (AP) may facilitate protocol enrollment of patients to further investigate the hippocampal-sparing effect and may be used to ensure timely start of palliative treatment in future clinical practice. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
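The plan-quality metrics quoted above reduce to simple arithmetic. The Paddick conformity index is (TV_PIV)²/(TV × PIV); since the abstract only says HI is computed from D2% and D98%, the ratio form D2%/D98% is assumed here (ICRU 83 instead uses (D2% − D98%)/D50%). All numbers below are illustrative:

```python
# Plan-quality metrics as used in HA-WBRT plan evaluation. The HI definition
# below (D2%/D98%) is an assumption; the Paddick CI formula is standard.

def homogeneity_index(d2, d98):
    """Ratio of near-maximum to near-minimum PTV dose; closer to 1.0 is better."""
    return d2 / d98

def paddick_ci(tv, piv, tv_piv):
    """tv: target volume; piv: prescription isodose volume;
    tv_piv: target volume covered by the prescription isodose (same units).
    1.0 means perfectly conformal coverage."""
    return tv_piv ** 2 / (tv * piv)

hi = homogeneity_index(d2=33.0, d98=27.0)        # Gy, illustrative values
ci = paddick_ci(tv=1200.0, piv=1400.0, tv_piv=1150.0)
print(round(hi, 3), round(ci, 3))
```

Because both metrics penalize different failure modes (hot/cold spots vs. spill outside the target), a plan must score well on both to pass RTOG-style criteria.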
SU-E-J-129: Atlas Development for Cardiac Automatic Contouring Using Multi-Atlas Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, R; Yang, J; Pan, T
Purpose: To develop a set of atlases for automatic contouring of cardiac structures to determine heart radiation dose and the associated toxicity. Methods: Six thoracic cancer patients with both contrast and non-contrast CT images were included in this study. Eight radiation oncologists manually and independently delineated cardiac contours on the non-contrast CT by referring to the fused contrast CT and following the RTOG 1106 atlas contouring guideline. Fifteen regions of interest (ROIs) were delineated, including the heart, four chambers, four coronary arteries, pulmonary artery and vein, inferior and superior vena cava, and ascending and descending aorta. Individual expert contours were fused using the simultaneous truth and performance level estimation (STAPLE) algorithm for each ROI and each patient. The fused contours became atlases for an in-house multi-atlas segmentation. Using a leave-one-out test, we generated auto-segmented contours for each ROI and each patient. The auto-segmented contours were compared with the fused contours using the Dice similarity coefficient (DSC) and the mean surface distance (MSD). Results: Inter-observer variability was not obvious for the heart, chambers, and aorta but was large for other structures that were not clearly distinguishable on the CT image. The average DSC between individual expert contours and the fused contours was less than 50% for the coronary arteries and pulmonary vein, and the average MSD was greater than 4.0 mm. The largest MSD of expert contours deviating from the fused contours was 2.5 cm. The mean DSC and MSD of auto-segmented contours were within one standard deviation of expert contouring variability except for the right coronary artery. The coronary arteries, vena cava, and pulmonary vein had DSC < 70% and MSD > 3.0 mm. Conclusion: A set of cardiac atlases was created for cardiac automatic contouring, with accuracy comparable to the variability in expert contouring. However, substantial modification may be needed for auto-segmented contours of indistinguishable small structures.
SU-E-I-97: Smart Auto-Planning Framework in An EMR Environment (SAFEE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B; Chen, S; Mutaf, Y
2014-06-01
Purpose: Our Radiation Oncology Department uses clinical practice guidelines for patient treatment, including normal tissue sparing and other dosimetric constraints. These practice guidelines were adapted from national guidelines, clinical trials, literature reviews, and practitioners' own experience. Modern treatment planning systems (TPS) have the capability of incorporating these practice guidelines to automatically create radiation therapy treatment plans with little human intervention. We are developing a software infrastructure that integrates clinical practice guidelines and the radiation oncology electronic medical record (EMR) system into the TPS for auto-planning. Methods: Our Smart Auto-Planning Framework in an EMR Environment (SAFEE) uses a software pipeline framework to integrate practice guidelines, the EMR, and the TPS. The SAFEE system starts by retrieving diagnosis information and the physician's prescription from the EMR system. After approval of contouring, SAFEE automatically creates plans according to our guidelines. Based on clinical objectives, SAFEE automatically selects treatment delivery techniques (such as 3DRT/IMRT/VMAT) and optimizes plans. When necessary, SAFEE creates multiple treatment plans with different combinations of parameters. SAFEE's pipeline structure makes it very flexible to integrate various techniques, such as model-based segmentation (MBS) and plan optimization algorithms, e.g., multi-criteria optimization (MCO). In addition, SAFEE uses machine learning, data mining techniques, and an integrated database to create a clinical knowledge base and then answer clinical questions, such as how to score plan quality or how volume overlap affects physicians' decisions in beam and treatment technique selection. Results: In our institution, we use the Varian Aria EMR system and the RayStation TPS from RaySearch, whose ScriptService API allows control by external programs. These applications are the building blocks of our SAFEE system. Conclusion: SAFEE is a feasible method of integrating clinical information to develop an auto-planning paradigm that improves clinical workflow in cancer patient care.
Luo, Jiaying; Xiao, Sichang; Qiu, Zhihui; Song, Ning; Luo, Yuanming
2013-04-01
Whether the therapeutic nasal continuous positive airway pressure (CPAP) derived from manual titration is the same as that derived from automatic titration is controversial. The purpose of this study was to compare the therapeutic pressure derived from manual titration with that derived from automatic titration. Fifty-one patients with obstructive sleep apnoea (OSA) (mean apnoea/hypopnoea index (AHI) = 50.6 ± 18.6 events/h) who were newly diagnosed after an overnight full polysomnography and who were willing to accept CPAP as a long-term treatment were recruited for the study. Manual titration during full polysomnography monitoring and unattended automatic titration with an automatic CPAP device (REMstar Auto) were performed. A separate cohort study of one hundred patients with OSA (AHI = 54.3 ± 18.9 events/h) was also performed by observing the efficacy of CPAP derived from manual titration. The treatment pressure derived from automatic titration (9.8 ± 2.2 cmH₂O) was significantly higher than that derived from manual titration (7.3 ± 1.5 cmH₂O; P < 0.001) in the 51 patients. The cohort study of 100 patients showed that the AHI was satisfactorily decreased after CPAP treatment using a pressure derived from manual titration (54.3 ± 18.9 events/h before treatment and 3.3 ± 1.7 events/h after treatment; P < 0.001). The results suggest that the automatic titration pressure derived from the REMstar Auto is usually higher than the pressure derived from manual titration. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
Maret, Eva; Brudin, Lars; Lindstrom, Lena; Nylander, Eva; Ohlsson, Jan L; Engvall, Jan E
2008-01-01
Background Left ventricular size and function are important prognostic factors in heart disease. Their measurement is the most frequent reason for sending patients to the echo lab. These measurements have important implications for therapy but are sensitive to the skill of the operator. Earlier automated echo-based methods have not become widely used. The aim of our study was to evaluate an automatic echocardiographic method (with manual correction if needed) for determining left ventricular ejection fraction (LVEF) based on an active appearance model of the left ventricle (syngo®AutoEF, Siemens Medical Solutions). Comparisons were made with manual planimetry (manual Simpson), visual assessment and automatically determined LVEF from quantitative myocardial gated single photon emission computed tomography (SPECT). Methods 60 consecutive patients referred for myocardial perfusion imaging (MPI) were included in the study. Two-dimensional echocardiography was performed within one hour of MPI at rest. Image quality did not constitute an exclusion criterion. Analysis was performed by five experienced observers and by two novices. Results LVEF (%), end-diastolic and end-systolic volume/BSA (ml/m²) were for uncorrected AutoEF 54 ± 10, 51 ± 16, 24 ± 13, for corrected AutoEF 53 ± 10, 53 ± 18, 26 ± 14, for manual Simpson 51 ± 11, 56 ± 20, 28 ± 15, and for MPI 52 ± 12, 67 ± 26, 35 ± 23. The required time for analysis was significantly different for all four echocardiographic methods and was for uncorrected AutoEF 79 ± 5 s, for corrected AutoEF 159 ± 46 s, for manual Simpson 177 ± 66 s, and for visual assessment 33 ± 14 s. Compared with the expert manual Simpson, limits of agreement for novice corrected AutoEF were narrower than for novice manual Simpson (0.8 ± 10.5 vs. -3.2 ± 11.4 LVEF percentage points).
Calculated for experts and with LVEF (%) categorized into < 30, 30–44, 45–54 and ≥ 55, the kappa measure of agreement was moderate (0.44–0.53) for all method comparisons (uncorrected AutoEF not evaluated). Conclusion Corrected AutoEF reduces the variation in measurements compared with manual planimetry, without increasing the time required. The method seems especially suited for inexperienced readers. PMID:19014461
An intelligent identification algorithm for the monoclonal picking instrument
NASA Astrophysics Data System (ADS)
Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun
2017-11-01
Traditional colony selection is mainly performed manually, which is inefficient and subjective. It is therefore important to develop an automatic monoclonal-picking instrument, and the critical stage of automatic monoclonal picking and intelligent optimal selection is the identification algorithm. This paper proposes an auto-screening algorithm based on a Support Vector Machine (SVM), a supervised learning method that uses colony morphological characteristics to classify colonies accurately. From the basic morphological features of a colony, the system computes a series of morphological parameters step by step. A maximal-margin classifier is established and, based on an analysis of the growth trend of the colonies, the monoclonal colonies are selected. The experimental results showed that the auto-screening algorithm could screen out regular colonies from the rest, meeting the requirements on the various parameters.
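As a rough illustration of the maximal-margin idea behind the auto-screening step above, the sketch below trains a linear SVM (via a Pegasos-style sub-gradient method) on toy morphological features. The feature set, values, and labels are all hypothetical, not taken from the paper:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=500, seed=0):
    """Pegasos-style sub-gradient training of a linear soft-margin SVM.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:        # hinge loss active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                # only shrink w
                w = (1 - eta * lam) * w
    return w, b

# Illustrative morphological features: [scaled area, circularity].
# Regular colonies are mid-sized and round; irregular ones deviate.
X = np.array([[1.20, 0.95], [1.10, 0.92], [1.30, 0.97], [1.25, 0.94],   # regular
              [3.00, 0.55], [0.40, 0.60], [2.80, 0.50], [0.35, 0.65]])  # irregular
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

w, b = train_linear_svm(X, y)
picks = np.sign(X @ w + b)
print(picks.tolist())
```

In the actual instrument, the features would come from image analysis of the colony plate, and the trained classifier would be combined with the growth-trend analysis described above.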
Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.
Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich
2018-04-25
A recently developed automatic reduction method for systems of chemical kinetics, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimension of a homogeneous combustion system. The results of applying the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g., the selection criteria for QSS species and their sensitivity to system parameters, initial conditions, etc. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling-invariant method for a global analysis of the system's timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered over a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
Clinical utility of anti-p53 auto-antibody: Systematic review and focus on colorectal cancer
Suppiah, Aravind; Greenman, John
2013-01-01
Mutation of the p53 gene is a key event in the carcinogenesis of many different types of tumours. These mutations can occur throughout the length of the p53 gene. Anti-p53 auto-antibodies are commonly produced in response to these p53 mutations. This review firstly describes the various mechanisms of p53 dysfunction and their association with subsequent carcinogenesis. Following this, the mechanisms of induction of anti-p53 auto-antibody production are described, with various hypotheses for the discrepancies between the presence of p53 mutation and the presence/absence of anti-p53 auto-antibodies. A systematic review was performed with a descriptive summary of the key findings of each anti-p53 auto-antibody study in all cancers published in the last 30 years. Using this, the cumulative frequency of anti-p53 auto-antibody in each cancer type is calculated and then compared with the incidence of p53 mutation in each cancer, providing the largest sample calculation and correlation between mutation and anti-p53 auto-antibody published to date. Finally, the review focuses on the data for anti-p53 auto-antibody in colorectal cancer studies, and discusses future strategies, including the potentially promising role of anti-p53 auto-antibody presence in screening and surveillance. PMID:23922463
A Nonparametric Approach to Automated S-Wave Picking
NASA Astrophysics Data System (ADS)
Rawles, C.; Thurber, C. H.
2014-12-01
Although a number of very effective P-wave automatic pickers have been developed over the years, automatic picking of S waves has remained more challenging. Most automatic pickers take a parametric approach, whereby some characteristic function (CF), e.g. polarization or kurtosis, is determined from the data and the pick is estimated from the CF. We have adopted a nonparametric approach, estimating the pick directly from the waveforms. For a particular waveform to be auto-picked, the method uses a combination of similarity to a set of seismograms with known S-wave arrivals and dissimilarity to a set of seismograms that do not contain S-wave arrivals. Significant effort has been made towards dealing with the problem of S-to-P conversions. We have evaluated the effectiveness of our method by testing it on multiple sets of microearthquake seismograms with well-determined S-wave arrivals for several areas around the world, including fault zones and volcanic regions. In general, we find that the results from our auto-picker are consistent with reviewed analyst picks 90% of the time at the 0.2 s level and 80% of the time at the 0.1 s level, or better. For most of the large datasets we have analyzed, our auto-picker also makes far more S-wave picks than were made previously by analysts. We are using these enlarged sets of high-quality S-wave picks to refine tomographic inversions for these areas, resulting in substantial improvement in the quality of the S-wave images. We will show examples from New Zealand, Hawaii, and California.
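The nonparametric idea of estimating a pick directly from similarity to waveforms with known arrivals can be sketched as follows. This is a toy one-dimensional version with a synthetic wavelet and a single template; the real method also scores dissimilarity to seismograms without S arrivals and uses sets of templates:

```python
import numpy as np

def correlate_lag(trace, template):
    """Slide template over trace; return the lag of peak normalized correlation."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_lag, best_cc = 0, -2.0
    for lag in range(len(trace) - n + 1):
        w = trace[lag:lag + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        cc = float(t @ w) / n
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag, best_cc

rng = np.random.default_rng(0)
wavelet = np.sin(2 * np.pi * np.arange(30) / 10.0) * np.hanning(30)

template = 0.05 * rng.standard_normal(60)
template[20:50] += wavelet            # template with known S pick at sample 20

trace = 0.05 * rng.standard_normal(200)
trace[100:130] += wavelet             # target trace: arrival actually at sample 100

lag, cc = correlate_lag(trace, template)
pick = lag + 20                       # transfer the template's known pick
print(pick)
```

Because the pick is transferred from a labeled waveform rather than from a characteristic function, no parametric onset model is needed, which is the essence of the approach described in the abstract.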
Realization of the ergonomics design and automatic control of the fundus cameras
NASA Astrophysics Data System (ADS)
Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye
2012-12-01
The principle of ergonomics design in fundus cameras is to extend user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects fundus images automatically whether or not the patient's eyes are ametropic. Finally, a moving visual target is developed for expanding the fields of the fundus images.
NASA Astrophysics Data System (ADS)
Modegi, Toshio
Using our previously developed audio-to-MIDI code converter tool "Auto-F", we can create MIDI data from given vocal acoustic signals, enabling playback of the voice-like signals with a standard MIDI synthesizer. Applying this tool, we are constructing a MIDI database consisting of simple harmonic-structured MIDI codes previously converted from a set of 71 recorded Japanese male and female syllable signals. We are also developing a novel voice synthesizing system based on harmonically synthesizing musical sounds, which can generate MIDI data and play back voice signals with a MIDI synthesizer from Japanese plain (kana) texts by referring to the syllable MIDI code database. In this paper, we propose an improved MIDI converter tool that can produce temporally higher-resolution MIDI codes. We then propose an algorithm that separates a set of 20 consonant and vowel phoneme MIDI codes from the 71 converted syllable MIDI codes in order to construct a voice synthesizing system. Finally, we present the results of 4-syllable word listening tests evaluating the voice synthesis quality of these separated phoneme MIDI codes against their original syllable MIDI codes.
Automatic Certification of Kalman Filters for Reliable Code Generation
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian
2005-01-01
AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information
Struck, Torsten H
2014-01-01
Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, phylogenetic reconstructions can be misled by artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals might mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, there has been no program allowing the detection of such effects in combination with an implementation into automatic process pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information like nodal support or patristic distances (PDs), to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and, being command-line driven, it can be integrated into automatic process pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118
Deep Learning for Low-Textured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress the discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using a nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found at the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
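The codebook matching stage described above can be illustrated with a minimal nearest-neighbour search over descriptor codes. The random 32-D codes stand in for WIZARD's auto-encoder output, and a Lowe-style ratio test stands in for the paper's modified voting algorithm; both substitutions are assumptions for illustration:

```python
import numpy as np

def match(codes_a, codes_b, ratio=0.8):
    """Ratio-test nearest-neighbour matching between two descriptor sets."""
    matches = []
    for i, c in enumerate(codes_a):
        d = np.linalg.norm(codes_b - c, axis=1)   # distance to every codebook entry
        j, k = np.argsort(d)[:2]                  # two closest entries
        if d[j] < ratio * d[k]:                   # accept only unambiguous matches
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(1)
codes_b = rng.standard_normal((50, 32))           # codebook of 32-D descriptor codes
# Query descriptors: noisy copies of codebook entries 3, 17 and 40.
codes_a = codes_b[[3, 17, 40]] + 0.01 * rng.standard_normal((3, 32))
print(match(codes_a, codes_b))
```

In a multi-view pipeline, each accepted match would then cast a vote for a 3D point correspondence before triangulation.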
Specific DNA binding of the two chicken Deformed family homeodomain proteins, Chox-1.4 and Chox-a.
Sasaki, H; Yokoyama, E; Kuroiwa, A
1990-01-01
The cDNA clones encoding two chicken Deformed (Dfd) family homeobox containing genes Chox-1.4 and Chox-a were isolated. Comparison of their amino acid sequences with another chicken Dfd family homeodomain protein and with those of mouse homologues revealed that strong homologies are located in the amino terminal regions and around the homeodomains. Although homologies in other regions were relatively low, some short conserved sequences were also identified. E. coli-made full length proteins were purified and used for the production of specific antibodies and for DNA binding studies. The binding profiles of these proteins to the 5'-leader and 5'-upstream sequences of Chox-1.4 and Chox-a coding regions were analyzed by immunoprecipitation and DNase I footprint assays. These two Chox proteins bound to the same sites in the 5'-flanking sequences of their coding regions with various affinities and their binding affinities to each site were nearly the same. The consensus sequences of the high and low affinity binding sites were TAATGA(C/G) and CTAATTTT, respectively. A clustered binding site was identified in the 5'-upstream of the Chox-a gene, suggesting that this clustered binding site works as a cis-regulatory element for auto- and/or cross-regulation of Chox-a gene expression. PMID:1970866
Automatic Thread-Level Parallelization in the Chombo AMR Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christen, Matthias; Keen, Noel; Ligocki, Terry
2011-05-26
The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already-used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
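Empirical auto-tuning of the kind mentioned above can be illustrated with a toy tuner that times a tiled stencil sweep for several tile sizes and keeps the fastest. This is a hypothetical Python sketch; the actual ChomboFortran tuner operates on F77 kernels and hardware-specific parameters such as cache-blocking sizes:

```python
import time

def sweep(grid, tile):
    """Tiled 3-point stencil sweep over a 1-D grid (interior points only)."""
    n = len(grid)
    out = [0.0] * n
    for start in range(0, n, tile):
        for i in range(max(start, 1), min(start + tile, n - 1)):
            out[i] = (grid[i - 1] + grid[i] + grid[i + 1]) / 3.0
    return out

def autotune(grid, tile_candidates):
    """Empirical auto-tuning: time each tile size, keep the fastest."""
    best_tile, best_time = None, float("inf")
    for tile in tile_candidates:
        t0 = time.perf_counter()
        sweep(grid, tile)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile

grid = [float(i % 7) for i in range(20000)]
best = autotune(grid, [64, 256, 1024])
print("fastest tile size:", best)
```

The winning tile size depends on the machine, which is exactly why the tuning is done empirically at install or run time rather than fixed in the source.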
Auto Mechanics Series. Duty Task List.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This document contains the occupational duty/task lists for eight occupations in the auto mechanics series. Each occupation is divided into a number of duties. A separate page for each duty in the occupation lists the tasks in that duty along with its code number and columns to indicate whether that particular duty has been taught and to provide…
Automatized set-up procedure for transcranial magnetic stimulation protocols.
Harquel, S; Diard, J; Raffin, E; Passera, B; Dall'Igna, G; Marendaz, C; David, O; Chauvin, A
2017-06-01
Transcranial Magnetic Stimulation (TMS) has established itself as a powerful technique for probing and treating the human brain. Major technological evolutions, such as neuronavigation and robotized systems, have continuously increased the spatial reliability and reproducibility of TMS by minimizing the influence of human and experimental factors. However, there is still a lack of an efficient set-up procedure, which prevents the automation of TMS protocols. For example, the set-up procedure for defining the stimulation intensity specific to each subject is classically done manually by experienced practitioners, by assessing the motor cortical excitability level over the motor hotspot (HS) of a targeted muscle. This is time-consuming and introduces experimental variability. Therefore, we developed a probabilistic Bayesian model (AutoHS) that automatically identifies the HS position. Using virtual and real experiments, we compared the efficacy of the manual and automated procedures. AutoHS appeared to be more reproducible, faster, and at least as reliable as classical manual procedures. By combining AutoHS with robotized TMS and automated motor threshold estimation methods, our approach constitutes the first fully automated set-up procedure for TMS protocols. The use of this procedure decreases inter-experimenter variability while facilitating the handling of TMS protocols used in research and clinical routine. Copyright © 2017 Elsevier Inc. All rights reserved.
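The probabilistic search behind a hotspot-finding model like AutoHS can be caricatured in one dimension: maintain a posterior over candidate hotspot positions and update it by Bayes' rule after each stimulation. Everything below (the Gaussian response model, noise level, and stimulation sites) is an illustrative assumption, not the published AutoHS model:

```python
import numpy as np

grid = np.linspace(-20.0, 20.0, 81)              # candidate hotspot sites (mm)
posterior = np.full(grid.size, 1.0 / grid.size)  # flat prior

def expected_mep(site, hotspot, peak=1.0, width=6.0):
    """Assumed response model: MEP amplitude decays with coil-hotspot distance."""
    return peak * np.exp(-0.5 * ((site - hotspot) / width) ** 2)

def update(posterior, site, observed, sigma=0.1):
    """Bayes update of the hotspot posterior after one stimulation."""
    like = np.exp(-0.5 * ((observed - expected_mep(site, grid)) / sigma) ** 2)
    post = posterior * like
    return post / post.sum()

true_hotspot = 4.5                               # unknown to the procedure
rng = np.random.default_rng(2)
for site in (-10.0, 0.0, 10.0, 5.0):             # four stimulation sites
    obs = expected_mep(site, true_hotspot) + 0.05 * rng.standard_normal()
    posterior = update(posterior, site, obs)

estimate = float(grid[np.argmax(posterior)])
print("estimated hotspot position:", estimate)
```

After a handful of stimulations the posterior concentrates near the true hotspot, which is why such a procedure can converge faster and more reproducibly than manual search.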
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampornpan, Teerapat; Fisher, Forest W.
2010-01-01
Version 5.0 of the AutoGen software has been released. Previous versions, variously denoted Autogen and autogen, were reported in two articles: "Automated Sequence Generation Process and Software" (NPO-30746), Software Tech Briefs (Special Supplement to NASA Tech Briefs), September 2007, page 30, and "Autogen Version 2.0" (NPO-41501), NASA Tech Briefs, Vol. 31, No. 10 (October 2007), page 58. To recapitulate: AutoGen (now signifying "automatic sequence generation") automates the generation of sequences of commands in a standard format for uplink to spacecraft. AutoGen requires fewer workers than are needed for older manual sequence-generation processes, and greatly reduces sequence-generation times. The sequences are embodied in spacecraft activity sequence files (SASFs). AutoGen automates generation of SASFs by use of another previously reported program called APGEN. AutoGen encodes knowledge of different mission phases and of how the resultant commands must differ among the phases. AutoGen also provides means for customizing sequences through use of configuration files. The approach followed in developing AutoGen has involved encoding the behaviors of a system into a model and encoding algorithms for context-sensitive customizations of the modeled behaviors. This version of AutoGen addressed the Mars Reconnaissance Orbiter (MRO) primary science phase (PSP), a mission phase that on previous Mars missions has more commonly been referred to as the mapping phase. This version addressed the unique aspects of sequencing orbital operations, and specifically the mission-specific adaptation of orbital operations for MRO. It also includes capabilities for MRO's role in Mars relay support for UHF relay communications with the MER rovers and the Phoenix lander.
NASA Astrophysics Data System (ADS)
Alshakova, E. L.
2017-01-01
A program in the AutoLISP language makes it possible to generate parametric drawings automatically while working in the AutoCAD software product. Students study the development of AutoLISP programs using a methodical complex containing methodical instructions in which real examples of the creation of images and drawings are implemented. The methodical instructions contain the reference information necessary for performing the offered tasks. Training in AutoLISP programming is based on the method of step-by-step program development: the program draws the elements of a detail drawing by means of a specially created function whose argument values are entered in the same sequence in which AutoCAD issues its prompts when the corresponding command is performed in the editor. The process of program design is thus reduced to the step-by-step formation of functions and the sequence of their calls. The author considers the development of AutoLISP programs for creating parametric drawings of details of a defined design, with the user entering the dimensions of the detail elements. These programs generate variants of the tasks for the graphic works performed in the educational process of the "Engineering Graphics" and "Engineering and Computer Graphics" disciplines. Individual tasks allow students to develop skills of independent work in reading and creating drawings, as well as in 3D modeling.
A Comparative Study of Alternative Controls and Displays for Use by the Severely Physically Handicapped
NASA Technical Reports Server (NTRS)
Williams, D.; Simpson, C.; Barker, M.
1984-01-01
A modification of a row/column scanning system was investigated in order to increase the speed and accuracy with which communication aids can be accessed with one or two switches. A selection algorithm was developed and programmed in BASIC to automatically select individuals with the characteristic difficulty in controlling time-dependent control and display systems. Four systems were compared: (1) row/column directed scan (2 switches); (2) row/column auto scan (1 switch); (3) row auto scan (1 switch); and (4) column auto scan (1 switch). For this sample population, there were no significant differences among the systems in scan time to select the correct target. The row/column auto scan system resulted in significantly more errors than any of the other three systems. Thus, the system most widely prescribed for severely physically disabled individuals turns out, for this group, to have a higher error rate and no faster communication rate than three other systems that had been considered inappropriate for this group.
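A back-of-envelope step-count model shows why row/column scanning is attractive in principle. The grid size below is hypothetical, and switch-press time and errors, which drove the study's actual findings, are deliberately ignored:

```python
# Compare expected highlight steps for row/column scanning versus a single
# linear scan over a symbol grid (step counts only; no error model).
def rowcol_steps(r, c):
    """Row/column scan: highlight rows 0..r, then columns 0..c."""
    return (r + 1) + (c + 1)

def linear_steps(r, c, ncols):
    """Single linear scan over all cells in reading order."""
    return r * ncols + c + 1

# Average steps over a hypothetical 5x5 symbol grid.
n = 5
cells = [(r, c) for r in range(n) for c in range(n)]
avg_rowcol = sum(rowcol_steps(r, c) for r, c in cells) / len(cells)
avg_linear = sum(linear_steps(r, c, n) for r, c in cells) / len(cells)
print(avg_rowcol, avg_linear)  # 6.0 13.0
```

The step-count advantage of row/column scanning grows with grid size, which explains its popularity; the study's point is that this theoretical advantage can be negated in practice by the higher error rate of the two-stage selection.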
Automatic Coding of Short Text Responses via Clustering in Educational Assessment
ERIC Educational Resources Information Center
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank
2016-01-01
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
AutoCPAP initiation at home: optimal trial duration and cost-effectiveness.
Bachour, Adel; Virkkala, Jussi T; Maasilta, Paula K
2007-11-01
The duration of automatic computer-controlled continuous positive airway pressure device (autoCPAP) initiation at home varies largely between sleep centers. Our objectives were to evaluate the cost-effectiveness and to find the optimal trial duration. Of the 206 consecutive CPAP-naive patients with obstructive sleep apnea syndrome who were referred to our hospital, 166 received autoCPAP for a 5-day trial at home. Of the 166 patients, 89 (15 women) showed a successful 5-day autoCPAP trial (normalized oximetry and mask-on time exceeding 4 h/day for at least 4 days). For the first trial day, 88 (53%) patients had normalized oximetry and a mask-on time exceeding 4 h. A 1-day autoCPAP trial (EUR 668) was less cost-effective than a 5-day trial (EUR 653), with no differences in the values of efficient CPAP pressure or residual apnea-hypopnea index (AHI). The systematic requirement of oximetry monitoring raised the cost considerably, from EUR 481 to EUR 668. In selected patients with obstructive sleep apnea, the optimal duration for initiating CPAP therapy at home by autoCPAP is 5 days. Although a 1-day trial was sufficient to determine the CPAP pressure requirement, it was not cost-effective and had a high rate of failure.
NASA Astrophysics Data System (ADS)
Chancy, Carl H.
A device for performing an objective eye exam has been developed to automatically determine ophthalmic prescriptions. The closed-loop fluidic auto-phoropter has been designed, modeled, fabricated and tested for the automatic measurement and correction of a patient's prescriptions. The adaptive phoropter is designed through the combination of a spherical-powered fluidic lens and two cylindrical fluidic lenses that are oriented at 45° relative to each other. In addition, the system incorporates Shack-Hartmann wavefront sensing technology to identify the eye's wavefront error and corresponding prescription. Using the wavefront error information, the fluidic auto-phoropter nulls the eye's lower-order wavefront error by applying the appropriate volumes to the fluidic lenses. The combination of the Shack-Hartmann wavefront sensor and the fluidic auto-phoropter allows for the identification and control of spherical refractive error, as well as cylinder error and axis, thus creating a truly automated refractometer and corrective system. The fluidic auto-phoropter is capable of correcting defocus error ranging from -20 D to +20 D and astigmatism from -10 D to +10 D. The transmissive see-through design allows for the observation of natural scenes through the system at varying object planes with no additional imaging optics in the patient's line of sight. In this research, two generations of the fluidic auto-phoropter are designed and tested; the first generation uses traditional glass optics for the measurement channel. The second generation takes advantage of progress in the development of holographic optical elements (HOEs) to replace all the traditional glass optics. The addition of the HOEs has enabled the development of a more compact, inexpensive and easily reproducible system without compromising its performance.
Additionally, the fluidic lenses were tested during a National Aeronautics and Space Administration (NASA) parabolic flight campaign to determine the effect of varying gravitational acceleration on the performance and image quality of the fluidic lenses. Wavefront analysis indicated that flight turbulence and the varying levels of gravitational acceleration, ranging from zero-g (microgravity) to 2 g (hypergravity), had minimal effect on the performance of the fluidic lenses, except for small changes in defocus, making them suitable for potential use in a portable space-based fluidic auto-phoropter.
[Development of a Software for Automatically Generated Contours in Eclipse TPS].
Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin
2015-03-01
The automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop software for automatically generating contours in Eclipse TPS. This software, named Contour Auto Margin (CAM), is composed of contour operation functions, script-generation visualization, and script file operations. Results: Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between automatically generated contours and manually created contours. CAM is user-friendly and powerful software that can generate contours automatically and quickly in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists is improved.
GenePRIMP: A Gene Prediction Improvement Pipeline For Prokaryotic Genomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyrpides, Nikos C.; Ivanova, Natalia N.; Pati, Amrita
2010-07-08
GenePRIMP (Gene Prediction Improvement Pipeline, http://geneprimp.jgi-psf.org) is a computational process that performs evidence-based evaluation of gene models in prokaryotic genomes and reports anomalies including inconsistent start sites, missing genes, and split genes. We show that manual curation of gene models using the anomaly reports generated by GenePRIMP improves their quality, and we demonstrate the applicability of GenePRIMP in improving finishing quality and in comparing different genome sequencing and annotation technologies. Keywords in context: gene model, quality control, translation start sites, automatic correction. Hardware requirements: PC, Mac; Operating system: UNIX/Linux; Compiler/version: Perl 5.8.5 or higher; Special requirements: NCBI BLAST and nr installation; File types: source code, executable module(s), sample problem input data, installation instructions, programmer documentation. Location/transmission: http://geneprimp.jgi-psf.org/gp.tar.gz
[Design of longitudinal auto-tracking of the detector on X-ray in digital radiography].
Yu, Xiaomin; Jiang, Tianhao; Liu, Zhihong; Zhao, Xu
2018-04-01
An algorithm is designed to implement longitudinal auto-tracking of the detector on the X-ray tube in a digital radiography (DR) system with a manual collimator. In this study, when the longitudinal length of the field of view (LFOV) on the detector coincides with the longitudinal effective imaging size of the detector, the collimator half open angle (Ψ), the maximum centric distance (e_max) between the center of the X-ray field of view and the projection center of the focal spot, and the detector moving distance for auto-tracking can be calculated automatically. When the LFOV is smaller than the longitudinal effective imaging size of the detector because Ψ has been reduced, e_max can still be used to calculate the detector moving distance. Using this auto-tracking algorithm in a DR system with a manual collimator, test results show that the X-ray projection is completely covered by the effective imaging area of the detector even though the center of the field of view is not aligned with the center of the effective imaging area. As a simple and low-cost design, the algorithm can be used for longitudinal auto-tracking of the detector in manual-collimator DR systems.
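The geometric quantities in the abstract can be related in a simple sketch. The geometry assumed below (focal spot at a fixed source-to-image distance, central ray hitting the detector plane at offset e from the focal-spot projection point) is an illustration, not the paper's derivation:

```python
import math

def field_length(sid_mm, psi_deg, e_mm):
    """Longitudinal extent of the X-ray field on the detector plane,
    for half open angle psi and central-ray offset e at distance SID."""
    psi = math.radians(psi_deg)
    theta = math.atan2(e_mm, sid_mm)            # tilt of the central ray
    top = sid_mm * math.tan(theta + psi)        # far edge of the field
    bottom = sid_mm * math.tan(theta - psi)     # near edge of the field
    return top - bottom

def tracking_move(sid_mm, psi_deg, e_mm):
    """Detector move that centres the field on the imaging area, assuming
    the detector starts centred under the focal-spot projection point."""
    psi = math.radians(psi_deg)
    theta = math.atan2(e_mm, sid_mm)
    top = sid_mm * math.tan(theta + psi)
    bottom = sid_mm * math.tan(theta - psi)
    return (top + bottom) / 2.0                 # offset of the field centre

# Example: SID = 1800 mm, half open angle 6 degrees, 50 mm central-ray offset.
print(round(field_length(1800, 6.0, 50.0), 1))
print(round(tracking_move(1800, 6.0, 50.0), 1))
```

Note that the required detector move is slightly larger than the central-ray offset itself, because the beam's far edge diverges more than its near edge at non-zero tilt.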
The Effect of a Low-Speed Automatic Brake System Estimated From Real Life Data
Isaksson-Hellman, Irene; Lindman, Magdalena
2012-01-01
A substantial part of all traffic accidents involving passenger cars are rear-end collisions and most of them occur at low speed. Auto Brake is a feature that has been launched in several passenger car models during the last few years. City Safety is a technology designed to help the driver mitigate, and in certain situations avoid, rear-end collisions at low speed by automatically braking the vehicle. Studies have been presented that predict promising benefits from these kinds of systems, but few attempts have been made to show the actual effect of Auto Brake. In this study, the effect of City Safety, a standard feature on the Volvo XC60 model, is calculated based on insurance claims data from cars in real traffic crashes in Sweden. The estimated claim frequency of rear-end frontal collisions measured in claims per 1,000 insured vehicle years was 23% lower for the City Safety equipped XC60 model than for other Volvo models without the system. PMID:23169133
Characteristics of AZ31 Mg alloy joint using automatic TIG welding
NASA Astrophysics Data System (ADS)
Liu, Hong-tao; Zhou, Ji-xue; Zhao, Dong-qing; Liu, Yun-teng; Wu, Jian-hua; Yang, Yuan-sheng; Ma, Bai-chang; Zhuang, Hai-hua
2017-01-01
The automatic tungsten-inert gas welding (ATIGW) of AZ31 Mg alloys was performed using a six-axis robot. The evolution of the microstructure and texture of the AZ31 auto-welded joints was studied by optical microscopy, scanning electron microscopy, energy-dispersive X-ray spectroscopy, and electron backscatter diffraction. The ATIGW process resulted in coarse recrystallized grains in the heat affected zone (HAZ) and epitaxial growth of columnar grains in the fusion zone (FZ). Substantial changes of texture between the base material (BM) and the FZ were detected. The {0002} basal plane in the BM was largely parallel to the sheet rolling plane, whereas the c-axis of the crystal lattice in the FZ inclined approximately 25° with respect to the welding direction. The maximum pole density increased from 9.45 in the BM to 12.9 in the FZ. The microhardness distribution, tensile properties, and fracture features of the AZ31 auto-welded joints were also investigated.
Algorithm for automatic analysis of electro-oculographic data.
Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti
2013-10-25
Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG measurements and as a separate modality.
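The idea of an auto-calibrating amplitude threshold can be sketched in a few lines. The published algorithm derives its thresholds from signal features that the abstract does not detail; the median-absolute-deviation rule and the scale factor `k` below are generic stand-ins:

```python
from statistics import median

def auto_threshold(signal, k=5.0):
    """Data-driven amplitude threshold: median of the sample-to-sample
    differences plus k times their median absolute deviation (MAD).
    A MAD rule is a stand-in for the paper's feature-based estimation."""
    diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
    m = median(diffs)
    mad = median(abs(d - m) for d in diffs)
    return m + k * mad

def detect_events(signal, thr):
    """Indices where the absolute sample-to-sample change exceeds thr,
    i.e. candidate saccade/blink onsets."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > thr]
```

Because the threshold is recomputed from each recording, no per-subject manual calibration step is needed, which is the point of the auto-calibration.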
A methodology for automatic intensity-modulated radiation treatment planning for lung cancer
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Li, Xiaoqiang; Quan, Enzhuo M.; Pan, Xiaoning; Li, Yupeng
2011-07-01
In intensity-modulated radiotherapy (IMRT), the quality of the treatment plan, which is highly dependent upon the treatment planner's level of experience, greatly affects the potential benefits of the radiotherapy (RT). Furthermore, the planning process is complicated and requires a great deal of iteration, and is often the most time-consuming aspect of the RT process. In this paper, we describe a methodology to automate the IMRT planning process in lung cancer cases, the goal being to improve the quality and consistency of treatment planning. This methodology (1) automatically sets beam angles based on a beam angle automation algorithm, (2) judiciously designs the planning structures, which were shown to be effective for all the lung cancer cases we studied, and (3) automatically adjusts the objectives of the objective function based on a parameter automation algorithm. We compared treatment plans created in this system (mdaccAutoPlan) based on the overall methodology with plans from a clinical trial of IMRT for lung cancer run at our institution. The 'autoplans' were consistently better, or no worse, than the plans produced by experienced medical dosimetrists in terms of tumor coverage and normal tissue sparing. We conclude that the mdaccAutoPlan system can potentially improve the quality and consistency of treatment planning for lung cancer.
Bertke, S J; Meyers, A R; Wurzelbacher, S J; Bell, J; Lampl, M L; Robins, D
2012-12-01
Tracking and trending rates of injuries and illnesses classified as musculoskeletal disorders caused by ergonomic risk factors such as overexertion and repetitive motion (MSDs) and slips, trips, or falls (STFs) in different industry sectors is of high interest to many researchers. Unfortunately, identifying the cause of injuries and illnesses in large datasets such as workers' compensation systems often requires reading and coding the free-form accident text narrative for potentially millions of records. To alleviate the need for manual coding, this paper describes and evaluates a computer auto-coding algorithm that demonstrated the ability to code millions of claims quickly and accurately by learning from a set of previously manually coded claims. The auto-coding program was able to code claims as MSD, STF, or other with approximately 90% accuracy. The program developed and discussed in this paper provides an accurate and efficient method for identifying the causation of workers' compensation claims as a STF or MSD in a large database based on the unstructured text narrative and resulting injury diagnoses. The program coded thousands of claims in minutes. The method described in this paper can be used by researchers and practitioners to relieve the manual burden of reading and identifying the causation of claims as a STF or MSD. Furthermore, the method can be easily generalized to code/classify other unstructured text narratives. Published by Elsevier Ltd.
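Learning to code claims from previously coded narratives is, in miniature, supervised text classification. The abstract does not specify the model, so the pure-Python naive-Bayes-style scorer below is a generic stand-in, with made-up example narratives:

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(claims):
    """claims: list of (narrative, label), label in {'MSD', 'STF', 'OTHER'}.
    Returns per-label token counts for a naive-Bayes style scorer."""
    counts = defaultdict(Counter)
    for text, label in claims:
        counts[label].update(tokenize(text))
    return counts

def classify(counts, text, alpha=1.0):
    """Pick the label with the highest Laplace-smoothed token likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values()) + alpha * len(vocab)
        score = sum(math.log((c[w] + alpha) / total) for w in tokenize(text))
        if score > best_score:
            best, best_score = label, score
    return best
```

Once trained on a manually coded sample, such a scorer can label a large claims table in a single pass, which is what makes coding "millions of claims in minutes" plausible.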
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about the tumour and enable more sensitive detection of tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients using all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
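The fusion of per-modality fuzzy feature spaces can be sketched as follows. The paper builds one histogram-based fuzzy model per modality; the linear ramp membership, the minimum-operator fusion, and the 0.5 cut below are simple stand-ins for illustration only:

```python
def fuzzy_membership(values, lo, hi):
    """Ramp membership: 0 below lo, 1 above hi, linear in between.
    A stand-in for the paper's histogram-based fuzzy model."""
    out = []
    for v in values:
        if v <= lo:
            out.append(0.0)
        elif v >= hi:
            out.append(1.0)
        else:
            out.append((v - lo) / (hi - lo))
    return out

def fuse(*memberships):
    """Conservative fusion: voxelwise minimum across modalities."""
    return [min(ms) for ms in zip(*memberships)]

def segment(fused, cut=0.5):
    """Binary tumour mask from the fused membership map."""
    return [m >= cut for m in fused]
```

A voxel is kept only when every modality assigns it reasonably high membership, which is one common way to combine multi-parametric evidence.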
Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan
This software implements a computational method to automatically detect solar panels on rooftops, to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The user specifies an input dataset containing parcels and detected solar panels; the code then uses information about the parcels and solar panels to classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
JADOPPT: java based AutoDock preparing and processing tool.
García-Pérez, Carlos; Peláez, Rafael; Therón, Roberto; Luis López-Pérez, José
2017-02-15
AutoDock is a very popular software package for docking and virtual screening. However, it is currently laborious to visualize more than one virtual screening result at a time. To overcome this limitation we have designed JADOPPT, a tool for automatically preparing and processing multiple ligand-protein docked poses obtained from AutoDock. It allows the simultaneous visual assessment and comparison of multiple poses through clustering methods. Moreover, it permits the representation of reference ligands with known binding modes, binding site residues, highly scoring regions for the ligand, and the calculated binding energy of the best-ranked results. JADOPPT, supplementary material (Case Studies 1 and 2) and video tutorials are available at http://visualanalytics.land/cgarcia/JADOPPT.html. carlosgarcia@usal.es or pelaez@usal.es. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Computer-Controlled System for Plasma Ion Energy Auto-Analyzer
NASA Astrophysics Data System (ADS)
Wu, Xian-qiu; Chen, Jun-fang; Jiang, Zhen-mei; Zhong, Qing-hua; Xiong, Yu-ying; Wu, Kai-hua
2003-02-01
A computer-controlled system for a plasma ion energy auto-analyzer was developed for rapid, online measurement of the plasma ion energy distribution. The system intelligently controls all of the equipment via an RS-232 port, a printer port and a home-built circuit. The software, designed in the LabVIEW G language, automatically performs all of the tasks, such as system initialization, adjustment of the scanning voltage, measurement of weak currents, data processing and graphic export. Using the system, only a few minutes are needed to acquire the whole ion energy distribution, which rapidly provides important parameters for plasma processing techniques used in semiconductor devices and microelectronics.
Moon, Kyoung-Ja; Jin, Yinji; Jin, Taixian; Lee, Sun-Mi
2018-01-01
A key component of delirium management is prevention and early detection. This study aimed to develop an automated delirium risk assessment system (Auto-DelRAS) that automatically alerts health care providers to an intensive care unit (ICU) patient's delirium risk, based only on data collected in an electronic health record (EHR) system, and to evaluate the clinical validity of this system. Cohort and system development designs were used in medical and surgical ICUs in two university hospitals in Seoul, Korea. A total of 3284 patients were included for the development of Auto-DelRAS, 325 for external validation, and 694 for validation after clinical application. The 4211 data items were extracted from the EHR system and delirium was measured using the CAM-ICU (Confusion Assessment Method for the Intensive Care Unit). Potential predictors were selected and a logistic regression model was established to create a delirium risk scoring algorithm for Auto-DelRAS. Auto-DelRAS was evaluated at three months and one year after its application to clinical practice to establish the predictive validity of the system. Eleven predictors were finally included in the logistic regression model. The results of the Auto-DelRAS risk assessment were shown as high/moderate/low risk on a Kardex screen. The predictive validity, analyzed one year after the clinical application of Auto-DelRAS, showed a sensitivity of 0.88, specificity of 0.72, positive predictive value of 0.53, negative predictive value of 0.94, and a Youden index of 0.59. A relatively high level of predictive validity was maintained with the Auto-DelRAS system, even one year after it was applied to clinical practice. Copyright © 2017. Published by Elsevier Ltd.
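A logistic-regression risk score mapped to high/moderate/low bands can be sketched as below. The intercept, the three feature names, their weights, and the band cut-offs are invented for illustration; the published model's eleven predictors and coefficients are not reproduced here:

```python
import math

# Hypothetical coefficients for illustration only -- NOT the Auto-DelRAS model.
INTERCEPT = -3.0
WEIGHTS = {"age_over_65": 0.8, "mech_ventilation": 1.2, "sedative_use": 0.9}

def delirium_risk(features):
    """Logistic-regression probability from binary EHR-derived features."""
    z = INTERCEPT + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(p, high=0.5, moderate=0.2):
    """Map a probability to the kind of high/moderate/low display shown on a
    Kardex screen (cut-offs here are assumed, not the system's)."""
    return "high" if p >= high else "moderate" if p >= moderate else "low"
```

Because the score uses only data already present in the EHR, it can run continuously with no extra bedside assessment, which is the design goal stated in the abstract.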
Rahmouni, Hind W; Ky, Bonnie; Plappert, Ted; Duffy, Kevin; Wiegers, Susan E; Ferrari, Victor A; Keane, Martin G; Kirkpatrick, James N; Silvestry, Frank E; St John Sutton, Martin
2008-03-01
Ejection fraction (EF) calculated from 2-dimensional echocardiography provides important prognostic and therapeutic information in patients with heart disease. However, quantification of EF requires planimetry and is time-consuming. As a result, visual assessment is frequently used but is subjective and requires extensive experience. New computer software to assess EF automatically is now available and could be used routinely in busy digital laboratories (>15,000 studies per year) and in core laboratories running large clinical trials. We tested Siemens AutoEF software (Siemens Medical Solutions, Erlangen, Germany) to determine whether it correlated with visual estimates of EF, manual planimetry, and cardiac magnetic resonance (CMR). Siemens AutoEF is based on learned patterns and artificial intelligence. An expert and a novice reader assessed EF visually by reviewing transthoracic echocardiograms from consecutive patients. An experienced sonographer quantified EF in all studies using Simpson's method of disks. AutoEF results were compared to CMR. Ninety-two echocardiograms were analyzed. Visual assessment by the expert (R = 0.86) and the novice reader (R = 0.80) correlated more closely with manual planimetry using Simpson's method than did AutoEF (R = 0.64). The correlation between AutoEF and CMR was 0.63, 0.28, and 0.51 for EF, end-diastolic and end-systolic volumes, respectively. The discrepancies in EF estimates between AutoEF and manual tracing using Simpson's method and between AutoEF and CMR preclude routine clinical use of AutoEF until it has been validated in a number of large, busy echocardiographic laboratories. Visual assessment of EF, with its strong correlation with quantitative EF, underscores its continued clinical utility.
A framework for feature extraction from hospital medical data with applications in risk prediction.
Tran, Truyen; Luo, Wei; Phung, Dinh; Gupta, Sunil; Rana, Santu; Kennedy, Richard Lee; Larkins, Ann; Venkatesh, Svetha
2014-12-30
Feature engineering is a time-consuming component of predictive modeling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type or risk prediction task. We contrast auto-extracted features to baselines generated from the Elixhauser comorbidities. Hospital medical records were transformed to event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task, comparing with baseline feature sets generated from the Elixhauser comorbidities. The prediction model was logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6, and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set using socio-demographic information and medical records outperformed baselines derived from the socio-demographic information and Elixhauser comorbidities over 20 settings (5 prediction horizons over 4 diseases). In particular, for 30-day prediction, the AUCs are: COPD-baseline: 0.60 (95% CI: 0.57, 0.63), auto-extracted: 0.67 (0.64, 0.70); diabetes-baseline: 0.60 (0.58, 0.63), auto-extracted: 0.67 (0.64, 0.69); mental disorders-baseline: 0.57 (0.54, 0.60), auto-extracted: 0.69 (0.64, 0.70); pneumonia-baseline: 0.61 (0.59, 0.63), auto-extracted: 0.70 (0.67, 0.72). The advantages of auto-extracted standard features from complex medical records, in a disease- and task-agnostic manner, were demonstrated. Auto-extracted features have good predictive power over multiple time horizons. Such feature sets have the potential to form the foundation of complex automated analytic tasks.
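Extracting features from event sequences at several temporal scales can be sketched as counting each event code inside a set of look-back windows. The window lengths, the `code@horizon` naming scheme, and the day-based timestamps are assumptions for illustration, not the platform's schema:

```python
from collections import Counter

def extract_features(events, t_ref, horizons=(30, 60, 90, 180, 365)):
    """Bag-of-events features at multiple temporal scales: for each
    look-back window (days before t_ref), count each event code.
    events: list of (day, code) pairs; feature names are assumptions."""
    feats = Counter()
    for day, code in events:
        delta = t_ref - day
        if delta < 0:
            continue  # ignore events after the reference time
        for h in horizons:
            if delta <= h:
                feats[f"{code}@{h}d"] += 1
    return dict(feats)
```

The same extraction runs unchanged for any disease or prediction task, which is what makes the approach disease- and task-agnostic.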
A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures
NASA Astrophysics Data System (ADS)
Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.
2017-10-01
An auto-installing tool on a USB drive allows quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, in the main site of an HEP Collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.
QR codes: next level of social media.
Gottesman, Wesley; Baum, Neil
2013-01-01
The QR code, which is short for quick response code, was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in the contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
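The statistic these engines compute reduces, for a single series, to the lag-k sample autocorrelation. A minimal sketch of that scalar quantity (the actual engines are parallel C++/VTK components; this shows only the mathematics):

```python
from statistics import mean

def autocorrelation(x, lag):
    """Sample autocorrelation of series x at the given lag: the covariance
    of (x[t], x[t+lag]) normalized by the variance of x, so lag 0 gives 1."""
    n = len(x)
    mu = mean(x)
    var = sum((v - mu) ** 2 for v in x) / n
    cov = sum((x[t] - mu) * (x[t + lag] - mu) for t in range(n - lag)) / n
    return cov / var
```

For a strictly alternating series the autocorrelation is strongly negative at lag 1 and strongly positive at lag 2, which is a convenient sanity check for any implementation.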
Ohno, S
1984-01-01
Three outstanding properties uniquely qualify repeats of base oligomers as the primordial coding sequences of all polypeptide chains. First, when compared with randomly generated base sequences in general, they are more likely to have long open reading frames. Second, periodical polypeptide chains specified by such repeats are more likely to assume either alpha-helical or beta-sheet secondary structures than are polypeptide chains of random sequence. Third, provided that the number of bases in the oligomeric unit is not a multiple of 3, these internally repetitious coding sequences are impervious to randomly sustained base substitutions, deletions, and insertions. This is because the recurring periodicity of their polypeptide chains is given by three consecutive copies of the oligomeric unit translated in three different reading frames. Accordingly, when one reading frame is open, the other two are automatically open as well, all three being capable of coding for polypeptide chains of identical periodicity. Under this circumstance, a frame shift due to the deletion or insertion of a number of bases that is not a multiple of 3 fails to alter the down-stream amino acid sequence, and even a base change causing premature chain-termination can silence only one of the three potential coding units. Newly arisen coding sequences in modern organisms are oligomeric repeats, and most of the older genes retain various vestiges of their original internal repetitions. Some of the genes (e.g., oncogenes) have even inherited the property of being impervious to randomly sustained base changes.
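The third property, that a repeat whose unit length is not a multiple of 3 yields the same periodic polypeptide in all three reading frames, can be demonstrated directly. The tetramer GCTG and the minimal codon subset below are illustrative choices, not taken from the paper:

```python
# Minimal codon subset sufficient for this demonstration (standard code).
CODONS = {"GCT": "A", "GGC": "G", "TGG": "W", "CTG": "L"}

def translate(seq, frame):
    """Translate seq starting at the given frame offset (0, 1 or 2)."""
    return "".join(CODONS[seq[i:i + 3]]
                   for i in range(frame, len(seq) - 2, 3))

def is_rotation(a, b):
    """True if string a is a cyclic rotation of string b."""
    return len(a) == len(b) and b in a + a

repeat = "GCTG" * 12   # unit length 4: NOT a multiple of 3
peptides = [translate(repeat, f) for f in (0, 1, 2)]
```

All three frames are open and encode the same 4-residue periodicity (Ala-Gly-Trp-Leu) merely shifted in phase, so a frameshift cannot alter the downstream periodic amino acid sequence.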
Georg, Birgitte; Falktoft, Birgitte; Fahrenkrug, Jan
2016-12-01
The neuropeptide PACAP is expressed throughout the central and peripheral nervous system, where it modulates diverse physiological functions including neuropeptide gene expression. We here report that in human neuroblastoma NB-1 cells PACAP transiently induces its own expression. Maximal PACAP mRNA expression was found after stimulation with PACAP for 3 h. PACAP auto-regulation was found to be mediated by activation of PACAP-specific PAC1Rs, as PACAP had >100-fold higher efficacy than VIP, and the PAC1R-selective agonist Maxadilan potently induced PACAP gene expression. Experiments with pharmacological kinase inhibitors revealed that both PKA and novel, but not conventional, PKC isozymes were involved in the PACAP auto-regulation. Inhibition of MAPK/ERK kinase (MEK) also impeded the induction, and we found that PKA, novel PKC and ERK acted in parallel and were thus not part of the same pathways. The expression of the transcription factor EGR1, previously ascribed as a target of PACAP signalling, was found to be transiently induced by PACAP, and pharmacological inhibition of either PKC or MEK1/2 abolished PACAP-mediated EGR1 induction. In contrast, inhibition of PKA increased PACAP-mediated EGR1 induction. Experiments using siRNA against EGR1 to lower its expression did, however, not affect the PACAP auto-regulation, indicating that this immediate early gene product is not part of PACAP auto-regulation in NB-1 cells. We here reveal that in NB-1 neuroblastoma cells PACAP induces its own expression by activation of PAC1R, and that the signalling is different from the PAC1R signalling mediating induction of VIP in the same cells. PACAP auto-regulation depends on parallel activation of PKA, novel PKC isoforms, and ERK, while EGR1 does not seem to be part of the PACAP auto-regulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wagner, Tristan; Alexandre, Matthieu; Duran, Rosario; Barilone, Nathalie; Wehenkel, Annemarie; Alzari, Pedro M; Bellinzoni, Marco
2015-05-01
Signal transduction mediated by Ser/Thr phosphorylation in Mycobacterium tuberculosis has been intensively studied in the last years, as its genome harbors eleven genes coding for eukaryotic-like Ser/Thr kinases. Here we describe the crystal structure and the autophosphorylation sites of the catalytic domain of PknA, one of two protein kinases essential for pathogen's survival. The structure of the ligand-free kinase domain shows an auto-inhibited conformation similar to that observed in human Tyr kinases of the Src-family. These results reinforce the high conservation of structural hallmarks and regulation mechanisms between prokaryotic and eukaryotic protein kinases. © 2015 Wiley Periodicals, Inc.
Bridging the gap between the clinician and the patient with cryopyrin-associated periodic syndromes.
Cantarini, L; Lucherini, O M; Frediani, B; Brizi, M G; Bartolomei, B; Cimaz, R; Galeazzi, M; Rigante, D
2011-01-01
Cryopyrin-associated periodic syndromes are categorized as a spectrum of three autoinflammatory diseases, namely familial cold auto-inflammatory syndrome, Muckle-Wells syndrome and chronic infantile neurological cutaneous articular syndrome. All are caused by mutations in the NLRP3 gene coding for cryopyrin and result in active interleukin-1 release: their rarity and shared clinical indicators involving skin, joints, central nervous system and eyes often mean that correct diagnosis is delayed. Onset occurs early in childhood, and life-long therapy with interleukin-1 blocking agents usually leads to tangible clinical remission and inflammatory marker normalization in a large number of patients, justifying the need to facilitate early diagnosis and thus avoid irreversible negative consequences for tissues and organs.
Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien
2015-08-01
We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic and novel self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra. Copyright © 2015 John Wiley & Sons, Ltd.
A New Method for Measuring Text Similarity in Learning Management Systems Using WordNet
ERIC Educational Resources Information Center
Alkhatib, Bassel; Alnahhas, Ammar; Albadawi, Firas
2014-01-01
As text sources are getting broader, measuring text similarity is becoming more compelling. Automatic text classification, search engines and auto answering systems are samples of applications that rely on text similarity. Learning management systems (LMS) are becoming more important since electronic media is getting more publicly available. As…
NASA Technical Reports Server (NTRS)
Togai, Masaki
1990-01-01
Viewgraphs on commercial applications of fuzzy logic in Japan are presented. Topics covered include: suitable application area of fuzzy theory; characteristics of fuzzy control; fuzzy closed-loop controller; Mitsubishi heavy air conditioner; predictive fuzzy control; the Sendai subway system; automatic transmission; fuzzy logic-based command system for antilock braking system; fuzzy feed-forward controller; and fuzzy auto-tuning system.
Automatic classification of spectra from the Infrared Astronomical Satellite (IRAS)
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Self, Matthew; Taylor, William; Goebel, John; Volk, Kevin; Walker, Helen
1989-01-01
A new classification of infrared spectra collected by the Infrared Astronomical Satellite (IRAS) is presented. The spectral classes were discovered automatically by a program called AutoClass II. This program discovers (induces) classes from a database using a Bayesian probability approach. These classes can be used to give insight into the patterns that occur in the particular domain, in this case infrared astronomical spectroscopy. The classified spectra comprise the entire Low Resolution Spectra (LRS) Atlas of 5,425 sources. There are seventy-seven classes in this classification, and these in turn were meta-classified to produce nine meta-classes. The classification is presented as spectral plots, IRAS color-color plots, galactic distribution plots and class commentaries. Cross-reference tables, listing the sources by IRAS name and by AutoClass class, are also given. These classes include some well-known classes, such as the black-body class and silicate emission classes, but many other classes were unsuspected, while others show important subtle differences within the well-known classes.
Evaluation of a commercial automatic treatment planning system for prostate cancers.
Nawa, Kanabu; Haga, Akihiro; Nomoto, Akihiro; Sarmiento, Raniel A; Shiraishi, Kenshiro; Yamashita, Hideomi; Nakagawa, Keiichi
2017-01-01
Recent developments in radiation oncology treatment planning have led to software packages that facilitate automated intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) planning. Such solutions include site-specific modules, plan library methods, and algorithm-based methods. In this study, the plan quality for prostate cancer generated by the Auto-Planning module of the Pinnacle3 radiation therapy treatment planning system (v9.10, Fitchburg, WI) is retrospectively evaluated. The Auto-Planning module of Pinnacle3 uses a progressive optimization algorithm. Twenty-three prostate cancer cases, which had previously been planned and treated without lymph node irradiation, were replanned using the Auto-Planning module. Dose distributions were statistically compared with those of manual planning by the paired t-test at the 5% significance level. Auto-Planning was performed without any manual intervention. Planning target volume (PTV) dose and dose to the rectum were comparable between Auto-Planning and manual planning. The former, however, significantly reduced the dose to the bladder and femurs. Regression analysis was performed to examine the correlation between the bladder-PTV volume overlap, divided by the total bladder volume, and the resultant V70. The findings showed that manual planning typically follows a logistic pattern for this dose constraint, whereas Auto-Planning shows a more linear tendency. By calculating the Akaike information criterion (AIC) to validate the statistical model, a reduction of inter-operator variation in Auto-Planning was shown. We showed that, for prostate cancer, the Auto-Planning module provided plans that are better than or comparable with those of manual planning. By comparing our results with those previously reported for head and neck cancer treatment, we recommend the homogeneous plan quality generated by the Auto-Planning module, which exhibits less dependence on anatomic complexity.
Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
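The study's model comparison rests on a standard technique: fit both a linear and a logistic dose-response curve to the same data and compare their AIC values. A minimal sketch of that comparison on synthetic data (the study's actual V70/overlap measurements are not reproduced here; all values below are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):
    return a * x + b

def logistic(x, L, k, x0):
    return L / (1.0 + np.exp(-k * (x - x0)))

def aic(y, y_fit, n_params):
    # Gaussian-likelihood AIC up to a constant: n*ln(RSS/n) + 2k.
    n = len(y)
    rss = float(np.sum((y - y_fit) ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Illustrative stand-in for the study's data: V70 (%) vs. normalized
# bladder-PTV overlap, generated with a logistic shape plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 60.0 / (1.0 + np.exp(-10.0 * (x - 0.5))) + rng.normal(0.0, 2.0, x.size)

p_lin, _ = curve_fit(linear, x, y)
p_log, _ = curve_fit(logistic, x, y, p0=[60.0, 10.0, 0.5], maxfev=10000)

aic_lin = aic(y, linear(x, *p_lin), 2)
aic_log = aic(y, logistic(x, *p_log), 3)
# The lower-AIC model wins; here the data are logistic by construction.
```

A smaller AIC for the logistic fit on the manual-planning data, and for the linear fit on the Auto-Planning data, is the kind of evidence the abstract describes.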
PERI - Auto-tuning Memory Intensive Kernels for Multicore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H; Williams, Samuel; Datta, Kaushik
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
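The paper's central idea, generating kernel variants and empirically searching for the fastest, can be sketched in a few lines. This toy tuner (the unrolling scheme and all names are illustrative, not the authors' generators) times unrolled variants of a CSR SpMV and keeps the best:

```python
import random
import time

def spmv_csr(vals, cols, rowptr, x):
    # Baseline CSR sparse matrix-vector multiply.
    y = [0.0] * (len(rowptr) - 1)
    for i in range(len(y)):
        s = 0.0
        for k in range(rowptr[i], rowptr[i + 1]):
            s += vals[k] * x[cols[k]]
        y[i] = s
    return y

def make_variant(block):
    # Toy "code generator": emit a variant whose inner loop is processed
    # `block` elements at a time.
    def kernel(vals, cols, rowptr, x):
        y = [0.0] * (len(rowptr) - 1)
        for i in range(len(y)):
            s, k, end = 0.0, rowptr[i], rowptr[i + 1]
            while k + block <= end:
                s += sum(vals[k + j] * x[cols[k + j]] for j in range(block))
                k += block
            while k < end:
                s += vals[k] * x[cols[k]]
                k += 1
            y[i] = s
        return y
    return kernel

def autotune(candidates, args, trials=3):
    # Search-based tuning: time every generated variant, keep the fastest.
    best, best_t = None, float("inf")
    for block in candidates:
        kern = make_variant(block)
        for _ in range(trials):
            t0 = time.perf_counter()
            kern(*args)
            dt = time.perf_counter() - t0
            if dt < best_t:
                best, best_t = block, dt
    return best

# Random sparse matrix in CSR form.
random.seed(1)
n, per_row = 200, 8
vals, cols, rowptr = [], [], [0]
for _ in range(n):
    for _ in range(per_row):
        vals.append(random.random())
        cols.append(random.randrange(n))
    rowptr.append(len(vals))
x = [1.0] * n
best_block = autotune([1, 2, 4, 8], (vals, cols, rowptr, x))
```

A real tuner searches a far larger space (register blocking, cache blocking, prefetch distances, SIMD variants) and generates low-level code, but the select-by-measurement loop is the same.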
Automatic brightness control of laser spot vision inspection system
NASA Astrophysics Data System (ADS)
Han, Yang; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin
2009-10-01
The laser spot detection system aims to locate the center of the laser spot after long-distance transmission. The accuracy of positioning the laser spot center depends very much on the system's ability to control brightness. In this paper, a high-performance automatic brightness control system is designed on an FPGA. Brightness is controlled by a combination of an auto aperture (video driver) and an adaptive exposure algorithm, and clear images with proper exposure are obtained under different illumination conditions. The automatic brightness control system creates favorable conditions for the subsequent positioning of the laser spot center, and experimental results show that the measurement accuracy of the system is effectively guaranteed: the average error of the spot center is within 0.5 mm.
Automatic recloser circuit breaker integrated with GSM technology for power system notification
NASA Astrophysics Data System (ADS)
Lada, M. Y.; Khiar, M. S. A.; Ghani, S. A.; Nawawi, M. R. M.; Rahim, N. H.; Sinar, L. O. M.
2015-05-01
Lightning is one type of transient fault that usually causes the circuit breaker in the distribution board to trip due to over-current detection. The instant tripping of the circuit breaker clears the fault in the system. Unfortunately, most circuit-breaker systems are manually operated, so the power line is only re-energized after the fault-clearing process is finished. Auto-reclose circuits are used on transmission lines to maintain the supply of quality electrical power to customers. In this project, an automatic reclose circuit breaker for low-voltage usage is designed. The Auto Reclose Circuit Breaker (ARCB) trips if the current sensor detects a current exceeding the rated current of the miniature circuit breaker (MCB) used; the fault condition is then cleared automatically and the power line returns to its normal condition. A Global System for Mobile Communication (GSM) module sends an SMS to the person in charge whenever a trip occurs. If over-current occurs three times, the system fully trips (open circuit) and at the same time sends an SMS to the person in charge. In this project, 1 A is set as the rated current, and any current exceeding 1 A causes the system to trip. The system also provides additional notifications for the user, such as an emergency light and a warning system.
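The trip/reclose/lockout behaviour described above reduces to a small state machine. A hedged sketch (the `send_sms` hook is a hypothetical stand-in for the GSM modem driver; class and method names are illustrative):

```python
def send_sms(message):
    # Hypothetical notification hook; a real build drives a GSM modem here.
    print("SMS:", message)

class AutoRecloseBreaker:
    """Sketch of the ARCB logic: trip on over-current, auto-reclose,
    and lock out (with SMS alerts) after three trips."""

    def __init__(self, rated_current=1.0, max_trips=3):
        self.rated = rated_current
        self.max_trips = max_trips
        self.trip_count = 0
        self.locked_out = False
        self.closed = True

    def on_current_sample(self, amps):
        if self.locked_out:
            return "locked_out"
        if amps > self.rated:
            self.trip_count += 1
            self.closed = False
            send_sms(f"Breaker tripped ({self.trip_count}/{self.max_trips})")
            if self.trip_count >= self.max_trips:
                self.locked_out = True
                send_sms("Breaker locked out; manual reset required")
                return "locked_out"
            # Fault assumed cleared; re-close automatically.
            self.closed = True
            return "reclosed"
        return "closed"
```

Feeding three over-current samples walks the breaker through two automatic recloses and then into lockout, matching the behaviour in the abstract.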
Skalec, Tomasz; Górecka-Dolny, Agnieszka; Zieliński, Stanisław; Gibek, Mirosław; Stróżecki, Łukasz; Kübler, Andrzej
2017-01-01
The automatic control module of end-tidal volatile agents (EtC) was designed to reduce the consumption of anaesthetic gases, increase the stability of general anaesthesia and reduce the need for adjustments in the settings of the anaesthesia machine. The aim of this study was to verify these hypotheses. The course of general anaesthesia with the use of the EtC module was analysed for haemodynamic stability, depth of anaesthesia, end-expiratory concentration of anaesthetic, number of ventilator key presses, fentanyl supply, consumption of volatile agents and anaesthesia and operation times. These data were compared with the data obtained during general anaesthesia controlled manually and were processed with statistical tests. Seventy-four patients underwent general anaesthesia for scheduled operations. Group AUTO-ET (n = 35) was anaesthetized with EtC, and group MANUAL-ET (n = 39) was controlled manually. Both populations presented similar anaesthesia stability. No differences were noted in the time of anaesthesia, saturation up to MAC 1.0 or awakening. Data revealed no differences in mean EtAA or the fentanyl dose. The AUTO-ET group exhibited fewer key presses per minute, 0.0603 min⁻¹, whereas the MANUAL-ET exhibited a value of 0.0842 min⁻¹; P = 0.001. The automatic group consumed more anaesthetic and oxygen per minute (sevoflurane 0.1171 mL min⁻¹; IQR: 0.0503; oxygen 1.8286 mL min⁻¹, IQR: 1.3751) than MANUAL-ET (sevoflurane 0.0824 mL min⁻¹, IQR: 0.0305; oxygen 1.288 mL min⁻¹, IQR: 0.6517) (P = 0.0028 and P = 0.0171, respectively). Both methods are equally stable and safe for patients. The consumption of volatile agents was significantly increased in the AUTO-ET group. EtC considerably reduces the number of key presses.
MAISTAS: a tool for automatic structural evaluation of alternative splicing products.
Floris, Matteo; Raimondo, Domenico; Leoni, Guido; Orsini, Massimiliano; Marcatili, Paolo; Tramontano, Anna
2011-06-15
Analysis of the human genome revealed that the amount of transcribed sequence is an order of magnitude greater than the number of predicted and well-characterized genes. A sizeable fraction of these transcripts is related to alternatively spliced forms of known protein coding genes. Inspection of the alternatively spliced transcripts identified in the pilot phase of the ENCODE project has clearly shown that often their structure might substantially differ from that of other isoforms of the same gene, and therefore that they might perform unrelated functions, or that they might even not correspond to a functional protein. Identifying these cases is obviously relevant for the functional assignment of gene products and for the interpretation of the effect of variations in the corresponding proteins. Here we describe a publicly available tool that, given a gene or a protein, retrieves and analyses all its annotated isoforms, provides users with three-dimensional models of the isoform(s) of interest whenever possible and automatically assesses whether homology derived structural models correspond to plausible structures. This information is clearly relevant. When the homology model of some isoforms of a gene does not seem structurally plausible, the implications are that either they assume a structure unrelated to that of the other isoforms of the same gene with presumably significant functional differences, or that they do not correspond to functional products. We provide indications that the second hypothesis is likely to be true for a substantial fraction of the cases. http://maistas.bioinformatica.crs4.it/.
Tools for model-building with cryo-EM maps
Terwilliger, Thomas Charles
2018-01-01
There are new tools available to you in Phenix for interpreting cryo-EM maps. You can automatically sharpen (or blur) a map with phenix.auto_sharpen and you can segment a map with phenix.segment_and_split_map. If you have overlapping partial models for a map, you can merge them with phenix.combine_models. If you have a protein-RNA complex and protein chains have been accidentally built in the RNA region, you can try to remove them with phenix.remove_poor_fragments. You can put these together and automatically sharpen, segment and build a map with phenix.map_to_model.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support tools available for the automatic parallelization of computer programs. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message-passing code. Comparison routines are then run for debugging purposes, in essence ensuring that the code transformation was accurate.
Automatic Coding of Dialogue Acts in Collaboration Protocols
ERIC Educational Resources Information Center
Erkens, Gijsbert; Janssen, Jeroen
2008-01-01
Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic coding procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…
New auto-segment method of cerebral hemorrhage
NASA Astrophysics Data System (ADS)
Wang, Weijiang; Shen, Tingzhi; Dang, Hua
2007-12-01
A novel method for automatic segmentation of cerebral hemorrhage (CH) in computerized tomography (CT) images is presented in this paper, which uses an expert system that models human knowledge about the CH segmentation problem. The algorithm adopts a series of special steps and extracts easily overlooked CH features found through statistics over a large number of real CH images, such as region area, region CT number, region smoothness, and statistical relationships between CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these features, a decision tree that models the human knowledge about the CH segmentation problem is built, ensuring the rationality and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and soundness of the automatic segmentation; the good accuracy and fast speed make wide practical application possible.
Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias
2018-03-01
To compare preference for and performance of manually selected programmes against an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated measures study. Participants were fit with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four sound scenarios (speech in: quiet, noise, loud noise and a car). Following a 4-week trial, preferences were reassessed and the user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participants' manual programme preferences for the scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.
Are circadian rhythms new pathways to understand Autism Spectrum Disorder?
Geoffray, M-M; Nicolas, A; Speranza, M; Georgieff, N
2016-11-01
Autism Spectrum Disorder (ASD) is a frequent neurodevelopmental disorder. ASD is probably the result of intricate interactions between genes and environment that progressively alter the development of brain structures and functions. Circadian rhythms are a complex intrinsic timing system composed of almost as many clocks as there are body cells. They regulate a variety of physiological and behavioral processes such as the sleep-wake rhythm. ASD is often associated with sleep disorders and low levels of melatonin. This first point raises the hypothesis that circadian rhythms could play a role in ASD etiology. Moreover, circadian rhythms are generated by auto-regulatory genetic feedback loops, driven by the transcription factors CLOCK and BMAL1, which drive daily transcription patterns of a large number of clock-controlled genes (CCGs) in different cellular contexts across tissues. Among these are CCGs coding for synaptic molecules associated with ASD susceptibility. Furthermore, evidence is emerging that circadian rhythms control the timing of brain development processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fast and Adaptive Auto-focusing Microscope
NASA Astrophysics Data System (ADS)
Obara, Takeshi; Igarashi, Yasunobu; Hashimoto, Koichi
Optical microscopes are widely used in biological and medical research. Using a microscope, we can observe cellular movements, including intracellular ions and molecules tagged with fluorescent dyes, at high magnification. However, a freely motile cell easily escapes from the 3D field of view of a typical microscope. Therefore, we propose a novel auto-focusing algorithm and develop an auto-focusing and tracking microscope. The XYZ positions of the microscope stage are feedback controlled to focus on and track the cell automatically. A bright-field image is used to estimate the cellular position: XY centroids estimate the XY position of the tracked cell, while the Z position is estimated from the diffraction pattern around the cell membrane, the so-called Depth from Diffraction (DFDi) method. However, this method is not robust to individual differences between cells because the diffraction pattern depends on each cell's shape. Therefore, in this study, we propose a real-time correction of DFDi using the 2D Laplacian of an intracellular area as a measure of focus quality. To evaluate the performance of the developed algorithm and microscope, we auto-focus on and track a freely moving paramecium. In the experiment, the paramecium is auto-focused and kept inside the scope of the microscope for 45 s. The measured focal error is within 5 µm, while the length and thickness of the paramecium are about 200 µm and 50 µm, respectively.
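The focus-quality measure the authors describe, a 2D Laplacian over an intracellular area, is easy to illustrate: the Laplacian response is large for a sharp image and small for a defocused one. A sketch on a synthetic disc (illustrative only, not the authors' implementation):

```python
import numpy as np

def laplacian_focus(img):
    # Sum of squared 5-point Laplacian responses: large when the image is
    # sharply focused, small when it is blurred.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(np.sum(lap ** 2))

def box_blur(img, k=5):
    # Crude separable box blur standing in for optical defocus.
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

# Synthetic "cell": a bright disc, sharp vs. defocused.
yy, xx = np.mgrid[0:64, 0:64]
sharp = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
blurred = box_blur(sharp)
```

In a focus servo, this scalar would be maximized over the Z stage position to correct the DFDi estimate in real time.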
AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source
NASA Astrophysics Data System (ADS)
Nightingale, J. W.; Dye, S.; Massey, Richard J.
2018-05-01
This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
ASA24 enables multiple automatically coded self-administered 24-hour recalls and food records
A freely available web-based tool for epidemiologic, interventional, behavioral, or clinical research from NCI that enables multiple automatically coded self-administered 24-hour recalls and food records.
A Tire Air Maintenance Technology
ERIC Educational Resources Information Center
Pierce, Alan
2012-01-01
Improperly inflated car tires can reduce gas mileage and car performance, speed up tire wear, and even cause a tire to blow out. The AAA auto club recommends that drivers check their car's tire pressure at least once a month. Wouldn't it be nice, though, if someone came up with a tire pressure-monitoring system that automatically kept…
Automotive Power Flow System; Auto Mechanics I: 9043.04.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
This automotive power flow system course sets the foundation in the theory of operation of the standard and automatic transmission, clutch assemblies, drive-line and rear axle assemblies. This is a one or two quinmester credit course covering 45 clock hours. In the fourth quinmester course in the tenth year, instruction consists of lectures,…
2016-10-06
Copyright 2016, Compsim, All Rights Reserved. KEEL® Technology in support of Mission Planning and Execution delivering Adaptive… Executing, and Auditing). This paper focuses on the decision-making component (#2) with the use of Knowledge Enhanced Electronic Logic (KEEL) Technology: eliminate "coding errors" (auto-generated code); 100% explainable and auditable.
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
NASA Technical Reports Server (NTRS)
Staats, Matt
2009-01-01
We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
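MC/DC requires, for each condition in a decision, a pair of test inputs that differ only in that condition and flip the decision's outcome. A brute-force sketch of those independence pairs (written in Python rather than Java/JPF, purely to illustrate the coverage obligations the tool must satisfy):

```python
from itertools import product

def mcdc_pairs(predicate, n_conditions):
    """For each condition index, collect input pairs that differ only in
    that condition and change the predicate's outcome (MC/DC pairs)."""
    pairs = {i: [] for i in range(n_conditions)}
    for inputs in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            flipped = list(inputs)
            flipped[i] = not flipped[i]
            flipped = tuple(flipped)
            if predicate(*inputs) != predicate(*flipped):
                pairs[i].append((inputs, flipped))
    return pairs

# Example decision with three conditions: a and (b or c)
pred = lambda a, b, c: a and (b or c)
pairs = mcdc_pairs(pred, 3)
# Every condition has at least one independence pair, so MC/DC is
# satisfiable for this decision; e.g. flipping `a` with b=True, c=False
# toggles the outcome.
```

A model checker like JPF does the analogous search symbolically over program paths rather than by brute-force enumeration of condition vectors.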
Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes
NASA Astrophysics Data System (ADS)
Farzan Sabahi, Mohammad; Dehghanfard, Ali
2014-12-01
The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of information by unintended listeners. In fact, low probability of detection and low probability of intercept are two important parameters for increasing the performance of the system. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the data by spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. Using a one-dimensional Bernoulli chaotic sequence as a spreading code has previously been proposed in the literature. The main feature of this sequence is its negative auto-correlation at lag 1, which, with proper design, increases the efficiency of communication systems based on these codes. Employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences as spreading codes is proposed. The performance of multi-user synchronous and asynchronous DS-CDMA systems is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate improved performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Like the one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. Besides, construction of complex sequences with lower average cross-correlation is possible with the proposed method.
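The key quantity throughout is the auto-correlation of the chaotic spreading chips. The specific one- and two-dimensional Bernoulli maps used in the paper are not given here, so this sketch substitutes the logistic map as a generic chaotic generator and measures the lag-1 auto-correlation of the resulting ±1 chips:

```python
def logistic_map(x):
    # Stand-in chaotic generator (the paper's Bernoulli maps are not
    # reproduced here): x -> 4x(1 - x) on (0, 1).
    return 4.0 * x * (1.0 - x)

def chip_sequence(seed, length):
    # Threshold the chaotic orbit into +/-1 spreading chips.
    x, chips = seed, []
    for _ in range(length):
        x = logistic_map(x)
        chips.append(1.0 if x >= 0.5 else -1.0)
    return chips

def autocorrelation(chips, lag):
    n = len(chips) - lag
    return sum(chips[k] * chips[k + lag] for k in range(n)) / n

chips = chip_sequence(seed=0.37, length=5000)
rho1 = autocorrelation(chips, 1)  # near zero for this map; the paper's
                                  # Bernoulli design targets a negative value
```

Swapping in the designed Bernoulli maps would make `rho1` negative at lag 1, which is the property the abstract credits for the efficiency gain.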
Auto-Configuration Protocols in Mobile Ad Hoc Networks
Villalba, Luis Javier García; Matesanz, Julián García; Orozco, Ana Lucila Sandoval; Díaz, José Duván Márquez
2011-01-01
The TCP/IP protocol allows the different nodes in a network to communicate by associating a different IP address to each node. In wired or wireless networks with infrastructure, we have a server or node acting as such which correctly assigns IP addresses, but in mobile ad hoc networks there is no such centralized entity capable of carrying out this function. Therefore, a protocol is needed to perform the network configuration automatically and in a dynamic way, which will use all nodes in the network (or part thereof) as if they were servers that manage IP addresses. This article reviews the major proposed auto-configuration protocols for mobile ad hoc networks, with particular emphasis on one of the most recent: D2HCP. This work also includes a comparison of auto-configuration protocols for mobile ad hoc networks by specifying the most relevant metrics, such as a guarantee of uniqueness, overhead, latency, dependency on the routing protocol and uniformity. PMID:22163814
Equipment for linking the AutoAnalyzer on-line to a computer
Simpson, D.; Sims, G. E.; Harrison, M. I.; Whitby, L. G.
1971-01-01
An Elliott 903 computer with 8K central core store and magnetic tape backing store has been operated for approximately 20 months in a clinical chemistry laboratory. Details of the equipment designed for linking AutoAnalyzers on-line to the computer are described, and data presented concerning the time required by the computer for different processes. The reliability of the various components in daily operation is discussed. Limitations in the system's capabilities have been defined, and ways of overcoming these are delineated. At present, routine operations include the preparation of worksheets for a limited range of tests (five channels), monitoring of up to 11 AutoAnalyzer channels at a time on a seven-day week basis (with process control and automatic calculation of results), and the provision of quality control data. Cumulative reports can be printed out on those analyses for which computer-prepared worksheets are provided but the system will require extension before these can be issued sufficiently rapidly for routine use. PMID:5551384
Real-time deblurring of handshake blurred images on smartphones
NASA Astrophysics Data System (ADS)
Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser
2015-02-01
This paper discusses an Android app for removing blur that is introduced by handshake when taking images with a smartphone. The algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from the artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image, and the second is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low-rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshake. This approximation image does not suffer from blurring while retaining the image brightness and contrast information. The singular values extracted from the low-rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm previously developed for the same purpose.
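The low-rank approximation step is plain truncated SVD: keep only the largest singular values of the auto-exposure image. A minimal sketch on a synthetic image (illustrative; the app's actual rank selection and singular-value combination step are not reproduced):

```python
import numpy as np

def low_rank(img, k):
    # Truncated SVD: keep the k largest singular values/vectors.
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Synthetic stand-in for the auto-exposure image: smooth structure + texture.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(xx / 5.0) * np.cos(yy / 7.0) + 0.1 * rng.standard_normal((64, 64))

approx4 = low_rank(img, 4)
approx16 = low_rank(img, 16)
err4 = np.linalg.norm(img - approx4)
err16 = np.linalg.norm(img - approx16)
# More retained singular values give a closer approximation; a small k
# keeps the global brightness/contrast structure while discarding detail
# (including blur-induced detail), which is what the app exploits.
```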
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency: at the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of best management practices (BMPs).
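The auto-adaptive idea, letting the genetic algorithm retune its own parameters from the population state instead of requiring manual calibration, can be sketched generically. The adaptation rule and the objective below are illustrative stand-ins, not the authors' watershed model or their actual scheme:

```python
import random

def cost(x):
    # Illustrative objective standing in for the watershed/economic model.
    return sum(v * v for v in x)

def adaptive_ga(dim=5, pop_size=30, gens=60, seed=42):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        # Auto-adaptive step: the mutation rate tracks population spread,
        # so no manual calibration of the rate is needed.
        spread = cost(pop[-1]) - cost(pop[0])
        mut_rate = max(0.02, min(0.5, 0.1 + 0.01 * spread))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(u + v) / 2 for u, v in zip(a, b)]   # crossover
            if rng.random() < mut_rate:                   # adaptive mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = parents + children                          # elitist survival
    return min(pop, key=cost)

best = adaptive_ga()
```

The paper's framework additionally maintains a Pareto front over cost versus pollutant reduction; this single-objective loop only shows where the self-tuning enters.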
NASA Technical Reports Server (NTRS)
Ding, R. Jeffrey; Oelgoetz, Peter A.
1999-01-01
The Auto-Adjustable Pin Tool for Friction Stir Welding was developed at the Marshall Space Flight Center to address process deficiencies unique to the FSW process. The auto-adjustable pin tool, also called the retractable pin tool (RPT), automatically withdraws the welding probe of the pin tool into the pin tool's shoulder. The primary functions of the auto-adjustable pin tool are to allow keyhole closeout, necessary for circumferential welding and localized weld repair, and automated pin-length adjustment for the welding of tapered material thicknesses. An overview of the RPT hardware is presented. The paper follows with studies conducted using the RPT, which was used to demonstrate two capabilities: welding tapered material thickness and closing out the keyhole in a circumferential weld. The retracted pin-tool regions in aluminum-lithium 2195 friction stir weldments were studied through mechanical property testing and metallurgical sectioning. Correlations can be made between retractable pin-tool programmed parameters, process parameters, microstructure, and resulting weld quality.
Synergistic Effect of Auto-Activation and Small RNA Regulation on Gene Expression
NASA Astrophysics Data System (ADS)
Xiong, Li-Ping; Ma, Yu-Qiang; Tang, Lei-Han
2010-09-01
Auto-activation and small ribonucleic acid (RNA)-mediated regulation are two important mechanisms in controlling gene expression. We study the synergistic effect of these two regulations on gene expression. It is found that under this combinatorial regulation, gene expression exhibits bistable behaviors at the transition regime, while each of these two regulations, if working solely, only leads to monostability. Within the stochastic framework, the base pairing strength between sRNA and mRNA plays an important role in controlling the transition time between on and off states. The noise strength of protein number in the off state approaches 1 and is smaller than that in the on state. The noise strength also depends on which parameters, the feedback strength or the synthesis rate of small RNA, are tuned in switching the gene expression on and off. Our findings may provide a new insight into gene-regulation mechanism and can be applied in synthetic biology.
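The bistability claim can be checked in the deterministic limit: with Hill-type auto-activation and an effective first-order loss (the sRNA channel folded into the degradation rate), the drift has three zeros, two attracting and one repelling. A sketch with illustrative parameters (not taken from the paper):

```python
def drift(p, a=0.25, b=4.0, K=2.0, g=1.0):
    # dp/dt: basal synthesis + Hill-type auto-activation - first-order loss
    # (the sRNA-mediated degradation is folded into the effective rate g).
    return a + b * p * p / (K * K + p * p) - g * p

# Locate fixed points as sign changes of the drift on a grid, and classify
# their stability from the direction of the flip.
stable, unstable = [], []
grid = [i * 0.01 for i in range(501)]   # p in [0, 5]
for lo, hi in zip(grid, grid[1:]):
    f_lo, f_hi = drift(lo), drift(hi)
    if f_lo > 0 > f_hi:
        stable.append((lo + hi) / 2)     # drift flips + to -: attracting
    elif f_lo < 0 < f_hi:
        unstable.append((lo + hi) / 2)   # drift flips - to +: repelling
# Two stable states separated by one unstable state: bistability, the
# on/off switching behaviour the abstract describes in the stochastic setting.
```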
jMetalCpp: optimizing molecular docking problems with a C++ metaheuristic framework.
López-Camacho, Esteban; García Godoy, María Jesús; Nebro, Antonio J; Aldana-Montes, José F
2014-02-01
Molecular docking is a method for structure-based drug design and structural molecular biology, which attempts to predict the position and orientation of a small molecule (ligand) in relation to a protein (receptor) to produce a stable complex with a minimum binding energy. One of the most widely used software packages for this purpose is AutoDock, which incorporates three metaheuristic techniques. We propose the integration of AutoDock with jMetalCpp, an optimization framework, thereby providing both single- and multi-objective algorithms that can be used to effectively solve docking problems. The resulting combination of AutoDock + jMetalCpp allows users of the former to easily use the metaheuristics provided by the latter. In this way, biologists have at their disposal a richer set of optimization techniques than those already provided in AutoDock. Moreover, designers of metaheuristic techniques can use molecular docking for case studies, which can lead to more efficient algorithms oriented to solving the target problems. jMetalCpp software adapted to AutoDock is freely available as a C++ source code at http://khaos.uma.es/AutodockjMetal/.
Automated Computer Access Request System
NASA Technical Reports Server (NTRS)
Snook, Bryan E.
2010-01-01
The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a workflow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where the user accounts reside. This allows for future extensibility in supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies users via e-mail to revalidate their account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
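Approach (a) can be illustrated directly with SymPy: define the CV symbolically, differentiate with respect to every atomic coordinate, and generate numeric callables (a stand-in for emitting C++ source). The distance CV below is a simpler example than the paper's local radius of curvature, used only to show the pattern:

```python
import sympy as sp

# A simple CV: the distance between two atoms.
x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2")
cv = sp.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)

# Symbolic derivatives w.r.t. every coordinate: exactly the boilerplate a
# hand-written PLUMED CV implementation must otherwise provide.
coords = [x1, y1, z1, x2, y2, z2]
grads = [sp.diff(cv, c) for c in coords]

# Generate numeric callables from the symbolic expressions.
f = sp.lambdify(coords, cv, "math")
g = sp.lambdify(coords, grads, "math")

pt = (0.0, 0.0, 0.0, 3.0, 4.0, 0.0)
d = f(*pt)    # distance of the two points
dd = g(*pt)   # analytic gradient, e.g. d(cv)/dx1 = (x1 - x2)/cv
```

Approach (b), automatic differentiation with Stan Math, computes the same gradients without ever forming the symbolic expressions, at the cost of carrying the AD library at run time.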
Auto-tracking system for human lumbar motion analysis.
Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong
2011-01-01
Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences, and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software, utilizing a particle filter, locates the vertebra-of-interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). The automatic tracking software could, however, successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. In a clinical trial, 10 healthy subjects and 2 lumbar spondylolisthesis patients were enrolled. The measurement with auto-tracking of DVF provided information not seen in conventional X-ray, suggesting the potential of the proposed system for clinical applications.
Paproki, A; Engstrom, C; Chandra, S S; Neubert, A; Fripp, J; Crozier, S
2014-09-01
To validate an automatic scheme for the segmentation and quantitative analysis of the medial meniscus (MM) and lateral meniscus (LM) in magnetic resonance (MR) images of the knee. We analysed sagittal water-excited double-echo steady-state MR images of the knee from a subset of the Osteoarthritis Initiative (OAI) cohort. The MM and LM were automatically segmented in the MR images based on a deformable model approach. Quantitative parameters including volume, subluxation and tibial-coverage were automatically calculated for comparison (Wilcoxon tests) between knees with variable radiographic osteoarthritis (rOA), medial and lateral joint space narrowing (mJSN, lJSN) and pain. Automatic segmentations and estimated parameters were evaluated for accuracy using manual delineations of the menisci in 88 pathological knee MR examinations at the baseline and 12-month time points. The median (95% confidence interval (CI)) Dice similarity index (DSI) (2 * |Auto ∩ Manual| / (|Auto| + |Manual|) * 100) between manual and automated segmentations for the MM and LM volumes were 78.3% (75.0-78.7), 83.9% (82.1-83.9) at baseline and 75.3% (72.8-76.9), 83.0% (81.6-83.5) at 12 months. Pearson coefficients between automatic and manual segmentation parameters ranged from r = 0.70 to r = 0.92. MM in rOA/mJSN knees had significantly greater subluxation and smaller tibial-coverage than no-rOA/no-mJSN knees. LM in rOA knees had significantly greater volumes and tibial-coverage than no-rOA knees. Our automated method successfully segmented the menisci in normal and osteoarthritic knee MR images and detected meaningful morphological differences with respect to rOA and joint space narrowing (JSN). Our approach will facilitate analyses of the menisci in prospective MR cohorts such as the OAI for investigations into pathophysiological changes occurring in early osteoarthritis (OA) development. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
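The Dice similarity index quoted in the abstract is straightforward to compute from two binary segmentation masks. A minimal sketch with invented toy masks (not study data):

```python
import numpy as np

# Dice similarity index as defined in the abstract:
# DSI = 2 * |Auto ∩ Manual| / (|Auto| + |Manual|) * 100.
def dice_similarity(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    overlap = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * overlap / (auto_mask.sum() + manual_mask.sum()) * 100.0

# Two invented, partially overlapping 4x4-voxel segmentations.
auto = np.zeros((8, 8), dtype=bool)
manual = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True      # 16 voxels segmented automatically
manual[3:7, 3:7] = True    # 16 voxels segmented manually; 9 voxels overlap
print(dice_similarity(auto, manual))  # 2*9/(16+16)*100 = 56.25
```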
García-Betances, Rebeca I; Huerta, Mónica K
2012-01-01
A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations. PMID:23569629
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as an Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor also provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs.
Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
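As a rough illustration of the kind of stencil kernel such parallelization targets, here is a minimal 2D wave-equation time step in NumPy. The actual study uses a cardiac action potential model, so this is a generic stand-in, not the authors' code; the loop over the grid is exactly the part an OpenACC pragma or OpenCL kernel would offload.

```python
import numpy as np

# One explicit time step of the 2D wave equation with a five-point
# Laplacian stencil and periodic boundaries (via np.roll):
#   u_next = 2*u - u_prev + (c*dt/dx)^2 * lap(u)
def wave_step(u_prev, u_curr, c2_dt2_dx2=0.25):
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0)
           + np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1)
           - 4.0 * u_curr)
    return 2.0 * u_curr - u_prev + c2_dt2_dx2 * lap

n = 64
u0 = np.zeros((n, n))
u0[n // 2, n // 2] = 1.0   # point disturbance at the grid center
u1 = u0.copy()
for _ in range(10):        # advance 10 time steps
    u0, u1 = u1, wave_step(u0, u1)
print(u1.shape)
```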
Automatic mathematical modeling for space application
NASA Technical Reports Server (NTRS)
Wang, Caroline K.
1987-01-01
A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.
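PSAM's core idea, turning a mathematical model into FORTRAN source automatically, can be sketched with SymPy's Fortran printer standing in for the original tool. The textbook choked nozzle mass-flow relation below is a made-up example, not an actual SSME model equation:

```python
import sympy as sp

# PSAM-style automatic code generation sketched with SymPy. The choked
# nozzle mass-flow relation is a compressible-flow textbook formula used
# here only as a stand-in model equation.
A, pt, Tt, R, gamma = sp.symbols("A pt Tt R gamma", positive=True)

mdot = (A * pt * sp.sqrt(gamma / (R * Tt))
        * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1))))

# Emit Fortran 95 source for the model equation.
fortran = sp.fcode(mdot, assign_to="mdot", standard=95)
print(fortran)
```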
A Real-Time Non-invasive Auto-bioluminescent Urinary Bladder Cancer Xenograft Model.
John, Bincy Anu; Xu, Tingting; Ripp, Steven; Wang, Hwa-Chain Robert
2017-02-01
The aim of this study was to develop an auto-bioluminescent urinary bladder cancer (UBC) xenograft animal model for pre-clinical research. The study used a humanized, bacteria-originated lux reporter system consisting of six genes (luxCDABEfrp) to express the components required for producing bioluminescent signals in human UBC J82, J82-Ras, and SW780 cells without exogenous substrates. Immune-deficient nude mice were inoculated with Lux-expressing UBC cells to develop auto-bioluminescent xenograft tumors that were monitored by imaging and physical examination. Auto-bioluminescent J82-Lux, J82-Ras-Lux, and SW780-Lux cell lines were established. Xenograft tumors derived from tumorigenic J82-Ras-Lux cells allowed serial, non-invasive, real-time imaging of tumor development prior to the presence of palpable tumors in animals. Using auto-bioluminescent tumorigenic cells enabled us to monitor the entire course of xenograft tumor development through tumor cell implantation, adaptation, and growth to visible/palpable tumors in animals.
Wang, Xiaodan; Yamaguchi, Nobuyasu; Someya, Takashi; Nasu, Masao
2007-10-01
The micro-colony method was used to enumerate viable bacteria in composts. Cells were vacuum-filtered onto polycarbonate filters and incubated for 18 h on LB medium at 37 degrees C. Bacteria on the filters were stained with SYBR Green II and enumerated using a newly developed micro-colony auto counting system, which can automatically count micro-colonies on half the area of the filter within 90 s. A large number of bacteria in samples retained physiological activity and formed micro-colonies within 18 h, whereas most could not form large colonies on conventional media within 1 week. The results showed that this convenient technique can enumerate viable bacteria in compost rapidly for efficient quality control.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method significantly outperforms the state-of-the-art methods and achieves the current best performance on the task of unsupervised video retrieval.
Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Green, Lawrence; Carle, Alan; Fagan, Mike
1999-01-01
Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of the use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
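The reverse-mode chain rule that ADJIFOR applies to FORTRAN source can be illustrated with a miniature Python tape: a single backward sweep yields the exact gradient with respect to every input, which is why the cost is roughly independent of the number of design variables. This is a sketch of the adjoint idea only, not ADJIFOR's actual source-transformation mechanism.

```python
# Miniature reverse-mode automatic differentiation. Each Var records its
# parents and the local partial derivative of the operation that made it.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local partial)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def backward(output):
    # Topologically order the graph, then apply the chain rule in reverse.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += local * node.grad

# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
print(f.value, x.grad, y.grad)  # 15.0 5.0 3.0
```

Forward mode (ADIFOR) would instead need one sweep per input, which is what makes the adjoint form attractive for hundreds to thousands of shape parameters.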
[Clinical overview of auto-inflammatory diseases].
Georgin-Lavialle, S; Rodrigues, F; Hentgen, V; Fayand, A; Quartier, P; Bader-Meunier, B; Bachmeyer, C; Savey, L; Louvrier, C; Sarrabay, G; Melki, I; Belot, A; Koné-Paut, I; Grateau, G
2018-04-01
Monogenic auto-inflammatory diseases are characterized by abnormalities in genes coding for proteins involved in innate immunity. They were initially described in mirror with auto-immune diseases because of the absence of circulating autoantibodies. Their main feature is the presence of peripheral blood inflammation during crises, without infection. The best-known auto-inflammatory diseases are mediated by interleukin-1 and comprise the following four diseases: familial Mediterranean fever, cryopyrinopathies, TNFRSF1A-related intermittent fever, and mevalonate kinase deficiency. Over the past 10 years, many other diseases have been discovered, especially thanks to progress in genetics. In this review, we present the current panorama of the main known auto-inflammatory diseases. Some of them are recurrent fevers with crises and remissions; others evolve more chronically; some are associated with immunodeficiency. From a physiopathological point of view, we can separate diseases mediated by interleukin-1 from diseases mediated by interferon. Some polygenic inflammatory diseases are then briefly described: Still disease, Schnitzler syndrome, and the aseptic abscesses syndrome. The diagnosis of auto-inflammatory disease is largely based on anamnesis, the presence of peripheral inflammation during attacks, and genetic analysis, which is increasingly powerful. Copyright © 2018 Société Nationale Française de Médecine Interne (SNFMI). Published by Elsevier SAS. All rights reserved.
Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications
NASA Astrophysics Data System (ADS)
Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves
2015-09-01
The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.
Gene Graphics: a genomic neighborhood data visualization web application.
Harrison, Katherine J; Crécy-Lagard, Valérie de; Zallot, Rémi
2018-04-15
The examination of gene neighborhoods is an integral part of comparative genomics, but no tools to produce publication-quality graphics of gene clusters are available. Gene Graphics is a straightforward web application for creating such visuals. Supported inputs include National Center for Biotechnology Information gene and protein identifiers with automatic fetching of neighboring information, GenBank files and data extracted from the SEED database. Gene representations can be customized for many parameters including gene and genome names, colors and sizes. Gene attributes can be copied and pasted for rapid and user-friendly customization of homologous genes between species. In addition to Portable Network Graphics and Scalable Vector Graphics, produced representations can be exported as Tagged Image File Format or Encapsulated PostScript, formats that are standard for publication. Hands-on tutorials with real-life examples inspired by publications are available for training. Gene Graphics is freely available at https://katlabs.cc/genegraphics/ and source code is hosted at https://github.com/katlabs/genegraphics. katherinejh@ufl.edu or remizallot@ufl.edu. Supplementary data are available at Bioinformatics online.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... executions in securities priced at least one dollar in the Exchange's Automatic Execution Mode of order... securities priced at least one dollar in AutoEx. Third, SR-NSX-2012-06 amended the rebate tiers applicable to order executions in securities priced at least one dollar in the Exchange's Order Delivery Mode of order...
[Determination of Hair Shafts by InnoTyper® 21 Kit].
Li, F; Zhang, M; Wang, Y X; Shui, J J; Yan, M; Jin, X P; Zhu, X J
2017-12-01
To explore the application value of the InnoTyper® 21 kit in forensic practice, samples of hair shafts and saliva were collected from 8 unrelated individuals. Template DNA was extracted by the AutoMate Express™ forensic DNA automatic extraction system. DNA was amplified by the InnoTyper® 21 kit and the AmpFℓSTR™ Identifiler™ Plus kit, respectively, and the results were compared. After amplification by the InnoTyper® 21 kit, complete specific genotyping could be detected from the saliva samples, and the peak value of genotyping profiles of hair shafts without sheath cells was 57-1219 RFU; allelic deletion could sometimes be found. When amplified by the AmpFℓSTR™ Identifiler™ Plus kit, complete specific genotyping could be detected from the saliva samples, but no specific fragment was detected in hair shafts without sheath cells. The InnoTyper® 21 kit has certain application value in cases involving hair shafts without sheath cells. Copyright© by the Editorial Department of Journal of Forensic Medicine
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
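The heart of such a generator is symbolic differentiation of a strain energy function. A minimal sketch in SymPy (standing in for the paper's MATHEMATICA implementation, and using incompressible uniaxial neo-Hookean in place of the Fung-orthotropic model):

```python
import sympy as sp

# Strain energy differentiated symbolically, as a UMAT generator would do,
# so the stress response never has to be derived by hand. Incompressible
# uniaxial neo-Hookean is an illustrative stand-in for the Fung-orthotropic
# model used in the paper.
lam = sp.Symbol("lam", positive=True)   # principal stretch
c = sp.Symbol("c", positive=True)       # material constant

# Principal stretches (lam, lam**-0.5, lam**-0.5) for incompressible
# uniaxial tension give the first invariant I1 of the deformation tensor.
I1 = lam**2 + 2 / lam
W = c * (I1 - 3)                        # neo-Hookean strain energy

P = sp.diff(W, lam)                     # nominal (1st Piola-Kirchhoff) stress
sigma = sp.simplify(lam * P)            # Cauchy stress (incompressible uniaxial)
print(P)
print(sigma)
```

A generator would then print these expressions (plus the tangent stiffness, a second derivative) into UMAT-ready FORTRAN rather than displaying them.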
Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.
ERIC Educational Resources Information Center
Craven, Timothy C.
1982-01-01
Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)
Analysis of Air Traffic Track Data with the AutoBayes Synthesis System
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.
2010-01-01
The Next Generation Air Traffic System (NGATS) aims to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.
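The "solve symbolically where possible" idea behind AutoBayes can be shown in miniature: maximizing a Gaussian log-likelihood over its mean has a closed-form solution, which is the kind of estimation subproblem the system resolves without a numeric optimizer. A hedged SymPy sketch, not the actual system or its schemas:

```python
import sympy as sp

# Maximize a Gaussian log-likelihood over the mean mu symbolically --
# the kind of subproblem AutoBayes solves in closed form rather than
# handing to a numeric approximation algorithm. Three symbolic data
# points stand in for a radar track.
xs = sp.symbols("x0 x1 x2", real=True)
mu = sp.Symbol("mu", real=True)

# Log-likelihood up to additive and multiplicative constants.
loglik = -sp.Rational(1, 2) * sum((x - mu) ** 2 for x in xs)

# Stationarity condition d(loglik)/d(mu) = 0 solved in closed form:
# the sample mean.
mu_hat = sp.solve(sp.diff(loglik, mu), mu)[0]
print(mu_hat)
```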
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Y; Li, T; Yoo, S
2016-06-15
Purpose: To enable near-real-time (<20 sec) and interactive planning without compromising quality for whole breast RT treatment planning using tangential fields. Methods: Whole breast RT plans from 20 patients treated with single energy (SE, 6MV, 10 patients) or mixed energy (ME, 6/15MV, 10 patients) were randomly selected for model training. An additional 20 cases were used as the validation cohort. The planning process for a new case consists of three fully automated steps: 1. Energy Selection. A classification model automatically selects the energy level. To build the energy selection model, principal component analysis (PCA) was applied to the digitally reconstructed radiographs (DRRs) of training cases to extract the anatomy-energy relationship. 2. Fluence Estimation. Once energy is selected, a random forest (RF) model generates the initial fluence. This model summarizes the relationship between shape-based features of the patient anatomy and the output fluence. 3. Fluence Fine-tuning. This step balances the overall dose contribution throughout the whole breast tissue by automatically selecting reference points and applying centrality correction. Fine-tuning works at the beamlet level until the dose distribution meets clinical objectives. Prior to finalization, physicians can also make patient-specific trade-offs between target coverage and high-dose volumes. The proposed method was validated by comparing auto-plans with manually generated clinical plans using the Wilcoxon signed-rank test. Results: In 19/20 cases the model suggested the same energy combination as the clinical plans. The target volume coverage V100% was 78.1±4.7% for auto-plans and 79.3±4.8% for clinical plans (p=0.12). Volumes receiving 105% Rx were 69.2±78.0 cc for auto-plans compared to 83.9±87.2 cc for clinical plans (p=0.13). The mean V10Gy and V20Gy of the ipsilateral lung were 24.4±6.7% and 18.6±6.0% for auto-plans and 24.6±6.7% and 18.9±6.1% for clinical plans (p=0.04, <0.001). Total computational time for auto-plans was <20 s.
Conclusion: We developed an automated method that generates breast radiotherapy plans with accurate energy selection, similar target volume coverage, reduced hotspot volumes, and a significant reduction in planning time, allowing for near-real-time planning.
Finster, Kai Waldemar; Kjeldsen, Kasper Urup; Kube, Michael; Reinhardt, Richard; Mussmann, Marc; Amann, Rudolf; Schreiber, Lars
2013-04-15
Desulfocapsa sulfexigens SB164P1 (DSM 10523) belongs to the deltaproteobacterial family Desulfobulbaceae and is one of two validly described members of its genus. This strain was selected for genome sequencing because it is the first marine bacterium reported to thrive on the disproportionation of elemental sulfur, a process with an unresolved enzymatic pathway in which elemental sulfur serves both as electron donor and electron acceptor. Furthermore, in contrast to its phylogenetically closest relatives, which are dissimilatory sulfate-reducers, D. sulfexigens is unable to grow by sulfate reduction and appears metabolically specialized in growing by disproportionating elemental sulfur, sulfite or thiosulfate with CO2 as the sole carbon source. The genome of D. sulfexigens contains the set of genes that is required for nitrogen fixation. In an acetylene assay, it could be shown that the strain reduces acetylene to ethylene, which is indicative of N-fixation. The circular chromosome of D. sulfexigens SB164P1 comprises 3,986,761 bp and harbors 3,551 protein-coding genes, of which 78% have a predicted function based on auto-annotation. The chromosome furthermore encodes 46 tRNA genes and 3 rRNA operons. PMID:23961312
Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process
NASA Technical Reports Server (NTRS)
McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.
1999-01-01
This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.
Development and evaluation of an automated fall risk assessment system.
Lee, Ju Young; Jin, Yinji; Piao, Jinshi; Lee, Sun-Mi
2016-04-01
Fall risk assessment is the first step toward prevention, and a risk assessment tool with high validity should be used. This study aimed to develop and validate an automated fall risk assessment system (Auto-FallRAS) to assess fall risks based on electronic medical records (EMRs) without additional data collected or entered by nurses. This study was conducted in a 1335-bed university hospital in Seoul, South Korea. The Auto-FallRAS was developed using 4211 fall-related clinical data extracted from EMRs. Participants included fall patients and non-fall patients (868 and 3472 for the development study; 752 and 3008 for the validation study; and 58 and 232 for validation after clinical application, respectively). The system was evaluated for predictive validity and concurrent validity. The final 10 predictors were included in the logistic regression model for the risk-scoring algorithm. The results of the Auto-FallRAS were shown as high/moderate/low risk on the EMR screen. The predictive validity analyzed after clinical application of the Auto-FallRAS was as follows: sensitivity = 0.95, NPV = 0.97 and Youden index = 0.44. The validity of the Morse Fall Scale assessed by nurses was as follows: sensitivity = 0.68, NPV = 0.88 and Youden index = 0.28. This study found that the Auto-FallRAS results were better than the nurses' predictions. The advantage of the Auto-FallRAS is that it automatically analyzes information and shows patients' fall risk assessment results without requiring additional time from nurses. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
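Since the risk-scoring algorithm is a logistic regression over 10 predictors thresholded into high/moderate/low on screen, its shape can be sketched as follows. The predictors, coefficients, and probability cutoffs below are invented for illustration and are not the study's fitted model.

```python
import math

# Sketch of a logistic-regression risk score like the one behind
# Auto-FallRAS. Coefficients and cutoffs are hypothetical placeholders.
COEFS = {"age_over_75": 0.9, "sedative_use": 1.1, "prior_fall": 1.4}
INTERCEPT = -3.0

def fall_risk(patient: dict) -> str:
    """Map EMR-derived binary predictors to a high/moderate/low risk label."""
    z = INTERCEPT + sum(w * patient.get(k, 0) for k, w in COEFS.items())
    p = 1.0 / (1.0 + math.exp(-z))   # logistic probability of a fall
    if p >= 0.5:
        return "high"
    if p >= 0.2:
        return "moderate"
    return "low"

print(fall_risk({"age_over_75": 1, "sedative_use": 1, "prior_fall": 1}))
# -> "high" (z = 0.4, p ~ 0.60)
```

In the deployed system the predictors would be extracted automatically from the EMR each day, which is what removes the nurse's manual assessment step.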
Gache, Yannick; Pin, Didier; Gagnoux-Palacios, Laurent; Carozzo, Claude; Meneguzzi, Guerrino
2011-10-01
Recessive dystrophic epidermolysis bullosa (RDEB) is a severe skin blistering condition caused by mutations in the gene coding for collagen type VII. Genetically engineered RDEB dog keratinocytes were used to generate autologous epidermal sheets subsequently grafted on two RDEB dogs carrying a homozygous missense mutation in the col7a1 gene and expressing baseline amounts of the aberrant protein. Transplanted cells regenerated a differentiated and vascularized auto-renewing epidermis progressively repopulated by dendritic cells and melanocytes. No adverse immune reaction was detected in either dog. In dog 1, the grafted epidermis firmly adhered to the dermis throughout the 24-month follow-up, which correlated with efficient transduction (100%) of highly clonogenic epithelial cells and sustained transgene expression. In dog 2, less efficient (65%) transduction of primary keratinocytes resulted in a loss of the transplanted epidermis and graft blistering 5 months after transplantation. These data provide the proof of principle for ex vivo gene therapy of RDEB patients with missense mutations in collagen type VII by engraftment of the reconstructed epidermis, and demonstrate that highly efficient transduction of epidermal stem cells is crucial for successful gene therapy of inherited skin diseases in which correction of the genetic defect confers no major selective advantage in cell culture.
[Oral diseases in auto-immune polyendocrine syndrome type 1].
Proust-Lemoine, Emmanuelle; Guyot, Sylvie
2017-09-01
Auto-immune polyendocrine syndrome type 1 (APS1), also called Auto-immune Polyendocrinopathy Candidiasis Ectodermal Dystrophy (APECED), is a rare monogenic childhood-onset auto-immune disease. This autosomal recessive disorder is caused by mutations in the auto-immune regulator (AIRE) gene and leads to autoimmunity targeting peripheral tissues. There is wide variability in clinical phenotypes in patients with APS1, with auto-immune endocrine and non-endocrine disorders and chronic mucocutaneous candidiasis. These patients suffer from oral diseases such as dental enamel hypoplasia and candidiasis. Both are frequently described, and in recent series enamel hypoplasia and candidiasis are even the most frequent components of APS1, together with hypoparathyroidism. Both often occur during childhood (before 5 years of age for candidiasis, and before 15 years for enamel hypoplasia). Oral candidiasis recurs throughout life, can become resistant to azole antifungals after years of treatment, and can be carcinogenic, leading to severe oral squamous cell carcinoma. Oral components of APS1 should be diagnosed and rigorously treated. Dental enamel hypoplasia and/or recurrent oral candidiasis in association with auto-immune diseases in a young child should prompt APS1 diagnosis. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
A comparison of different methods to implement higher order derivatives of density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Dam, Hubertus J.J.
Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work where all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
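Of the three approaches compared, automatic differentiation can be illustrated compactly. Below is a minimal forward-mode automatic-differentiation sketch using dual numbers; the function f is a toy polynomial standing in for a density functional, not one of the functionals from the paper.

```python
# Minimal forward-mode automatic differentiation via dual numbers: propagating a
# (value, derivative) pair through arithmetic yields exact derivatives without
# manual or symbolic differentiation. The functional f below is illustrative only.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

    def __pow__(self, n):  # integer powers only, via the power rule
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.der)

def f(rho):
    # toy "functional": f(rho) = 2*rho**3 + rho, so f'(rho) = 6*rho**2 + 1
    return 2 * rho ** 3 + rho

y = f(Dual(1.5, 1.0))   # seed d(rho)/d(rho) = 1
print(y.der)            # 14.5, i.e. 6*1.5**2 + 1
```

Higher-order derivatives follow the same pattern by nesting dual numbers, which is essentially what general-purpose AD tools automate.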
Jian, Bo-Lin; Peng, Chao-Chung
2017-06-15
Due to the direct influence of night vision equipment availability on the safety of night-time aerial reconnaissance, maintenance needs to be carried out regularly. Unfortunately, some defects are not easy to observe or are not even detectable by human eyes. As a consequence, this study proposed a novel automatic defect detection system for aviator's night vision imaging systems AN/AVS-6(V)1 and AN/AVS-6(V)2. An auto-focusing process consisting of a sharpness calculation and a gradient-based variable step search method is applied to achieve an automatic detection system for honeycomb defects. This work also developed a test platform for sharpness measurement. It demonstrates that the honeycomb defects can be precisely recognized and the number of the defects can also be determined automatically during the inspection. Most importantly, the proposed approach significantly reduces the time consumption, as well as human assessment error during the night vision goggle inspection procedures.
NASA Astrophysics Data System (ADS)
Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei
2017-01-01
Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presented a new concept of atlas-based segmentation. Instead of using the complete volume of the target organs, only information along the organ contours from the atlas images was used to guide segmentation of the new image. In setting up the atlas-based library, we included not only the coordinates of contour points but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate-spline (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by physicians. A novel atlas-based segmentation technique, which incorporates neighborhood image features through the introduction of a narrow shell surrounding the target objects, was established.
Application of the technique to 30 liver cases suggested that it was capable of reliably segmenting liver cases from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing the automatic segmentation results with the manual delineation results. The Jaccard similarity metric between the automatically generated liver contours obtained by the proposed method and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method can achieve efficient automatic liver propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.
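The evaluation metric reported above, the Jaccard similarity, is simple to state; a minimal sketch on toy 2D binary masks is shown below (real use would be on 3D voxel masks, typically with NumPy).

```python
# Jaccard index |A ∩ B| / |A ∪ B| between an auto-segmented mask and a manually
# delineated mask, the 90%-96% metric reported in the abstract. Toy 2D masks here.

def jaccard(a, b):
    """Jaccard index for two binary masks given as nested lists of 0/1."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            inter += pa and pb
            union += pa or pb
    return inter / union

auto_mask   = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
manual_mask = [[0, 1, 1], [1, 1, 1], [0, 0, 0]]
print(jaccard(auto_mask, manual_mask))   # 4/5 = 0.8
```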
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which enable structural as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.
SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Stojadinovic, S; Jiang, S
Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity-histogram thresholding, super-voxel clustering, and level-set-based contour-evolution methods to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. Firstly, tumor sites are localized through 2D slice intensity-histogram scanning. Then, super-voxels are obtained by clustering the corresponding voxels in 3D with reference to similarity metrics composed of spatial distance and intensity difference. The combination of these two steps generates the initial contour surface. Finally, a localized-region active contour model is utilized to evolve the surface to achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients' data. The auto-segmentation results were quantitatively evaluated by comparing them to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise, and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity), and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients' data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm.
Evaluation on 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0.078, SSIM 0.999±0.001, and HD 5.926±6.141 mm. Conclusion: The developed automatic segmentation strategy, which yields accurate brain tumor delineation in the evaluation cases, is promising for application in SRS treatment planning.
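The primary metric above, the DICE coefficient, can be sketched in a few lines; sets of voxel indices stand in for the 3D tumor masks, which is an illustrative simplification of how a real implementation would store them.

```python
# Dice coefficient DC = 2|A ∩ B| / (|A| + |B|), the volume-overlap metric used to
# compare auto-segmented tumors against ground truth. Voxel-index sets are toy data.

def dice(a, b):
    """Dice coefficient between two sets of voxel coordinates."""
    return 2 * len(a & b) / (len(a) + len(b))

auto_voxels   = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}
manual_voxels = {(0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 1, 0)}
print(dice(auto_voxels, manual_voxels))   # 2*3/(4+4) = 0.75
```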
PCR Amplicon Prediction from Multiplex Degenerate Primer and Probe Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, S. N.
2013-08-08
Assessing primer specificity and predicting both desired and off-target amplification products is an essential step for robust PCR assay design. Code is described to predict potential polymerase chain reaction (PCR) amplicons in a large sequence database such as NCBI nt from either a singleplex or a large multiplexed set of primers, allowing degenerate primer and probe bases and target mismatches. The code annotates amplicons with gene information automatically downloaded from NCBI, and optionally it can predict whether there are also TaqMan/Luminex probe matches within predicted amplicons.
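The degenerate-base matching the tool allows can be sketched as follows. This is an illustrative re-implementation of the idea using the standard IUPAC ambiguity codes, not the described tool's actual code; for simplicity it allows zero target mismatches, whereas the tool also tolerates mismatches.

```python
# Degenerate-primer matching: each IUPAC ambiguity code expands to a set of bases,
# and a primer matches a target site if every position is compatible. Zero-mismatch
# sketch only; the described tool also supports target mismatches.

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def primer_matches(primer, site):
    """True if every site base is among the bases each primer code allows."""
    return len(primer) == len(site) and all(
        base in IUPAC[code] for code, base in zip(primer, site))

def find_binding_sites(primer, sequence):
    """Return all 0-based offsets where the degenerate primer matches exactly."""
    n = len(primer)
    return [i for i in range(len(sequence) - n + 1)
            if primer_matches(primer, sequence[i:i + n])]

print(find_binding_sites("GANTC", "TTGAATCAGACTC"))   # [2, 8]
```

A full amplicon predictor would pair such forward-strand hits with reverse-complement hits of a second primer within a plausible product-length window.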
NASA Astrophysics Data System (ADS)
Gu, L.
2017-12-01
In this study, we examine responses of sun-induced chlorophyll fluorescence to biological and environmental variations measured with the versatile Fluorescence Auto-Measurement Equipment (FAME). FAME was developed to automatically and continuously measure chlorophyll fluorescence (F) of a leaf, plant, or canopy in both laboratory and field environments, excited by either an artificial light source or sunlight. FAME is controlled by a datalogger and allows simultaneous measurements of environmental variables complementary to the F signals. A built-in communication system allows FAME to be monitored and have its data downloaded remotely. Radiance and irradiance calibrations can be done online. FAME has been applied in a variety of environments, allowing an investigation of biological and environmental controls on F emission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poliakov, Alexander; Couronne, Olivier
2002-11-04
Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying true orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome to identify regions of possible homology for a query sequence. These regions are postprocessed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step, conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.
Efficacy of dry-ice blasting in preventive maintenance of auto robotic assemblies
NASA Astrophysics Data System (ADS)
Baluch, Nazim; Mohtar, Shahimi; Abdullah, Che Sobry
2016-08-01
Welding robots are extensively applied in automotive assembly, and 'Spot Welding' is the most common welding application found in auto stamping assembly manufacturing. Every manufacturing process is subject to variations; with resistance welding, these include part fit-up, part thickness variations, misaligned electrodes, variations in coating materials or thickness, sealers, weld force variations, shunting, machine tooling degradation, and slag and spatter damage. All welding gun tips undergo wear, an elemental part of the process. Though adaptive resistance-welding control automatically compensates for gun-tip wear to keep production and quality at the required levels so that the welds remain reliable, the system cannot compensate for deterioration caused by slag and spatter on the part-holding fixtures, sensors, and gun tips. To cleanse welding robots of slag and spatter, dry-ice blasting has proven to be an effective remedy. This paper describes the spot welding process, analyses slag and spatter formation during robotic welding of stamping assemblies, and concludes that dry-ice blasting is a pressing and highly effective means of cleansing welding robots in auto stamping plant operations.
Multilevel Parallelization of AutoDock 4.2.
Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P
2011-04-28
Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
Adriani, Walter; Romano, Emilia; Pucci, Mariangela; Pascale, Esterina; Cerniglia, Luca; Cimino, Silvia; Tambelli, Renata; Curatolo, Paolo; Granstrem, Oleg; Maccarrone, Mauro; Laviola, Giovanni; D'Addario, Claudio
2018-02-01
In view of the need for easily accessible biomarkers, we evaluated in ADHD children the epigenetic status of the 5'-untranslated region (UTR) in the SLC6A3 gene, coding for human dopamine transporter (DAT). We analysed buccal swabs and sera from 30 children who met DSM-IV-TR criteria for ADHD, assigned to treatment according to severity. Methylation levels at six-selected CpG sites (among which, a CGGCGGCGG and a CGCG motif), alone or in combination with serum titers in auto-antibodies against dopamine transporter (DAT aAbs), were analysed for correlation with CGAS scores (by clinicians) and Conners' scales (by parents), collected at recruitment and after 6 weeks. In addition, we characterized the DAT genotype, i.e., the variable number tandem repeat (VNTR) polymorphisms at the 3'-UTR of the gene. DAT methylation levels were greatly reduced in ADHD patients compared to control, healthy children. Within patients carrying at least one DAT 9 allele (DAT 9/x), methylation at positions CpG2 and/or CpG6 correlated with recovery, as evident from delta-CGAS scores as well as delta Conners' scales ('inattentive' and 'hyperactive' subscales). Moreover, hypermethylation at CpG1 position denoted severity, specifically for those patients carrying a DAT 10/10 genotype. Intriguingly, high serum DAT-aAbs titers appeared to corroborate indications from high CpG1 versus high CpG2/CpG6 levels, likewise denoting severity versus recovery in DAT 10/10 versus 9/x patients, respectively. These profiles suggest that DAT 5'UTR epigenetics plus serum aAbs can serve as suitable biomarkers, to confirm ADHD diagnosis and/or to predict the efficacy of treatment.
High density growth of T7 expression strains with auto-induction option
Studier, F. William
2013-03-19
A method for promoting and suppressing auto-induction of transcription of a cloned gene 1 of bacteriophage T7 in cultures of bacterial cells grown batchwise is disclosed. The transcription is under the control of a promoter whose activity can be induced by an exogenous inducer whose ability to induce said promoter is dependent on the metabolic state of said bacterial cells.
NASA Astrophysics Data System (ADS)
Zhang, Shaojun; Xu, Xiping
2015-10-01
The 360-degree all-round-looking camera, whose characteristics make it suitable for automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. In order to ensure the stability and consistency of image processing results in mass production, it is necessary to make sure the centers of the image planes of different cameras are coincident, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of entering offsets manually both suffer from reliance on human eyes, inefficiency, and a large error distribution. In this paper, an approach to auto-calibration of the image plane of this camera is presented. The image from the 360-degree all-round-looking camera is ring-shaped, consisting of two concentric circles: the center of the image is a smaller circle and the outside is a bigger circle. The technique exploits exactly these characteristics. By recognizing the two circles through the Hough transform algorithm and calculating the center position, we obtain the accurate center of the image, that is, the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip through the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
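The circle-center detection step can be sketched with a toy Hough-transform vote. This is an illustrative pure-Python version for a single circle of known radius on synthetic points, not the paper's implementation, which works on real ring images and both concentric circles.

```python
# Toy Hough-transform vote for the centre of a circle of known radius: every edge
# point votes for all candidate centres at that radius, and the accumulator peak
# is the circle centre. Synthetic edge points stand in for a detected ring edge.

import math
from collections import Counter

def hough_center(edge_points, radius, steps=360):
    votes = Counter()
    for (x, y) in edge_points:
        for k in range(steps):
            t = 2 * math.pi * k / steps
            a = round(x - radius * math.cos(t))   # candidate centre cell
            b = round(y - radius * math.sin(t))
            votes[(a, b)] += 1
    return votes.most_common(1)[0][0]             # accumulator peak

# Synthetic circle of radius 20 centred at (50, 40).
pts = [(50 + 20 * math.cos(2 * math.pi * k / 90),
        40 + 20 * math.sin(2 * math.pi * k / 90)) for k in range(90)]
print(hough_center(pts, 20))
```

A production system would use an optimized implementation (e.g. a gradient-directed vote) and search over radii for both concentric circles.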
[Auto-immune disorders as a possible cause of neuropsychiatric syndromes].
Martinez-Martinez, P; Molenaar, P C; Losen, M; Hoffmann, C; Stevens, J; de Witte, L D; van Amelsvoort, T; van Os, J; Rutten, B P F
2015-01-01
Changes that occur in the behaviour of voltage-gated ion channels and ligand-gated receptor channels due to gene mutations or auto-immune attack are the cause of channelopathies in the central and peripheral nervous system. Although the relation between molecular channel defects and clinical symptoms has been explained in the case of many neuromuscular channelopathies, the pathophysiology of auto-immunity in neuropsychiatric syndromes is still unclear. To review recent findings regarding neuronal auto-immune reactions in severe neuropsychiatric syndromes. Using PubMed, we consulted the literature published between 1990 and August 2014 relating to the occurrence of auto-immune antibodies in severe and persistent neuropsychiatric syndromes. Auto-antibodies have only limited access to the central nervous system, but if they do enter the system they can, in some cases, cause disease. We discuss recent findings regarding the occurrence of auto-antibodies against ligand-activated receptor channels and potassium channels in neuropsychiatric and neurological syndromes, including schizophrenia and limbic encephalitis. Although the occurrence of several auto-antibodies in schizophrenia has been confirmed, there is still no proof of a causal relationship in the syndrome. We still have no evidence of the prevalence of auto-immunity in neuropsychiatric syndromes. The discovery that an antibody against an ion channel is associated with some neuropsychiatric disorders may mean that in future it will be possible to treat patients by means of immunosuppression, which could lead to an improvement in a patient's cognitive abilities.
Vlachakis, Georgios; Chatterjee, Sayantani; Arroyo-Mateos, Manuel; Wackers, Paul F. K.; Jonker, Martijs J.
2018-01-01
Increased ambient temperature is inhibitory to plant immunity, including auto-immunity. SNC1-dependent auto-immunity is, for example, fully suppressed at 28°C. We found that the Arabidopsis sumoylation mutant siz1 displays SNC1-dependent auto-immunity at 22°C but also at 28°C, which was EDS1 dependent at both temperatures. This siz1 auto-immune phenotype provided enhanced resistance to Pseudomonas at both temperatures. Moreover, the rosette size of siz1 recovered only weakly at 28°C, while this temperature fully rescues the growth defects of other SNC1-dependent auto-immune mutants. This thermo-insensitivity of siz1 correlated with a compromised thermosensory growth response, which was independent of the immune regulators PAD4 and SNC1. Our data reveal that this high-temperature-induced growth response strongly depends on COP1, while SIZ1 controls the amplitude of this growth response. This latter notion is supported by transcriptomics data, i.e. SIZ1 controls the amplitude and timing of high-temperature transcriptional changes, including a subset of the PIF4/BZR1 gene targets. Combined, our data signify that SIZ1 suppresses an SNC1-dependent resistance response at both normal and high temperatures. At the same time, SIZ1 amplifies the dark and high-temperature growth response, likely via COP1 and upstream of gene regulation by PIF4 and BZR1. PMID:29357355
The History of the AutoChemist®: From Vision to Reality.
Peterson, H E; Jungner, I
2014-05-22
This paper discusses the early history and development of a clinical analyser system in Sweden (AutoChemist, 1965). It highlights the importance of such a high-capacity system both for clinical use and health care screening. The device was developed to assure the quality of results and to automatically handle the orders, store the results in digital form for later statistical analyses, and distribute the results to the patients' physicians using the analyser's computer. The most important result of the construction of an analyser able to produce analytical results on a mass scale was the development of a mechanical multi-channel analyser for clinical laboratories that handled discrete-sample technology and could prevent carry-over into the next test samples while incorporating computer technology to improve the quality of test results. The AutoChemist could handle 135 samples per hour in an 8-hour shift and up to 24 possible analysis channels, resulting in 3,200 results per hour. Later versions would double this capacity. Some customers used the equipment 24 hours per day. With a capacity of 3,000 to 6,000 analyses per hour, pneumatically driven pipettes, special units for corrosive liquids or special activities, and an integrated computer, the AutoChemist system was unique and the largest of its kind for many years. Its successor, the AutoChemist PRISMA (PRogrammable Individually Selective Modular Analyzer), was smaller in size but had a higher capacity. Both analysers established new standards of operation for clinical laboratories and encouraged others to use new technologies for building new analysers.
Automated response matching for organic scintillation detector arrays
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.
2017-07-01
This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described, along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote and periodic system calibration where consistency across detectors' responses is critical.
Radio frequency tags systems to initiate system processing
NASA Astrophysics Data System (ADS)
Madsen, Harold O.; Madsen, David W.
1994-09-01
This paper describes the automatic identification technology which has been installed at Applied Magnetic Corp. MR fab. World class manufacturing requires technology exploitation. This system combines (1) FluoroTrac cassette and operator tracking, (2) CELLworks cell controller software tools, and (3) Auto-Soft Inc. software integration services. The combined system eliminates operator keystrokes and errors during normal processing within a semiconductor fab. The methods and benefits of this system are described.
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
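The agreement statistic used above, Cohen's kappa, corrects raw agreement for agreement expected by chance. A minimal sketch follows; the SOC-like code labels are made up for illustration and are not the study's data.

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement),
# the statistic used to compare OSCAR's automatic SOC codes with the expert coder.
# The toy label lists below are hypothetical.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal label frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

auto_codes   = ["2315", "1136", "2315", "3567", "1136", "2315"]
manual_codes = ["2315", "1136", "3567", "3567", "1136", "1136"]
print(round(cohens_kappa(auto_codes, manual_codes), 2))   # 0.52
```

Truncating codes to fewer digits before comparison, as in the abstract's 1-digit analysis, coarsens the categories and typically raises kappa.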
Muñoz-Organero, Mario; Davies, Richard; Mawson, Sue
2017-01-01
Insole pressure sensors capture the force distribution patterns during the stance phase while walking. By comparing patterns obtained from healthy individuals to patients suffering different medical conditions based on a given similarity measure, automatic impairment indexes can be computed in order to help in applications such as rehabilitation. This paper uses the data sensed from insole pressure sensors for a group of healthy controls to train an auto-encoder using patterns of stochastic distances in series of consecutive steps while walking at normal speeds. Two experiment groups are compared to the healthy control group: a group of patients suffering knee pain and a group of post-stroke survivors. The Mahalanobis distance is computed for every single step by each participant compared to the entire dataset sensed from healthy controls. The computed distances for consecutive steps are fed into the previously trained autoencoder and the average error is used to assess how close the walking segment is to the autogenerated model from healthy controls. The results show that automatic distortion indexes can be used to assess each participant as compared to normal patterns computed from healthy controls. The stochastic distances observed for the group of stroke survivors are bigger than those for the people with knee pain.
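The pipeline described above (per-step Mahalanobis distances to the healthy-control distribution, fed into an auto-encoder trained on healthy data, with reconstruction error as the impairment index) can be sketched roughly as follows. The data are synthetic, and a linear auto-encoder, whose optimum is equivalent to PCA, stands in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic per-step pressure features for healthy controls
healthy = rng.normal(0.0, 1.0, size=(200, 5))

def mahalanobis(steps, reference):
    """Mahalanobis distance of each step to the reference distribution."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = steps - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# series of distances for consecutive steps of the healthy group
d_healthy = mahalanobis(healthy, healthy)

def fit_linear_autoencoder(windows, k=2):
    """A linear auto-encoder trained on distance windows reduces to PCA."""
    mu = windows.mean(axis=0)
    _, _, vt = np.linalg.svd(windows - mu, full_matrices=False)
    return mu, vt[:k]              # encoder/decoder share the top-k components

def reconstruction_error(windows, mu, components):
    codes = (windows - mu) @ components.T       # encode
    recon = codes @ components + mu             # decode
    return np.mean((windows - recon) ** 2, axis=1)

win = d_healthy[:195].reshape(-1, 5)            # windows of 5 consecutive steps
mu, comps = fit_linear_autoencoder(win)
impairment_index = reconstruction_error(win, mu, comps).mean()
```

A walking segment from a patient would be windowed the same way; a larger mean reconstruction error marks it as farther from the healthy-control model.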
NASA Technical Reports Server (NTRS)
Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.
1989-01-01
The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
Lee, M-Y; Won, H-S; Jeon, E-J; Yoon, H C; Choi, J Y; Hong, S J; Kim, M-J
2014-06-01
To evaluate the reproducibility of measurement of the fetal left modified myocardial performance index (Mod-MPI) determined using a novel automated system. This was a prospective study of 116 ultrasound examinations from 110 normal singleton pregnancies at 12 + 1 to 37 + 1 weeks' gestation. Two experienced operators each measured the left Mod-MPI twice manually and twice automatically using the Auto Mod-MPI system. Intra- and interoperator reproducibility were assessed using intraclass correlation coefficients (ICCs) and the manual and automated measurements obtained by the more experienced operator were compared using Bland-Altman plots and ICCs. Both operators successfully measured the left Mod-MPI in all cases using the Auto Mod-MPI system. For both operators, intraoperator reproducibility was higher when performing automated measurements (ICC = 0.967 and 0.962 for Operators 1 and 2, respectively) than when performing manual measurements (ICC = 0.857 and 0.856 for Operators 1 and 2, respectively). Interoperator agreement was also better for automated than for manual measurements (ICC = 0.930 vs 0.723, respectively). There was good agreement between the automated and manual values measured by the more experienced operator. The Auto Mod-MPI system is a reliable technique for measuring fetal left Mod-MPI and demonstrates excellent reproducibility. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
Implementation of four layer automatic elevator controller
NASA Astrophysics Data System (ADS)
Prasad, B. K. V.; Kumar, P. Satish; Charles, B. S.; Srilakshmi, G.
2017-07-01
In this modern era, elevators have become an integral part of any commercial or public complex, facilitating the faster movement of people and luggage between floors. The lift control system is one of the key aspects of the electronic control modules used in the automotive field. Usually elevators are designed for a specific building, taking into account factors such as the size of the building, the number of persons travelling to each floor, and the expected periods of heavy usage. The lift system was designed with different control strategies. This implementation is based on an FPGA and could be used for any building with any number of floors, given the necessary inputs and outputs. The controller can be adapted to the required number of floors by merely changing a control variable in the HDL code. The approach is based on an algorithm that reduces the number of computations necessary by concentrating only on the relevant principles, which improves the speed and capability of the elevator group structure. The elevator controller is developed in Verilog HDL and implemented on Xilinx ISE 12.4 and a Spartan-3E FPGA.
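The floor-count parameterisation described above can be illustrated with a toy finite-state controller. This is a hypothetical Python sketch of the idea, not the authors' Verilog design:

```python
# minimal sketch of a floor-parameterised elevator controller,
# mirroring the HDL design where the floor count is one control variable
NUM_FLOORS = 4  # change this single constant to retarget the controller

class ElevatorController:
    def __init__(self, num_floors=NUM_FLOORS):
        self.num_floors = num_floors
        self.floor = 0            # current floor (0-indexed)
        self.requests = set()     # pending floor requests

    def request(self, floor):
        if 0 <= floor < self.num_floors:
            self.requests.add(floor)

    def step(self):
        """One clock tick: move one floor toward the nearest request."""
        if not self.requests:
            return "idle"
        target = min(self.requests, key=lambda f: abs(f - self.floor))
        if target == self.floor:
            self.requests.discard(target)
            return "open_doors"
        self.floor += 1 if target > self.floor else -1
        return "moving"

lift = ElevatorController()
lift.request(2)
states = [lift.step() for _ in range(3)]
print(states)   # -> ['moving', 'moving', 'open_doors']
```

The nearest-request scheduling policy here is an illustrative assumption; the paper's algorithm for prioritising requests across an elevator group is not specified in the abstract.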
The language of gene ontology: a Zipf's law analysis.
Kalankesh, Leila Ranandeh; Stevens, Robert; Brass, Andy
2012-06-07
Most major genome projects and sequence databases provide a GO annotation of their data, either automatically or through human annotators, creating a large corpus of data written in the language of GO. Texts written in natural language show a statistical power law behaviour, Zipf's law, the exponent of which can provide useful information on the nature of the language being used. We have therefore explored the hypothesis that collections of GO annotations will show similar statistical behaviours to natural language. Annotations from the Gene Ontology Annotation project were found to follow Zipf's law. Surprisingly, the measured power law exponents were consistently different between annotation captured using the three GO sub-ontologies in the corpora (function, process and component). On filtering the corpora using GO evidence codes we found that the value of the measured power law exponent responded in a predictable way as a function of the evidence codes used to support the annotation. Techniques from computational linguistics can provide new insights into the annotation process. GO annotations show similar statistical behaviours to those seen in natural language with measured exponents that provide a signal which correlates with the nature of the evidence codes used to support the annotations, suggesting that the measured exponent might provide a signal regarding the information content of the annotation.
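Measuring a Zipf exponent from a corpus of annotation-term frequencies amounts to a least-squares fit on a log-log rank-frequency plot. A minimal sketch on an idealised corpus (synthetic frequencies, not GO annotation data):

```python
import numpy as np

def zipf_exponent(frequencies):
    """Least-squares slope of log(frequency) vs log(rank)."""
    freqs = np.sort(np.asarray(frequencies, dtype=float))[::-1]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope   # report the positive Zipf exponent

# an exact Zipf corpus with exponent 1: frequency proportional to 1/rank
freqs = [1200 / r for r in range(1, 101)]
print(round(zipf_exponent(freqs), 3))   # -> 1.0
```

Filtering a corpus by GO evidence code and re-running the fit would show how the measured exponent responds to the kind of evidence supporting the annotations, as the study reports.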
Automatic mathematical modeling for real time simulation program (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1989-01-01
A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.
Diagnosis - Using automatic test equipment and artificial intelligence expert systems
NASA Astrophysics Data System (ADS)
Ramsey, J. E., Jr.
Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, the test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any unit under test. Converting the ATLAS code to LISP allows the expert system to direct any ATE that uses ATLAS. The constraint-propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).
Data transmission system with distributed microprocessors
Nambu, Shigeo
1985-01-01
A data transmission system having a common request line and a special request line in addition to a transmission line. The special request line has priority over the common request line. A plurality of node stations are multi-drop connected to the transmission line. Among the node stations, a supervising station is connected to the special request line and takes precedence over the other slave stations to become a master station. The master station collects data from the slave stations. A station connected to the common request line can assign the master control function to any station requesting it within a short period of time. Each station has an auto response control circuit. The master station automatically collects data via the auto response control circuit, independently of the microprocessors of the slave stations.
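The two-level arbitration described in the abstract, with the special request line overriding the common request line, can be sketched as follows. The lowest-station-id tiebreak among common-line requesters is an assumption for illustration, not part of the patent text:

```python
# toy sketch of two-level request-line arbitration: the supervising
# station's special request line always wins over the shared common line
def grant_master(special_request, common_requests):
    """Return the station granted master control for this cycle."""
    if special_request:
        return "supervising"                 # special line has priority
    if common_requests:
        return sorted(common_requests)[0]    # assumed lowest-id tiebreak
    return None                              # bus idle: no master this cycle

print(grant_master(True, {"node-3"}))            # -> supervising
print(grant_master(False, {"node-3", "node-1"})) # -> node-1
```

In the actual system each granted master would then poll the slaves' auto response control circuits to collect data before releasing the bus.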
Automatic finite element generators
NASA Technical Reports Server (NTRS)
Wang, P. S.
1984-01-01
The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
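As a small illustration of exact symbolic derivation of an element stiffness matrix (a far simpler element than the paper's system, using SymPy rather than the paper's tooling), consider a two-node 1-D bar element:

```python
import sympy as sp

# illustrative sketch: derive the stiffness matrix of a 1-D two-node
# bar element on [0, L] by exact symbolic integration
x, L, E, A = sp.symbols("x L E A", positive=True)

# linear shape functions and the strain-displacement row vector B = dN/dx
N = sp.Matrix([[1 - x / L, x / L]])
B = N.diff(x)

# element stiffness: K = integral of B^T * E * A * B over the element,
# integrated entry by entry
K = (E * A * (B.T * B)).applyfunc(lambda e: sp.integrate(e, (x, 0, L)))
# K works out to the classic closed form (E*A/L) * [[1, -1], [-1, 1]]
```

The symbolic result is exact, with no numerical quadrature error, which is the advantage the abstract claims; controlling expression growth becomes the hard part for realistic elements.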
Objectively Optimized Observation Direction System Providing Situational Awareness for a Sensor Web
NASA Astrophysics Data System (ADS)
Aulov, O.; Lary, D. J.
2010-12-01
There is great utility in having a flexible and automated objective observation direction system for the decadal survey missions and beyond. Such a system allows us to optimize the observations made by a suite of sensors to address specific goals, from long-term monitoring to rapid response. We have developed such a prototype using a network of communicating software elements to control a heterogeneous network of sensor systems, which can have multiple modes and flexible viewing geometries. Our system makes sensor systems intelligent and situationally aware. Together they form a sensor web of multiple sensors working together, capable of automated target selection: the sensors “know” where they are, what they are able to observe, and what targets, with what priorities, they should observe. The system is implemented in three components. The first component is a Sensor Web simulator, which describes the capabilities and locations of each sensor as a function of time, whether orbital, sub-orbital, or ground based. The simulator has been implemented using AGI's Satellite Tool Kit (STK). STK makes it easy to analyze and visualize optimal solutions for complex space scenarios, to perform complex analysis of land, sea, air, and space assets, and to share results in one integrated solution. The second component is a target scheduler, implemented with STK Scheduler. STK Scheduler is powered by a scheduling engine that finds better solutions in a shorter time than traditional heuristic algorithms; its global search algorithm is based on neural network technology capable of finding solutions to larger and more complex problems and maximizing the value of limited resources. The third component is a modeling and data assimilation system.
It provides situational awareness by supplying the time evolution of uncertainty and information-content metrics that tell us what we need to observe and the priority we should give to the observations. A prototype of this component was implemented with AutoChem. AutoChem is NASA-released software constituting an automatic code generation, symbolic differentiation, analysis, documentation, and web-site creation tool for atmospheric chemical modeling and data assimilation. Its model is explicit and uses an adaptive time-step, error-monitoring time integration scheme for stiff systems of equations. AutoChem was the first such model with the facility to perform 4D-Var data assimilation and Kalman filtering. The project developed a control system with three main accomplishments. First, fully multivariate observational and theoretical information with associated uncertainties was combined using a full Kalman filter data assimilation system. Second, an optimal distribution of the computations and of data queries was achieved by utilizing high-performance computing with load balancing and a set of automatically mirrored databases. Third, inter-instrument bias correction was performed using machine learning. The PI for this project was Dr. David Lary of the UMBC Joint Center for Earth Systems Technology at NASA/Goddard Space Flight Center.
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code. The engineers who write the code make mistakes, and producing a large body of code with high reliability takes considerable time. Computer-aided software engineering (CASE) tools are available that generate code automatically from inputs supplied through graphical interfaces; these tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others also allow checking of individual modules and of combined sets of modules. Considering NASA's requirement for reliability, a comparison with in-house manually generated code is needed. Furthermore, automatically generated code is reputed to execute as efficiently as the best manually generated code; in-house verification of these claims is warranted.
A deep learning method for lincRNA detection using auto-encoder algorithm.
Yu, Ning; Yu, Zeng; Pan, Yi
2017-12-06
RNA sequencing (RNA-seq) enables scientists to develop novel data-driven methods for discovering more unidentified lincRNAs. Meanwhile, knowledge-based technologies are experiencing a potential revolution ignited by new deep learning methods. By scanning newly found RNA-seq data sets, scientists have found that: (1) the expression of lincRNAs appears to be regulated, that is, relevance exists along the DNA sequences; (2) lincRNAs contain some conserved patterns/motifs tethered together by non-conserved regions. These two observations motivate the adoption of knowledge-based deep learning methods for lincRNA detection. Similar to coding-region transcription, non-coding regions are split at transcriptional sites; however, regulatory RNAs rather than messenger RNAs are generated. That is, the transcribed RNAs participate in the biological process as regulatory units instead of generating proteins. Identifying these transcriptional regions within non-coding regions is the first step towards lincRNA recognition. The auto-encoder method achieves 100% and 92.4% prediction accuracy on transcription sites over the putative data sets. The experimental results also show the excellent performance of the predictive deep neural network on the lincRNA data sets compared with a support vector machine and a traditional neural network. In addition, the method is validated on the newly discovered lincRNA data set, and one previously unreported transcription site is found by feeding the whole annotated sequences through the deep learning machine, indicating that the deep learning method has an extensive capability for lincRNA prediction. The transcriptional sequences of lincRNAs are collected from the annotated human DNA genome data. Subsequently, a two-layer deep neural network is developed for lincRNA detection, which adopts the auto-encoder algorithm and utilizes different encoding schemes to obtain the best performance over intergenic DNA sequence data. Driven by these newly annotated lincRNA data, deep learning methods based on the auto-encoder algorithm can exert their capability for knowledge learning in order to capture useful features and the information correlation along DNA genome sequences for lincRNA detection. To our knowledge, this is the first application of deep learning techniques to identifying lincRNA transcription sequences.
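One common way of feeding DNA sequences to such a network is one-hot encoding; a minimal sketch of the idea (illustrative only, not necessarily the paper's exact encoding scheme):

```python
import numpy as np

# one-hot encoding of a DNA string into the numeric input of a network
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a DNA string as a (len, 4) one-hot matrix."""
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq.upper()):
        mat[i, BASES[base]] = 1.0
    return mat

x = one_hot("ACGT")
print(x.shape)   # -> (4, 4)
```

The flattened matrix (or a window of it) would then be the input layer of the two-layer auto-encoder network described above.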
Worldwide Report, Telecommunications Policy, Research and Development, No. 281.
1983-08-02
communications line. This line will automatically link Cotonou with Ouagadougou without having to pass through Paris. The agreement was signed this morning in... finished parts, e.g. auto and motorcycle assembly plants, which most of the time are wrongly referred to as "industries". Unlike Brazil, Nigeria... tends to think that the machine tools factory will end up as these assembly plants of imported finished parts unless the government sets up other
Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Yu, Q.; Helmholz, P.; Belton, D.; West, G.
2014-04-01
The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammars and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
Hoermann, Astrid; Cicin-Sain, Damjan; Jaeger, Johannes
2016-03-15
Understanding eukaryotic transcriptional regulation and its role in development and pattern formation is one of the big challenges in biology today. Most attempts at tackling this problem either focus on the molecular details of transcription factor binding, or aim at genome-wide prediction of expression patterns from sequence through bioinformatics and mathematical modelling. Here we bridge the gap between these two complementary approaches by providing an integrative model of cis-regulatory elements governing the expression of the gap gene giant (gt) in the blastoderm embryo of Drosophila melanogaster. We use a reverse-engineering method, where mathematical models are fit to quantitative spatio-temporal reporter gene expression data to infer the regulatory mechanisms underlying gt expression in its anterior and posterior domains. These models are validated through prediction of gene expression in mutant backgrounds. A detailed analysis of our data and models reveals that gt is regulated by domain-specific CREs at early stages, while a late element drives expression in both the anterior and the posterior domains. Initial gt expression depends exclusively on inputs from maternal factors. Later, gap gene cross-repression and gt auto-activation become increasingly important. We show that auto-regulation creates a positive feedback, which mediates the transition from early to late stages of regulation. We confirm the existence and role of gt auto-activation through targeted mutagenesis of Gt transcription factor binding sites. In summary, our analysis provides a comprehensive picture of spatio-temporal gene regulation by different interacting enhancer elements for an important developmental regulator. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Nishimura, Ken; Ohtaka, Manami; Takada, Hitomi; Kurisaki, Akira; Tran, Nhi Vo Kieu; Tran, Yen Thi Hai; Hisatake, Koji; Sano, Masayuki; Nakanishi, Mahito
2017-08-01
Transgene-free induced pluripotent stem cells (iPSCs) are valuable for both basic research and potential clinical applications. We previously reported that a replication-defective and persistent Sendai virus (SeVdp) vector harboring four reprogramming factors (SeVdp-iPS) can efficiently induce generation of transgene-free iPSCs. This vector can express all four factors stably and simultaneously without chromosomal integration and can be eliminated completely from reprogrammed cells by suppressing vector-derived RNA-dependent RNA polymerase. Here, we describe an improved SeVdp-iPS vector (SeVdp(KOSM)302L) that is automatically erased in response to microRNA-302 (miR-302), uniquely expressed in pluripotent stem cells (PSCs). Gene expression and genome replication of the SeVdp-302L vector, which contains miRNA-302a target sequences at the 3' untranslated region of L mRNA, are strongly suppressed in PSCs. Consequently, SeVdp(KOSM)302L induces expression of reprogramming factors in somatic cells, while it is automatically erased from cells successfully reprogrammed to express miR-302. As this vector can reprogram somatic cells into transgene-free iPSCs without the aid of exogenous short interfering RNA (siRNA), the results we present here demonstrate that this vector may become an invaluable tool for the generation of human iPSCs for future clinical applications. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Formally specifying the logic of an automatic guidance controller
NASA Technical Reports Server (NTRS)
Guaspari, David
1990-01-01
The following topics are covered in viewgraph form: (1) the Penelope Project; (2) the logic of an experimental automatic guidance control system for a 737; (3) Larch/Ada specification; (4) some failures of informal description; (5) description of mode changes caused by switches; (6) intuitive description of window status (chosen vs. current); (7) design of the code; (8) and specifying the code.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
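A toy stop-and-wait ARQ exchange with checksum-based error detection can illustrate the basic scheme: the receiver verifies each frame and requests retransmission on failure. CRC-32 stands in here for a properly chosen error-detecting block code, and the single corrupted first transmission is a scripted assumption:

```python
import zlib

# frames carry a CRC-32 checksum; the receiver NAKs on a failed check
def send_frame(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive_frame(frame: bytes):
    payload, crc = frame[:-4], frame[-4:]
    ok = zlib.crc32(payload).to_bytes(4, "big") == crc
    return (payload if ok else None), ("ACK" if ok else "NAK")

def noisy_channel(frame: bytes, flip: bool) -> bytes:
    if not flip:
        return frame
    corrupted = bytearray(frame)
    corrupted[0] ^= 0xFF          # flip all bits of the first byte
    return bytes(corrupted)

# first attempt is corrupted -> NAK -> retransmit until ACK
attempts, data = 0, b"telemetry"
while True:
    attempts += 1
    frame = noisy_channel(send_frame(data), flip=(attempts == 1))
    payload, reply = receive_frame(frame)
    if reply == "ACK":
        break
print(attempts, payload)   # -> 2 b'telemetry'
```

Hybrid ARQ schemes, surveyed in the paper, additionally use forward error correction so that many corrupted frames can be repaired without a retransmission.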
Nakakura, Shunsuke; Mori, Etsuko; Nagatomi, Nozomi; Tabuchi, Hitoshi; Kiuchi, Yoshiaki
2012-07-01
To evaluate the congruity of anterior chamber depth (ACD) measurements using 4 devices. Saneikai Tsukazaki Hospital, Himeji City, Japan. Comparative case series. In 1 eye of 42 healthy participants, the ACD was measured by 3-dimensional corneal and anterior segment optical coherence tomography (CAS-OCT), partial coherence interferometry (PCI), Scheimpflug imaging, and ultrasound biomicroscopy (UBM). The differences between the measurements were evaluated by 2-way analysis of variance and post hoc analysis. Agreement between the measurements was evaluated using Bland-Altman analysis. To evaluate the true ACD using PCI, the automatically calculated ACD minus the central corneal thickness measured by CAS-OCT was defined as PCI true. Two ACD measurements were also taken with CAS-OCT. The mean ACD was 3.72 mm ± 0.23 (SD) (PCI), 3.18 ± 0.23 mm (PCI true), 3.24 ± 0.25 mm (Scheimpflug), 3.03 ± 0.25 mm (UBM), 3.14 ± 0.24 mm (CAS-OCT auto), and 3.12 ± 0.24 mm (CAS-OCT manual). A significant difference was observed between PCI biometry, Scheimpflug imaging, and UBM measurements and the other methods. Post hoc analysis showed no significant differences between PCI true and CAS-OCT auto or between CAS-OCT auto and CAS-OCT manual. Strong correlations were observed between all measurements; however, Bland-Altman analysis showed good agreement only between PCI true and Scheimpflug imaging and between CAS-OCT auto and CAS OCT manual. The ACD measurements obtained from PCI biometry, Scheimpflug imaging, CAS-OCT, and UBM were significantly different and not interchangeable except for PCI true and CAS-OCT auto and CAS-OCT auto and CAS-OCT manual. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Shi, Hai-Bo; Cheng, Lei; Nakayama, Meiho; Kakazu, Yasuhiro; Yin, Min; Miyoshi, Akira; Komune, Shizuo
2005-09-01
Automatic continuous positive airway pressure (auto-CPAP) machines differ mainly in the algorithms used for respiratory event detection and pressure control. Auto-CPAP machines operated by newer algorithms are expected to outperform earlier ones in the treatment of obstructive sleep apnea syndrome (OSAS). The purpose of this study was to compare the therapeutic characteristics of two different auto-CPAP devices, the third-generation flow-based (f-APAP) and the second-generation vibration-based (v-APAP) machines, during the first night of OSAS treatment. We retrospectively reviewed the polysomnography (PSG) recordings of 43 OSAS patients who initially underwent an overnight diagnostic PSG to confirm the disease and afterwards received a first night of auto-CPAP treatment, using either the f-APAP (n=22) or the v-APAP (n=21) device, under a further PSG evaluation. A residual apnea/hypopnea index above 5 persisted in 13.6% and 61.9% of patients during f-APAP and v-APAP application, respectively (P<0.005). The f-APAP was more effective than the v-APAP in reducing the apnea/hypopnea index (P=0.003), hypopnea index (P=0.023) and apnea index (P=0.007), improving the lowest oxygen saturation index (P=0.007) and shortening stage 1 sleep (P=0.016). However, the f-APAP was less effective than the v-APAP in reducing the arousal/awakening index (P=0.02). These findings suggest that the f-APAP works better than the v-APAP in abolishing breathing abnormalities in the treatment of OSAS; however, the f-APAP device might still have some limitations in clinical application.
NASA Astrophysics Data System (ADS)
Vastaranta, Mikko; Kankare, Ville; Holopainen, Markus; Yu, Xiaowei; Hyyppä, Juha; Hyyppä, Hannu
2012-01-01
The two main approaches to deriving forest variables from laser-scanning data are the statistical area-based approach (ABA) and individual tree detection (ITD). With ITD it is feasible to acquire single-tree information, as in field measurements. Here, ITD was used for measuring training data for the ABA. In addition to automatic ITD (ITD auto), we tested a combination of ITD auto and visual interpretation (ITD visual). ITD visual had two stages: in the first, ITD auto was carried out and in the second, the results of the ITD auto were visually corrected by interpreting three-dimensional laser point clouds. The field data comprised 509 circular plots (r = 10 m) that were divided equally for testing and training. ITD-derived forest variables were used for training the ABA and the accuracies of the k-most similar neighbor (k-MSN) imputations were evaluated and compared with the ABA trained with traditional measurements. The root-mean-squared error (RMSE) in the mean volume was 24.8%, 25.9%, and 27.2% with the ABA trained with field measurements, ITD auto, and ITD visual, respectively. When ITD methods were applied in acquiring training data, the mean volume, basal area, and basal area-weighted mean diameter were underestimated in the ABA by 2.7-9.2%. This project constituted a pilot study for using ITD measurements as training data for the ABA. Further studies are needed to reduce the bias and to determine the accuracy obtained in imputation of species-specific variables. The method could be applied in areas with sparse road networks or when the costs of fieldwork must be minimized.
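A rough sketch of nearest-neighbour imputation in the spirit of k-MSN, on synthetic plot data: a target plot's volume is predicted as the mean volume of its k nearest training plots in laser-metric feature space. True k-MSN weights that feature space using canonical correlations, which is omitted here for brevity:

```python
import numpy as np

def knn_impute(train_X, train_y, target_X, k=3):
    """Predict each target's response as the mean of its k nearest neighbours."""
    preds = []
    for x in target_X:
        d = np.linalg.norm(train_X - x, axis=1)   # plain Euclidean distance
        nearest = np.argsort(d)[:k]
        preds.append(train_y[nearest].mean())
    return np.array(preds)

rng = np.random.default_rng(1)
train_X = rng.uniform(0, 1, size=(50, 4))   # synthetic laser metrics per plot
train_y = 100 * train_X[:, 0] + 50          # synthetic plot volumes
pred = knn_impute(train_X, train_y, train_X[:5], k=1)
print(np.allclose(pred, train_y[:5]))   # -> True (each plot is its own neighbour)
```

Training this imputation on ITD-derived rather than field-measured plot variables is exactly the substitution the study evaluates.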
Bidirectional automatic release of reserve for low voltage network made with low capacity PLCs
NASA Astrophysics Data System (ADS)
Popa, I.; Popa, G. N.; Diniş, C. M.; Deaconu, S. I.
2018-01-01
The article presents the design of a bidirectional automatic release of reserve built on two types of low-capacity programmable logic controllers: the PS-3 from Klöckner-Moeller and the Zelio from Schneider. It analyses the electronic timing circuits that can be used to implement the bidirectional automatic release of reserve: a time-on delay circuit and two types of time-off delay circuit. The paper presents the timing code sequences for the PS-3 PLC, the logical functions for the bidirectional automatic release of reserve, the classical control electrical diagram (with contacts, relays, and time relays), the electronic control diagram (with logic gates and timing circuits), the code (in IL language) for the PS-3 PLC, and the code (in FBD language) for the Zelio PLC. A comparative analysis of the use of the two types of PLC is carried out, and the advantages of using PLCs are presented.
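The two timing behaviours analysed above can be modelled behaviourally on a discrete scan-cycle clock. This is an illustrative sketch of on-delay and off-delay timer semantics, not the IL or FBD code from the paper:

```python
# behavioural sketch of two PLC timing functions on a scan-cycle clock
def on_delay(input_trace, delay):
    """Output turns on only after the input has been on for more than `delay` scans."""
    out, on_for = [], 0
    for x in input_trace:
        on_for = on_for + 1 if x else 0
        out.append(on_for > delay)
    return out

def off_delay(input_trace, delay):
    """Output stays on for `delay` scans after the input turns off."""
    out, off_for = [], 10**9          # start "long since off"
    for x in input_trace:
        off_for = 0 if x else off_for + 1
        out.append(bool(x) or off_for <= delay)
    return out

trace = [1, 1, 1, 1, 0, 0, 0]
print(on_delay(trace, 2))    # -> [False, False, True, True, False, False, False]
print(off_delay(trace, 2))   # -> [True, True, True, True, True, True, False]
```

In an automatic release of reserve, an on-delay confirms that the main supply has genuinely failed before switching, while an off-delay rides through brief dips without transferring to the reserve.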
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios Velazquez, E; Meier, R; Dunn, W
Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman’s correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65-0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60), both of which could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis, compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.
Downing, N Lance; Adler-Milstein, Julia; Palma, Jonathan P; Lane, Steven; Eisenberg, Matthew; Sharp, Christopher; Longhurst, Christopher A
2017-01-01
Provider organizations increasingly have the ability to exchange patient health information electronically. Organizational health information exchange (HIE) policy decisions can impact the extent to which external information is readily available to providers, but this relationship has not been well studied. Our objective was to examine the relationship between electronic exchange of patient health information across organizations and organizational HIE policy decisions. We focused on 2 key decisions: whether to automatically search for information from other organizations and whether to require HIE-specific patient consent. We conducted a retrospective time series analysis of the effect of automatic querying and the patient consent requirement on the monthly volume of clinical summaries exchanged. We could not assess degree of use or usefulness of summaries, organizational decision-making processes, or generalizability to other vendors. Between 2013 and 2015, clinical summary exchange volume increased by 1349% across 11 organizations. Nine of the 11 systems were set up to enable auto-querying, and auto-querying was associated with a significant increase in the monthly rate of exchange (P = .006 for change in trend). Seven of the 11 organizations did not require patient consent specifically for HIE, and these organizations experienced a greater increase in volume of exchange over time compared to organizations that required consent. Automatic querying and limited consent requirements are organizational HIE policy decisions that impact the volume of exchange, and ultimately the information available to providers to support optimal care. Future efforts to ensure effective HIE may need to explicitly address these factors. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Fisher, Robert S; Afra, Pegah; Macken, Micheal; Minecan, Daniela N; Bagić, Anto; Benbadis, Selim R; Helmers, Sandra L; Sinha, Saurabh R; Slater, Jeremy; Treiman, David; Begnaud, Jason; Raman, Pradheep; Najimipour, Bita
2016-02-01
The Automatic Stimulation Mode (AutoStim) feature of the Model 106 Vagus Nerve Stimulation (VNS) Therapy System stimulates the left vagus nerve on detecting tachycardia. This study evaluates the performance and safety of the AutoStim feature during a 3-5-day Epilepsy Monitoring Unit (EMU) stay, and long-term clinical outcomes of the device stimulating in all modes. The E-37 protocol (NCT01846741) was a prospective, unblinded, U.S. multisite study of the AspireSR® in subjects with drug-resistant partial onset seizures and a history of ictal tachycardia. VNS Normal and Magnet Mode stimulation were present at all times except during the EMU stay. Outpatient visits at 3, 6, and 12 months tracked seizure frequency, severity, quality of life, and adverse events. Twenty implanted subjects (ages 21-69) experienced 89 seizures in the EMU. 28/38 (73.7%) of complex partial and secondarily generalized seizures exhibited a ≥20% increase in heart rate. 31/89 (34.8%) of seizures were treated by Automatic Stimulation on detection; 19/31 (61.3%) of these seizures ended during the stimulation, with a median time from stimulation onset to seizure end of 35 sec. Mean duty cycle at six months increased from 11% to 16%. At 12 months, quality of life and seizure severity scores improved, and the responder rate was 50%. Common adverse events were dysphonia (n = 7), convulsion (n = 6), and oropharyngeal pain (n = 3). The Model 106 performed as intended in the study population, was well tolerated, and was associated with clinical improvement from baseline. The study design did not allow determination of which factors were responsible for the improvements. © 2015 The Authors. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1990-01-01
The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: the manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
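The "problem specification in, simulation code out" idea can be illustrated with a toy generator; every name below is invented for illustration and is not AMPS's actual interface:

```python
# Toy automatic code generator: a declarative model of a serial
# manufacturing line is turned into runnable simulation source.
# SPEC, generate() and total_cycle_time are illustrative names only.

SPEC = {"stations": [("drill", 2.0), ("lathe", 3.5), ("paint", 1.0)]}

def generate(spec):
    """Emit Python source computing total processing time for n parts."""
    lines = ["def total_cycle_time(n_parts):",
             "    t = 0.0"]
    for name, minutes in spec["stations"]:
        lines.append(f"    t += {minutes} * n_parts  # station: {name}")
    lines.append("    return t")
    return "\n".join(lines)

src = generate(SPEC)        # the "written" simulation code, as text
namespace = {}
exec(src, namespace)        # compile and load the generated code
result = namespace["total_cycle_time"](10)  # 10 parts through the line
```

The modeler edits only the declarative SPEC; the generator, not the modeler, is responsible for producing syntactically correct simulation code, which is the productivity argument made in the abstract.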
Boldogköi, Zsolt
2012-01-01
The regulation of gene expression is essential for normal functioning of biological systems in every form of life. Gene expression is primarily controlled at the level of transcription, especially at the phase of initiation. Non-coding RNAs are one of the major players at every level of genetic regulation, including the control of chromatin organization, transcription, various post-transcriptional processes, and translation. In this study, the Transcriptional Interference Network (TIN) hypothesis was put forward in an attempt to explain the global expression of antisense RNAs and the overall occurrence of tandem gene clusters in the genomes of various biological systems ranging from viruses to mammalian cells. The TIN hypothesis suggests the existence of a novel layer of genetic regulation, based on the interactions between the transcriptional machineries of neighboring genes at their overlapping regions, which are assumed to play a fundamental role in coordinating gene expression within a cluster of functionally linked genes. It is claimed that the transcriptional overlaps between adjacent genes are much more widespread in genomes than is thought today. The Waterfall model of the TIN hypothesis postulates a unidirectional effect of upstream genes on the transcription of downstream genes within a cluster of tandemly arrayed genes, while the Seesaw model proposes a mutual interdependence of gene expression between the oppositely oriented genes. The TIN represents an auto-regulatory system with an exquisitely timed and highly synchronized cascade of gene expression in functionally linked genes located in close physical proximity to each other. In this study, we focused on herpesviruses. The reason for this lies in the compressed nature of viral genes, which allows a tight regulation and an easier investigation of the transcriptional interactions between genes. However, I believe that the same or similar principles can be applied to cellular organisms too. 
PMID:22783276
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dengwang; Liu, Li; Kapp, Daniel S.
2015-06-15
Purpose: To facilitate current automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up an atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal-appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was automatically constructed both inside and outside of the liver contours, with the regional thickness determined by the distance between two vertices alongside the contour. The common image features within the narrow shell between a new case and a library case were first selected by a Speeded-Up Robust Features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level-set-based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) was found to be 92.43% ± 2.14%.
Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), Research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and Research funding from Jinan City (No.201401221 and No.20120109).
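The abstract reports a volumetric overlap percentage (VOP) without defining it; assuming the common intersection-over-union form, it can be computed over voxel sets as:

```python
# Volumetric overlap between an auto-segmented and a manual mask,
# with masks represented as sets of voxel coordinates.
# Assumption: VOP here means |A ∩ M| / |A ∪ M| × 100 (Jaccard index);
# some papers instead use the Dice coefficient.

def volumetric_overlap(auto_mask, manual_mask):
    a, m = set(auto_mask), set(manual_mask)
    return 100.0 * len(a & m) / len(a | m)
```

With 3D segmentations the sets would hold (x, y, z) voxel indices; the metric is 100% only when the two delineations agree voxel for voxel.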
Theory and simulation design of a type of auto-self-protecting optical switches
NASA Astrophysics Data System (ADS)
Li, Binhong; Peng, Songcun
1990-06-01
As the use of lasers in the military and in the civilian economy increases with each passing day, it is often necessary for the human eye or sensitive instruments to observe weak lasers, such as the return waves of laser radar and laser communications signals; but it is also necessary to provide protection against damage to the eye from the strong lasers of enemy laser weapons. For this reason, it is necessary to have a kind of automatic optical self-protecting switch. Based upon a study of the transmitting and scattering characteristics of multilayer dielectric optical waveguides, a practical computer program is set up for designing a type of auto-self-protecting optical switch with a computer model by using the nonlinear property of dielectric layers and the plasma behavior of metal substrates. This technique can be used to protect the human eye and sensitive detectors from damage caused by strong laser beams.
Auto-tuning system for NMR probe with LabView
NASA Astrophysics Data System (ADS)
Quen, Carmen; Mateo, Olivia; Bernal, Oscar
2013-03-01
Typical manual NMR-tuning method is not suitable for broadband spectra spanning several megahertz linewidths. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present a design of an auto-tuning system using graphic programming language, LabVIEW, to minimize these problems. The program is designed to analyze the detected power signal of an antenna near the NMR probe and use this analysis to automatically tune the sample coil to match the impedance of the spectrometer (50 Ω). The tuning capacitors of the probe are controlled by a stepper motor through a LabVIEW/computer interface. Our program calculates the area of the power signal as an indicator to control the motor so disconnecting the coil to tune it through a network analyzer is unnecessary. Work supported by NSF-DMR 1105380
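The control loop described, stepping the tuning capacitors to drive down the detected power-signal area, can be sketched as a simple hill-descent search; the function and parameter names below are invented and this is not the LabVIEW implementation:

```python
# Sketch of an auto-tuning loop: step the capacitor motor in whichever
# direction reduces the reflected-power "area"; when neither direction
# helps, halve the step size and retry until the minimum step is reached.

def autotune(reflected_area, position=0, step=8, min_step=1):
    """`reflected_area(pos)` returns the detected power-signal area at a
    capacitor (stepper motor) position; returns the minimizing position."""
    best = reflected_area(position)
    while step >= min_step:
        for direction in (+1, -1):
            while True:
                trial = position + direction * step
                area = reflected_area(trial)
                if area < best:
                    position, best = trial, area  # keep moving this way
                else:
                    break
        step //= 2  # refine the search
    return position

# Toy stand-in for the antenna measurement: quadratic dip at position 37.
pos = autotune(lambda p: (p - 37) ** 2)
```

In the real system the "measurement" is the analyzed area of the detected power signal and the "position" is a stepper-motor count driving the tuning capacitor, so no network analyzer needs to be connected.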
Erdemir, A; Eldeniz, A U; Ari, H; Belli, S; Esener, T
2007-05-01
To determine the influence of various irrigating solutions on the accuracy of the electronic apex locator facility in the Tri Auto ZX handpiece. One hundred and forty teeth with single canals and mature apices, scheduled for extraction for either periodontal or prosthetic reasons in 76 patients, were used. Following informed written consent, local anaesthesia was administered, access cavities were prepared and pulp tissue removed. The teeth were then randomly divided into seven groups according to the irrigating solutions used. The root canal length measurements were completed using the Tri Auto ZX handpiece with automatic reverse function in the presence of one or other of the following solutions: 0.9% saline, 2.5% NaOCl, 3% H2O2, 0.2% chlorhexidine, 17% EDTA, Ultracaine D-S or in the absence of an irrigating solution (control). Files were immobilized in the access cavity with composite resin. After extraction, the apical regions of the teeth were exposed and the file tips examined under a stereomicroscope. Distances between the file tips and the apical constriction were measured (mm) and analysed using a one-way ANOVA and post hoc Tukey test. Mean distances from the apical constriction to the file tip were longer in the 0.9% saline group (P<0.05). There was no statistically significant difference in file tip position between the other solutions. Tri Auto ZX gave reliable results with all irrigating solutions apart from in the presence of 0.9% saline.
[Comparisons of manual and automatic refractometry with subjective results].
Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C
2006-11-01
Refractometry is very important in everyday clinical practice. The aim of this study is to compare the precision of three objective methods of refractometry with subjective dioptometry (Phoropter). The objective methods with the smallest deviation from the subjective refractometry results are evaluated. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock) and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder and axis) of each objective method were compared to the results of the subjective method. The examination was carried out on 178 eyes, which were divided into 3 age-related groups: 6 - 12 years (103 eyes), 13 - 18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made in cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000. Both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three, and retinoscopy in four, of the nine comparisons. The Auto Refractometer provides measurements with the smallest deviation compared to the subjective method. It has to be taken into account that the measurements for the sphere have an average deviation of + 0.2 dpt. In comparison to retinoscopy, the examination of children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than the six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
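The recall and precision figures quoted above follow the standard set-based definitions; for one sentence's system-generated versus reference code sets they reduce to:

```python
# Standard recall/precision over code sets:
# recall    = |system ∩ reference| / |reference|  (how much was found)
# precision = |system ∩ reference| / |system|     (how much found is correct)

def recall_precision(system_codes, reference_codes):
    s, r = set(system_codes), set(reference_codes)
    tp = len(s & r)  # true positives: codes both agree on
    return tp / len(r), tp / len(s)
```

In the study these counts are aggregated over the 150-sentence test sets rather than per sentence, but the definitions are the same.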
The zebrafish reference genome sequence and its relationship to the human genome.
Howe, Kerstin; Clark, Matthew D; Torroja, Carlos F; Torrance, James; Berthelot, Camille; Muffato, Matthieu; Collins, John E; Humphray, Sean; McLaren, Karen; Matthews, Lucy; McLaren, Stuart; Sealy, Ian; Caccamo, Mario; Churcher, Carol; Scott, Carol; Barrett, Jeffrey C; Koch, Romke; Rauch, Gerd-Jörg; White, Simon; Chow, William; Kilian, Britt; Quintais, Leonor T; Guerra-Assunção, José A; Zhou, Yi; Gu, Yong; Yen, Jennifer; Vogel, Jan-Hinnerk; Eyre, Tina; Redmond, Seth; Banerjee, Ruby; Chi, Jianxiang; Fu, Beiyuan; Langley, Elizabeth; Maguire, Sean F; Laird, Gavin K; Lloyd, David; Kenyon, Emma; Donaldson, Sarah; Sehra, Harminder; Almeida-King, Jeff; Loveland, Jane; Trevanion, Stephen; Jones, Matt; Quail, Mike; Willey, Dave; Hunt, Adrienne; Burton, John; Sims, Sarah; McLay, Kirsten; Plumb, Bob; Davis, Joy; Clee, Chris; Oliver, Karen; Clark, Richard; Riddle, Clare; Elliot, David; Eliott, David; Threadgold, Glen; Harden, Glenn; Ware, Darren; Begum, Sharmin; Mortimore, Beverley; Mortimer, Beverly; Kerry, Giselle; Heath, Paul; Phillimore, Benjamin; Tracey, Alan; Corby, Nicole; Dunn, Matthew; Johnson, Christopher; Wood, Jonathan; Clark, Susan; Pelan, Sarah; Griffiths, Guy; Smith, Michelle; Glithero, Rebecca; Howden, Philip; Barker, Nicholas; Lloyd, Christine; Stevens, Christopher; Harley, Joanna; Holt, Karen; Panagiotidis, Georgios; Lovell, Jamieson; Beasley, Helen; Henderson, Carl; Gordon, Daria; Auger, Katherine; Wright, Deborah; Collins, Joanna; Raisen, Claire; Dyer, Lauren; Leung, Kenric; Robertson, Lauren; Ambridge, Kirsty; Leongamornlert, Daniel; McGuire, Sarah; Gilderthorp, Ruth; Griffiths, Coline; Manthravadi, Deepa; Nichol, Sarah; Barker, Gary; Whitehead, Siobhan; Kay, Michael; Brown, Jacqueline; Murnane, Clare; Gray, Emma; Humphries, Matthew; Sycamore, Neil; Barker, Darren; Saunders, David; Wallis, Justene; Babbage, Anne; Hammond, Sian; Mashreghi-Mohammadi, Maryam; Barr, Lucy; Martin, Sancha; Wray, Paul; Ellington, Andrew; Matthews, Nicholas; Ellwood, Matthew; 
Woodmansey, Rebecca; Clark, Graham; Cooper, James D; Cooper, James; Tromans, Anthony; Grafham, Darren; Skuce, Carl; Pandian, Richard; Andrews, Robert; Harrison, Elliot; Kimberley, Andrew; Garnett, Jane; Fosker, Nigel; Hall, Rebekah; Garner, Patrick; Kelly, Daniel; Bird, Christine; Palmer, Sophie; Gehring, Ines; Berger, Andrea; Dooley, Christopher M; Ersan-Ürün, Zübeyde; Eser, Cigdem; Geiger, Horst; Geisler, Maria; Karotki, Lena; Kirn, Anette; Konantz, Judith; Konantz, Martina; Oberländer, Martina; Rudolph-Geiger, Silke; Teucke, Mathias; Lanz, Christa; Raddatz, Günter; Osoegawa, Kazutoyo; Zhu, Baoli; Rapp, Amanda; Widaa, Sara; Langford, Cordelia; Yang, Fengtang; Schuster, Stephan C; Carter, Nigel P; Harrow, Jennifer; Ning, Zemin; Herrero, Javier; Searle, Steve M J; Enright, Anton; Geisler, Robert; Plasterk, Ronald H A; Lee, Charles; Westerfield, Monte; de Jong, Pieter J; Zon, Leonard I; Postlethwait, John H; Nüsslein-Volhard, Christiane; Hubbard, Tim J P; Roest Crollius, Hugues; Rogers, Jane; Stemple, Derek L
2013-04-25
Zebrafish have become a popular organism for the study of vertebrate gene function. The virtually transparent embryos of this species, and the ability to accelerate genetic studies by gene knockdown or overexpression, have led to the widespread use of zebrafish in the detailed investigation of vertebrate gene function and increasingly, the study of human genetic disease. However, for effective modelling of human genetic disease it is important to understand the extent to which zebrafish genes and gene structures are related to orthologous human genes. To examine this, we generated a high-quality sequence assembly of the zebrafish genome, made up of an overlapping set of completely sequenced large-insert clones that were ordered and oriented using a high-resolution high-density meiotic map. Detailed automatic and manual annotation provides evidence of more than 26,000 protein-coding genes, the largest gene set of any vertebrate so far sequenced. Comparison to the human reference genome shows that approximately 70% of human genes have at least one obvious zebrafish orthologue. In addition, the high quality of this genome assembly provides a clearer understanding of key genomic features such as a unique repeat content, a scarcity of pseudogenes, an enrichment of zebrafish-specific genes on chromosome 4 and chromosomal regions that influence sex determination.
Erectable/deployable concepts for large space system technology
NASA Technical Reports Server (NTRS)
Agan, W. E.
1980-01-01
Erectable/deployable space structure concepts particularly relating to the development of a science and applications space platform are presented. Design and operating features for an automatic coupler clevis joint, a side latching detent joint, and a module-to-module auto lock coupler are given. An analysis of the packaging characteristics of stacked subassembly, single fold, hybrid, and double fold concepts is given for various platform structure configurations. Payload carrier systems and assembly techniques are also discussed.
1993-11-18
COMPOSITE CABLE. Salvador Camps, Carlos Osorio, Richard Vasquez and J. A. Olszewski, CABEL Industria Venezolana de Cables Electricos C.A., Valencia, Venezuela; I. M. Plitz, Bellcore, Morristown and Red Bank, NJ. Legible fragment: "...durability. As a result, the automatic control puller can consistently pull a cable, whether the cable is wet or not. 3.2 Crawler auto-adjusting mechanism"
Fuzzy logic controllers: A knowledge-based system perspective
NASA Technical Reports Server (NTRS)
Bonissone, Piero P.
1993-01-01
Over the last few years we have seen an increasing number of applications of Fuzzy Logic Controllers. These applications range from the development of auto-focus cameras to the control of subway trains, cranes, automobile subsystems (automatic transmissions), domestic appliances, and various consumer electronic products. In summary, we consider a Fuzzy Logic Controller to be a high-level language with its own local semantics, interpreter, and compiler, which enables us to quickly synthesize non-linear controllers for dynamic systems.
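A minimal example of what such a "high-level language" compiles down to (illustrative only; the paper describes the architecture, not this code): triangular memberships, three rules, and weighted-average defuzzification for a single error input:

```python
# Minimal single-input fuzzy controller: fuzzify the error with
# triangular membership functions, fire three rules, and defuzzify by
# the weighted average of the rules' output singletons.

def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Rule antecedents: IF error is Negative / Zero / Positive ...
    neg = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0, 0.0, 1.0)
    pos = tri(error, 0.0, 1.0, 2.0)
    # ... THEN command -1 / 0 / +1 (output singletons).
    num = neg * (-1.0) + zero * 0.0 + pos * 1.0
    den = neg + zero + pos
    return num / den if den else 0.0
```

The result is a smooth non-linear control surface obtained from a handful of linguistic rules, which is the synthesis shortcut the abstract refers to; real controllers simply add more inputs, rules, and richer defuzzification.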
Lingua, Andrea; Marenchino, Davide; Nex, Francesco
2009-01-01
In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not meet normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large-scale aerial images acquired using mini-UAV systems.
Nyberg, G
1977-01-01
1 In a double-blind crossover study, six volunteers performed sustained handgrip at 50% of maximal voluntary contraction before and 90 min following oral administration of 25 and 100 mg metoprolol tartrate, a beta1-selective adrenoceptor blocking agent. Blood pressure and heart rate were measured with the Auto-Manometer, an electronic semi-automatic device based on the principles of the London School of Hygiene and Tropical Medicine sphygmomanometer. It eliminates observer and digital bias completely, and also records heart rate at the same time as blood pressure is recorded. 2 Resting heart rate fell 15% after 25 mg, 21% after 100 mg and was unchanged after placebo. Systolic blood pressure fell 6% on both doses and was unchanged on placebo. Diastolic pressure did not change with any of the doses. 3 At 1 min of handgrip, heart rate was significantly lower after 25 and 100 mg than before drug or after placebo. There was no difference between the blood pressure levels attained before or after any of the dose levels. The rise of heart rate tended to be somewhat dampened after 100 mg only. The rise in blood pressure was unchanged after any of the doses compared with before. PMID:901695
NASA Astrophysics Data System (ADS)
Kim, Shin-Hyung; Ruy, Won-Sun; Jang, Beom Seon
2013-09-01
An automatic pipe routing system is proposed and implemented. Generally, pipe routing design, as a part of the shipbuilding process, requires a considerable number of man-hours due to the complexity arising from physical and operational constraints and its crucial influence on outfitting construction productivity. Therefore, the automation of pipe routing design operations and processes has always been one of the most important goals for improvements in shipbuilding design. The proposed system is applied to a pipe routing design in the engine room space of a commercial ship, and its effectiveness is verified as a reasonable means of supporting pipe routing design jobs. The automatic routing result of this system can serve as a good basis model in the initial stages of pipe routing design, allowing designers to reduce their design lead time significantly. As a result, overall design productivity can be improved with this automatic pipe routing system.
Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance
NASA Astrophysics Data System (ADS)
Speck, Richard P.; Herz, Norman E., Jr.
2000-06-01
Automatic test and calibration has become a valuable feature in many consumer products--ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in HMDs and thereby making binocular displays a practical reality. A suitcase-sized, field-portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage, and component replacement. Planning for this would yield large savings through relaxed component specifications and reduced logistics costs, yet the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual `beyond visual range' operations. Some versions of the ATE described are in production, and examples of high-resolution optical test data are discussed.
Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation
NASA Astrophysics Data System (ADS)
Lu, Kongkuo; Hall, Christopher S.
2014-03-01
Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, yet the task is quite challenging in US images due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and show a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary-searching ability of the live wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on the results of segmenting 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy (> 80%) when auto-enhancement is applied for live-wire segmentation.
Uses of Computer Simulation Models in Ag-Research and Everyday Life
USDA-ARS?s Scientific Manuscript database
When the news media talks about models, it could be referring to role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...
Words-in-Freedom and the Oral Tradition.
ERIC Educational Resources Information Center
Webster, Michael
1989-01-01
Explores how oral and print characteristics mesh or clash in "words-in-freedom," a form of visual poetry invented by Filippo Tommaso Marinetti. Analyzes Marinetti's poster-poem "Apres la Marne, Joffre visita le front en auto," highlighting the different natures of the two media and the coding difficulties occasioned by…
Freiman, Zohar E.; Rosianskey, Yogev; Dasmohapatra, Rajeswari; Kamara, Itzhak; Flaishman, Moshe A.
2015-01-01
The traditional definition of climacteric and non-climacteric fruits has been put into question. A significant example of this paradox is the climacteric fig fruit. Surprisingly, ripening-related ethylene production increases following pre- or postharvest 1-methylcyclopropene (1-MCP) application in an unexpected auto-inhibitory manner. In this study, ethylene production and the expression of potential ripening-regulator, ethylene-synthesis, and signal-transduction genes are characterized in figs ripening on the tree and following preharvest 1-MCP application. Fig ripening-related gene expression was similar to that in tomato and apple during ripening on the tree, but only in the fig inflorescence–drupelet section. Because the pattern in the receptacle is different for most of the genes, the fig drupelets developed inside the syconium are proposed to function as parthenocarpic true fruit, regulating ripening processes for the whole accessory fruit. Transcription of a potential ripening regulator, FcMADS8, increased during ripening on the tree and was inhibited following 1-MCP treatment. Expression patterns of the ethylene-synthesis genes FcACS2, FcACS4, and FcACO3 could be related to the auto-inhibition reaction of ethylene production in 1-MCP-treated fruit. Along with FcMADS8 suppression, gene expression analysis revealed upregulation of FcEBF1, and downregulation of FcEIL3 and several FcERFs by 1-MCP treatment. This corresponded with the high storability of the treated fruit. One FcERF was overexpressed in the 1-MCP-treated fruit, and did not share the increasing pattern of most FcERFs in the tree-ripened fig. This demonstrates the potential of this downstream ethylene-signal-transduction component as an ethylene-synthesis regulator, responsible for the non-climacteric auto-inhibition of ethylene production in fig. PMID:25956879
Kawalilak, C E; Johnston, J D; Cooper, D M L; Olszynski, W P; Kontulainen, S A
2016-02-01
Precision errors of cortical bone micro-architecture from high-resolution peripheral quantitative computed tomography (HR-pQCT) ranged from 1 to 16% and did not differ between automatic and manually modified endocortical contour methods in postmenopausal women or young adults. In postmenopausal women, manually modified contours led to generally higher cortical bone properties when compared to the automated method. The first objective of the study was to define in vivo precision errors (coefficient of variation root mean square, CV%RMS) and least significant change (LSC) for cortical bone micro-architecture using two endocortical contouring methods, automatic (AUTO) and manually modified (MOD), in two groups (postmenopausal women and young adults) from HR-pQCT scans. The second was to compare precision errors and bone outcomes obtained with both methods within and between groups. Using HR-pQCT, we scanned the distal radius and tibia of 34 postmenopausal women (mean age ± SD 74 ± 7 years) and 30 young adults (27 ± 9 years) twice. Cortical micro-architecture was determined using the AUTO and MOD contour methods. CV%RMS and LSC were calculated. Repeated-measures and multivariate ANOVA were used to compare mean CV% and bone outcomes between the methods within and between the groups. Significance was accepted at P < 0.05. CV%RMS ranged from 0.9 to 16.3%. Within-group precision did not differ between evaluation methods. Compared to young adults, postmenopausal women had better precision for radial cortical porosity (precision difference 9.3%) and pore volume (7.5%) with MOD. Young adults had better precision for cortical thickness (0.8%, MOD) and tibial cortical density (0.2%, AUTO). In postmenopausal women, MOD resulted in 0.2-54% higher values for most cortical outcomes, as well as 6-8% lower radial and tibial cortical BMD and 2% lower tibial cortical thickness.
Results suggest that AUTO and MOD endocortical contour methods provide comparable repeatability. In postmenopausal women, manual modification of endocortical contours led to generally higher cortical bone properties when compared to the automated method, while no between-method differences were observed in young adults.
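The precision statistics used above can be sketched generically. This is an illustration of CV%RMS over repeat-scan pairs and the conventional 95%-confidence LSC multiplier (2.77 ≈ 1.96 × √2), not the authors' analysis code:

```python
import math

def cv_percent(x1, x2):
    """Coefficient of variation (%) for one participant's two repeat scans
    (sample SD with n - 1 = 1 degree of freedom)."""
    mean = (x1 + x2) / 2.0
    sd = math.sqrt((x1 - mean) ** 2 + (x2 - mean) ** 2)
    return 100.0 * sd / mean

def cv_rms(pairs):
    """Root-mean-square CV% across all participants' repeat-scan pairs."""
    cvs = [cv_percent(a, b) for a, b in pairs]
    return math.sqrt(sum(c * c for c in cvs) / len(cvs))

def least_significant_change(precision_error):
    """95%-confidence least significant change from a precision error."""
    return 2.77 * precision_error
```

A measured change smaller than the LSC cannot be distinguished from repeat-measurement noise at the 95% level.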
Pu, Meng; Rowe-Magnus, Dean Allistair
2018-01-01
Vibrio vulnificus is autochthonous to estuaries and warm coastal waters. Infection occurs via open wounds or ingestion, where its asymptomatic colonization of seafood, most infamously oysters, provides a gateway into the human food chain. Colonization begins with initial surface contact, which is often mediated by bacterial surface appendages called pili. Type IV Tad pili are widely distributed in the Vibrionaceae, but evidence for a physiological role for these structures is scant. The V. vulnificus genome codes for three distinct tad loci. Recently, a positive correlation was demonstrated between the expression of tad-3 and the phenotypes of a V. vulnificus descendent (NT) that exhibited increased biofilm formation, auto-aggregation, and oyster colonization relative to its parent. However, the mechanism by which tad pilus expression promoted these phenotypes was not determined. Here, we show that deletion of the tad pilin gene (flp) altered the near-surface motility profile of NT cells from high-curvature, orbital retracing patterns characteristic of cells actively probing the surface to low-curvature traces indicative of wandering and diminished bacteria-surface interactions. The NT flp pilin mutant also exhibited decreased initial surface attachment, attenuated auto-aggregation, and formed fragile biofilms that disintegrated under hydrodynamic flow. Thus, the tad-3 locus, designated iam, promoted initial surface attachment, auto-aggregation and resistance to mechanical clearance of V. vulnificus biofilms. The prevalence of tad loci in the Vibrionaceae suggests that they may play equally important roles in other family members.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuppens, H.; Marynen, P.; Cassiman, J.J.
1993-12-01
The authors have previously shown that about 85% of the mutations in 194 Belgian cystic fibrosis alleles could be detected by a reverse dot-blot assay. In the present study, 50 Belgian chromosomes were analyzed for mutations in the cystic fibrosis transmembrane conductance regulator gene by means of direct solid-phase automatic sequencing of PCR products of individual exons. Twenty-six disease mutations and 14 polymorphisms were found. Twelve of these mutations and 3 polymorphisms were not described before. With the exception of one mutant allele carrying two mutations, these mutations were the only mutations found in the complete coding region and their exon/intron boundaries. The total sensitivity of mutant CF alleles that could be identified was 98.5%. Given the heterogeneity of these mutations, most of them very rare, CFTR mutation screening still remains rather complex in the population, and population screening, whether desirable or not, does not appear to be technically feasible with the methods currently available. 24 refs., 1 fig., 2 tabs.
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code, and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system into Ada code and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework-based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software, which was used to test the resultant code, is also discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.
Translating an AI application from Lisp to Ada: A case study
NASA Technical Reports Server (NTRS)
Davis, Gloria J.
1991-01-01
A set of benchmarks was developed to test the performance of a newly designed computer executing both Lisp and Ada. Among these was AutoClassII -- a large Artificial Intelligence (AI) application written in Common Lisp. The extraction of a representative subset of this complex application was aided by a Lisp Code Analyzer (LCA). The LCA enabled rapid analysis of the code, putting it in a concise and functionally readable form. An equivalent benchmark was created in Ada through manual translation of the Lisp version. A comparison of the execution results of both programs across a variety of compiler-machine combinations indicates that line-by-line translation, coupled with analysis of the initial code, can produce relatively efficient and reusable target code.
SVGMap: configurable image browser for experimental data.
Rafael-Palou, Xavier; Schroeder, Michael P; Lopez-Bigas, Nuria
2012-01-01
Spatial data visualization is very useful for representing biological data and quickly interpreting results. For instance, to show the expression pattern of a gene in different tissues of a fly, an intuitive approach is to draw the fly with the corresponding tissues and color the expression of the gene in each of them. However, the creation of these visual representations can be a burdensome task. Here we present SVGMap, a Java application that automates the generation of high-quality graphics for singular data items (e.g. genes) and biological conditions. SVGMap contains a browser that allows the user to navigate the different images created and can be used as a web-based results publishing tool. SVGMap is freely available as a precompiled Java package as well as source code at http://bg.upf.edu/svgmap. It requires Java 6 and any recent web browser with JavaScript enabled. The software can be run on Linux, Mac OS X and Windows systems. Contact: nuria.lopez@upf.edu
Optimal Recovery Trajectories for Automatic Ground Collision Avoidance Systems (Auto GCAS)
2015-03-01
…the Multi-Trajectory path uses a sphere buffer (with a 350 ft radius) around each time point in the propagated path. Hence, the yellow Xs indicate the… the HUD as well as a matrix/line of Xs on the radar electro-optical (REO) display. Enhanced ground clobber (EGC) mechanization was integrated on the F… reachable in the timespan t ∈ [t0, tf], and d_threshold is a scalar user-defined terrain buffer. For the work developed herein, d_threshold was set to 350
A System for Heart Sounds Classification
Redlarski, Grzegorz; Gradolewski, Dawid; Palkowski, Aleksander
2014-01-01
The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. As for cardiac diseases – one of the major causes of death around the globe – the concept of an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to advances in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems of heart diseases, capable of distinguishing most known pathological states, have not yet been developed. The main issues are the non-stationary character of phonocardiography signals and the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built upon combining a Support Vector Machine and the Modified Cuckoo Search algorithm, an improvement in the performance of the diagnostic system, in terms of accuracy, complexity, and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four major classification methods, proving its reliability. PMID:25393113
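The LPC feature-extraction step mentioned above can be sketched with the standard autocorrelation method and the Levinson-Durbin recursion. This is a generic illustration of LPC, not the authors' implementation:

```python
def lpc(x, order):
    """Linear Predictive Coding coefficients of signal x via the
    autocorrelation method and Levinson-Durbin recursion.
    Returns [1, a1, ..., a_order] for the prediction-error filter."""
    n = len(x)
    # Autocorrelation at lags 0..order
    r = [sum(x[t] * x[t + k] for t in range(n - k)) for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]                              # prediction error energy
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        a_prev = a[:]
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a
```

For a heart-sound classifier, such coefficients (computed per frame) would be fed to the SVM as the feature vector.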
Analysis of Thick Sandwich Shells with Embedded Ceramic Tiles
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Smith, C.; Lumban-Tobing, F.
1996-01-01
The Composite Armored Vehicle (CAV) is an advanced technology demonstrator of an all-composite ground combat vehicle. The CAV upper hull is made of a tough light-weight S2-glass/epoxy laminate with embedded ceramic tiles that serve as armor. The tiles are bonded to a rubber mat with a carefully selected, highly viscoelastic adhesive. The integration of armor and structure offers an efficient combination of ballistic protection and structural performance. The analysis of this anisotropic construction, with its inherent discontinuous and periodic nature, however, poses several challenges. The present paper describes a shell-based 'element-layering' technique that properly accounts for these effects and for the concentrated transverse shear flexibility in the rubber mat. One of the most important advantages of the element-layering technique over advanced higher-order elements is that it is based on conventional elements. This advantage allows the models to be portable to other structural analysis codes, a prerequisite in a program that involves the computational facilities of several manufacturers and government laboratories. The element-layering technique was implemented into an auto-layering program that automatically transforms a conventional shell model into a multi-layered model. The effects of tile layer homogenization, tile placement patterns, and tile gap size on the analysis results are described.
Yan, Zhen-yu; Liang, Yan; Yan, Mei; Fan, Lian-kai; Xiao, Bai; Hua, Bao-lai; Liu, Jing-zhong; Zhao, Yong-qiang
2008-10-21
To investigate the frequency of intron 1 inversion (inv1) of the FVIII gene in Chinese hemophilia A (HA) patients and to investigate the mechanism of pathogenesis. Peripheral blood samples were collected from 158 unrelated HA patients, aged 20 (1 - 73) years, including one female HA patient, aged 5, and several family members of a patient positive for inv1. The one-stage method was used to assay FVIII activity (FVIII:C). Long-distance PCR and multiplex PCR in duplex reactions were used to screen for the intron 22 inversion (inv22) and inv1 of the FVIII coding gene (F8). The F8 coding sequence was amplified with PCR and sequenced with an automatic sequencer. Two unrelated patients (pedigrees) were detected as inv1 positive, a positive rate of 1.26%. A rare female HA patient with inv1 was also discovered in a positive family (3 HA cases were found in this family and regarded as one case in calculating the total detection rate). The full length of FVIII was sequenced, and no other mutation was detected. The frequency of FVIII inv1 is low in Chinese HA patients compared with other populations. Female HA patients are heterozygous for FVIII inv1, and their phenotype may result from nonrandom inactivation of the X chromosome.
Trick Simulation Environment 07
NASA Technical Reports Server (NTRS)
Lin, Alexander S.; Penn, John M.
2012-01-01
The Trick Simulation Environment is a generic simulation toolkit used for constructing and running simulations. This release includes a Monte Carlo analysis simulation framework and a data analysis package. It produces all auto documentation in XML. Also, the software is capable of inserting a malfunction at any point during the simulation. Trick 07 adds variable server output options and error messaging and is capable of using and manipulating wide characters for international support. Wide character strings are available as a fundamental type for variables processed by Trick. A Trick Monte Carlo simulation uses a statistically generated, or predetermined, set of inputs to iteratively drive the simulation. Also, there is a framework in place for optimization and solution finding where developers may iteratively modify the inputs per run based on some analysis of the outputs. The data analysis package is capable of reading data from external simulation packages such as MATLAB and Octave, as well as the common comma-separated values (CSV) format used by Excel, without the use of external converters. The file formats for MATLAB and Octave were obtained from their documentation sets, and Trick maintains generic file readers for each format. XML tags store the fields in the Trick header comments. For header files, XML tags for structures and enumerations, and the members within are stored in the auto documentation. For source code files, XML tags for each function and the calling arguments are stored in the auto documentation. When a simulation is built, a top level XML file, which includes all of the header and source code XML auto documentation files, is created in the simulation directory. Trick 07 provides an XML to TeX converter. The converter reads in header and source code XML documentation files and converts the data to TeX labels and tables suitable for inclusion in TeX documents. 
A malfunction insertion capability allows users to override the value of any simulation variable, or call a malfunction job, at any time during the simulation. Users may specify conditions, use the return value of a malfunction trigger job, or manually activate a malfunction. The malfunction action may consist of executing a block of input file statements in an action block, setting simulation variable values, calling a malfunction job, or turning simulation jobs on or off.
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over an additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes: an n-bit Gray code is appended with its n-bit inverse Gray code to construct 2n-length binary user codes. Like Walsh codes, these binary user codes are available in sizes that are powers of two; additionally, code sets of length 6 and its even multiples are available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as they are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work, the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over an AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
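The append construction can be sketched as follows. The paper's exact inverse-Gray-code definition is not reproduced here; the standard binary-reflected Gray encoding and its decode (cumulative XOR) stand in for the Gray / inverse-Gray pair, so this is only an assumed illustration of the scheme:

```python
def gray(i, n):
    """n-bit binary-reflected Gray codeword of integer i, as a bit list."""
    g = i ^ (i >> 1)
    return [(g >> (n - 1 - k)) & 1 for k in range(n)]

def inverse_gray(bits):
    """Inverse-Gray word: cumulative XOR of the Gray bits (Gray decode)."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def user_code(i, n):
    """2n-chip user code: n-bit Gray word appended with its inverse-Gray word."""
    g = gray(i, n)
    return g + inverse_gray(g)

def correlation(a, b):
    """Zero-lag correlation after mapping bits {0,1} -> {+1,-1}."""
    return sum((1 - 2 * x) * (1 - 2 * y) for x, y in zip(a, b))
```

With n = 3 this yields a set of eight distinct 6-chip codes, matching the length-6 code sets mentioned in the abstract.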
Modelling Metamorphism by Abstract Interpretation
NASA Astrophysics Data System (ADS)
Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.
Metamorphic malware applies semantics-preserving transformations to its own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite state automata abstraction of the phase semantics.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
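The Pareto-optimality criterion underlying this approach can be illustrated with a brute-force non-dominated filter over (MRE, MCE) objective pairs. The paper itself uses a non-dominated sorting genetic algorithm; this sketch shows only the selection idea, not that algorithm:

```python
def dominates(q, p):
    """q dominates p when q is no worse in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Indices of the non-dominated (Pareto-optimal) points, e.g.
    (reconstruction error, classification error) pairs of candidate
    auto-encoder weight settings."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

A solution that minimizes only reconstruction error (single-objective training) may sit far from the front in classification error, which is the motivation for optimizing both objectives jointly.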
Autonomous Navigation Performance During The Hartley 2 Comet Flyby
NASA Technical Reports Server (NTRS)
Abrahamson, Matthew J; Kennedy, Brian A.; Bhaskaran, Shyam
2012-01-01
On November 4, 2010, the EPOXI spacecraft performed a 700-km flyby of the comet Hartley 2 as a follow-on to the successful 2005 Deep Impact prime mission. EPOXI, an extended mission for the Deep Impact Flyby spacecraft, returned a wealth of visual and infrared data from Hartley 2, marking the fifth time that high-resolution images of a cometary nucleus have been captured by a spacecraft. The highest resolution science return, captured at closest approach to the comet nucleus, was enabled by use of an onboard autonomous navigation system called AutoNav. AutoNav estimates the comet-relative spacecraft trajectory using optical measurements from the Medium Resolution Imager (MRI) and provides this relative position information to the Attitude Determination and Control System (ADCS) for maintaining instrument pointing on the comet. For the EPOXI mission, AutoNav was tasked to enable continuous tracking of a smaller, more active Hartley 2, as compared to Tempel 1, through the full encounter while traveling at a higher velocity. To meet the mission goal of capturing the comet in all MRI science images, position knowledge accuracies of +/- 3.5 km (3-σ) cross-track and +/- 0.3 seconds (3-σ) time of flight were required. A flight-code-in-the-loop Monte Carlo simulation assessed AutoNav's statistical performance under the Hartley 2 flyby dynamics and determined the optimal configuration. The AutoNav performance at Hartley 2 was successful, capturing the comet in all of the MRI images. The maximum residual between observed and predicted comet locations was 20 MRI pixels, primarily influenced by the offset of the center of brightness from the center of mass in the observations and by attitude knowledge errors. This paper discusses the Monte Carlo-based analysis that led to the final AutoNav configuration and a comparison of the predicted performance with the flyby performance.
Open-RAC: Open-Design, Recirculating and Auto-Cleaning Zebrafish Maintenance System.
Nema, Shubham; Bhargava, Yogesh
2017-08-01
Zebrafish is a vertebrate animal model. Maintaining zebrafish in large numbers under laboratory conditions is a daunting task. Commercially available recirculating zebrafish maintenance systems efficiently handle the tasks of automatic sediment cleaning of zebrafish tanks with minimal waste of water. Due to their compact nature, they also ensure maximal use of available lab space. However, the high cost of commercial systems presents a limitation to researchers with limited funds. A cost-effective zebrafish maintenance system with the major features offered by commercially available systems is highly desirable. Here, we describe a compact and recirculating zebrafish maintenance system. Our system is composed of cost-effective components, which are available in local markets and/or can be procured via online vendors. Depending on the expertise of end users, the system can be assembled in 2 days. The system is completely customizable, as it offers geometry-independent zebrafish tanks that are capable of auto-cleaning sediments. Due to these features, we call our setup Open-RAC (Open-design, Recirculating and Auto-Cleaning zebrafish maintenance system). Open-RAC is a cost-effective and viable alternative to currently available zebrafish maintenance systems. Thus, we believe that the use of Open-RAC could promote zebrafish research by removing the cost barrier for researchers.
NASA Astrophysics Data System (ADS)
Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao
2017-10-01
UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, and their variants and hybrid methods. Because of this C++ metaprogramming technique, a single code base can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability, for the electrostatic and electromagnetic situations respectively, are presented to show the validity and performance of the UPSF code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
EMAM, M; Eldib, A; Lin, M
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool that allows the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and was shown to be as accurate as that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program save time and give better control of the planning process.
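The abstract does not state how the automatic body contouring works; a common approach for separating patient tissue from surrounding air on CT is a Hounsfield-unit threshold. The sketch below illustrates that idea only; the threshold value and the toy slice are assumptions, not details of the system described above.

```python
# Hedged sketch of threshold-based auto-contouring of the patient body on a
# CT slice: voxels above an air/tissue HU threshold are labeled "body".
# Real systems add connected-component filtering and hole filling.

AIR_TISSUE_THRESHOLD_HU = -300  # air is near -1000 HU, soft tissue near 0 HU

def body_mask(ct_slice, threshold=AIR_TISSUE_THRESHOLD_HU):
    """Return a binary mask: 1 where the voxel is tissue, 0 where it is air."""
    return [[1 if hu > threshold else 0 for hu in row] for row in ct_slice]

toy_slice = [
    [-1000, -1000, -1000, -1000],   # air
    [-1000,    40,    60, -1000],   # soft tissue surrounded by air
    [-1000,  -990,    50, -1000],   # an air pocket next to tissue
]
mask = body_mask(toy_slice)
```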
Zou, Zhi; Yang, Lifu; Wang, Danhua; Huang, Qixing; Mo, Yeyong; Xie, Guishui
2016-01-01
WRKY proteins comprise one of the largest transcription factor families in plants and are key regulators of many plant processes. This study presents the characterization of 58 WRKY genes from the castor bean (Ricinus communis L., Euphorbiaceae) genome. Compared with the automatic genome annotation, one additional WRKY-encoding locus was identified and 20 of the 57 predicted gene models were manually corrected. All RcWRKY genes were shown to contain at least one intron in their coding sequences. According to the structural features of their WRKY domains, the identified RcWRKY genes were assigned to three previously defined groups (I-III). Although castor bean, like physic nut (Jatropha curcas L., Euphorbiaceae), underwent no recent whole-genome duplication event, comparative genomics analysis indicated that one gene loss, one intron loss and one recent proximal duplication occurred in the RcWRKY gene family. The expression of all 58 RcWRKY genes was supported by ESTs and/or RNA sequencing reads derived from roots, leaves, flowers, seeds and endosperms. Further global expression profiling with RNA sequencing data revealed diverse expression patterns among the various tissues. The results of this study not only provide valuable information for future functional analysis and utilization of the castor bean WRKY genes, but also provide a useful reference for investigating gene family expansion and evolution in Euphorbiaceae plants.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... the proposed settlement may be obtained from Peter Felitti, Assoc. Regional Counsel, EPA, Office of... addressed to Peter Felitti, Assoc. Regional Counsel, EPA, Office of Regional Counsel, Region 5, 77 W. Jackson Blvd., mail code: C-14J, Chicago, Illinois 60604. FOR FURTHER INFORMATION CONTACT: Peter Felitti...
ERIC Educational Resources Information Center
Manitoba Dept. of Education and Training, Winnipeg. Curriculum Services Branch.
This directory lists the unit-credit titles of the technology education courses offered in Manitoba, along with their corresponding department codes and course numbers. Sections A through C list the unit-credit titles of the following vocational-industrial clusters: heavy industrial (agriculture, auto body repair, building construction, building…
Su, Mei; Huai, De; Cao, Juan; Ning, Ding; Xue, Rong; Xu, Meijie; Huang, Mao; Zhang, Xilong
2018-03-01
Although bilevel positive airway pressure (Bilevel PAP) therapy is usually used for overlap syndrome (OS), there is still a portion of OS patients in whom Bilevel PAP therapy cannot simultaneously eliminate residual apnea events and hypercapnia. The current study explored whether auto-trilevel positive airway pressure (auto-trilevel PAP) therapy with auto-adjusting end-expiratory positive airway pressure (EEPAP) can serve as a better alternative for these patients. From January 2014 to June 2016, 32 hypercapnic OS patients with stable chronic obstructive pulmonary disease (COPD) and moderate-to-severe obstructive sleep apnea syndrome (OSAS) were recruited. Three modes of positive airway pressure (PAP) from the ventilator (Prisma25ST, Weinmann Inc., Germany) were applied for 8 h per night. Each mode was applied for one night, with an interval of two nights with no PAP treatment as a washout period between modes. In Bilevel-1 mode (Bilevel-1), the expiratory positive airway pressure (EPAP) delivered by Bilevel PAP was always set as the lowest PAP that abolished snoring. For each patient, the inspiratory positive airway pressure (IPAP) was constantly set to the minimal pressure that kept end-tidal CO2 (ETCO2) ≤45 mmHg, the same for all three modes. However, the EPAP issued by Bilevel PAP in Bilevel-2 mode (Bilevel-2) was kept 3 cmH2O higher than that in Bilevel-1. In auto-trilevel mode (auto-trilevel) with auto-trilevel PAP, the initial part of EPAP was fixed at the same PAP as in Bilevel-1, while the EEPAP was automatically regulated to rise within a range of ≤4 cmH2O based on changes in the nasal airflow wave. Comparisons were made for parameters before, during and after treatment, as well as among the different PAP therapy modes.
The parameters compared included the nocturnal apnea-hypopnea index (AHI), minimal SpO2 (minSpO2), arousal index, sleep structure and efficiency, morning PaCO2, and daytime Epworth Sleepiness Scale (ESS). Compared with the parameters before PAP therapy, during each mode of PAP treatment a significant reduction was detected in nocturnal AHI, arousal index, morning PaCO2, and daytime ESS, while a significant elevation was revealed in nocturnal minSpO2 and sleep efficiency (all P < 0.01). Comparison among the three PAP modes indicated that under the same IPAP, the auto-trilevel PAP mode resulted in the lowest arousal index and daytime ESS and the highest sleep efficiency. Compared with Bilevel-1, (a) AHI was lower and minSpO2 higher in both Bilevel-2 and auto-trilevel (all P < 0.05), and (b) morning PaCO2 showed no statistical difference from auto-trilevel but was higher in Bilevel-2 (P < 0.05). Compared with Bilevel-2, in auto-trilevel both AHI and minSpO2 showed no obvious changes (all P > 0.05), but morning PaCO2 was lower (P < 0.05). Auto-trilevel PAP therapy was superior to conventional Bilevel PAP therapy for hypercapnic OS patients with moderate-to-severe OSAS, since auto-trilevel PAP was more efficacious in the simultaneous elimination of residual obstructive apnea events and CO2 retention, as well as in obtaining better sleep quality and milder daytime drowsiness.
Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Carter, Jonathan; Oliker, Leonid
2008-02-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimization, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that has historically made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present a detailed analysis of each optimization, revealing surprising hardware bottlenecks and software challenges for future multicore systems and applications.
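The core search-based auto-tuning loop described above — generate candidate implementations of a kernel, benchmark each on the target machine, keep the fastest — can be sketched as follows. The kernel and variant names are toy stand-ins, not the LBMHD code generator.

```python
# Minimal sketch of empirical auto-tuning: time each candidate variant of a
# kernel on the machine at hand and select the best performer. Real
# auto-tuners search over code transformations (blocking, unrolling, SIMD).

import timeit

def sum_naive(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

def autotune(variants, data, repeats=5):
    """Return (name, fn) of the fastest variant measured on this machine."""
    timings = {}
    for name, fn in variants.items():
        timings[name] = min(timeit.repeat(lambda: fn(data),
                                          repeat=repeats, number=100))
    best = min(timings, key=timings.get)
    return best, variants[best]

data = list(range(1000))
best_name, best_fn = autotune({"naive": sum_naive, "builtin": sum_builtin}, data)
```

Taking the minimum over repeats is the usual way to suppress timing noise; whichever variant wins, all variants must compute the same result.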
Threshold-driven optimization for reference-based auto-planning
NASA Astrophysics Data System (ADS)
Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo
2018-02-01
We study a threshold-driven optimization methodology for automatically generating a treatment plan motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and an associated penalty weight. Conventional manual and auto-planning with such a function involves iteratively updating the penalty weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward a reference DVH. However, driving a dose distribution by threshold values instead of penalty weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof of concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy to achieve. TORA was able to closely recreate the reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning become more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. Threshold-focused objective tuning should be explored over conventional methods of updating penalty weights for DVH-guided treatment planning.
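The voxel-based quadratic penalty with under- and over-dose thresholds described above can be sketched as follows. The threshold and weight values are illustrative assumptions; threshold-driven planning adjusts t_under and t_over rather than the weights.

```python
# Sketch of a one-sided quadratic penalty per voxel: dose below the
# under-dose threshold or above the over-dose threshold is penalized
# quadratically, with separate weights for each side.

def quadratic_penalty(dose, t_under, t_over, w_under=1.0, w_over=1.0):
    """Sum of one-sided quadratic penalties over all voxels."""
    total = 0.0
    for d in dose:
        if d < t_under:
            total += w_under * (t_under - d) ** 2
        if d > t_over:
            total += w_over * (d - t_over) ** 2
    return total

# Doses inside [t_under, t_over] incur no penalty at all.
dose = [58.0, 60.0, 63.0]
penalty = quadratic_penalty(dose, t_under=59.0, t_over=62.0)
```

Raising t_under (or lowering t_over) reshapes the objective directly in dose units, which is what makes threshold-driven tuning more intuitive than weight tuning.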
Gallio, Elena; Giglioli, Francesca Romana; Girardi, Andrea; Guarneri, Alessia; Ricardi, Umberto; Ropolo, Roberto; Ragona, Riccardo; Fiandra, Christian
2018-02-01
Automated treatment planning is a new frontier in radiotherapy. The Auto-Planning module of the Pinnacle 3 treatment planning system (TPS) was evaluated for liver stereotactic body radiation therapy treatments. Ten cases were included in the study. Six plans were generated for each case by four medical physics experts. The first two planned with Pinnacle TPS, both with manual module (MP) and Auto-Planning one (AP). The other two physicists generated two plans with Monaco TPS (VM). Treatment plan comparisons were then carried on the various dosimetric parameters of target and organs at risk, monitor units, number of segments, plan complexity metrics and human resource planning time. The user dependency of Auto-Planning was also tested and the plans were evaluated by a trained physician. Statistically significant differences (Anova test) were observed for spinal cord doses, plan average beam irregularity, number of segments, monitor units and human planning time. The Fisher-Hayter test applied to these parameters showed significant statistical differences between AP e MP for spinal cord doses and human planning time; between MP and VM for monitor units, number of segments and plan irregularity; for all those between AP and VM. The two plans created by different planners with AP were similar to each other. The plans created with Auto-Planning were comparable to the manually generated plans. The time saved in planning enables the planner to commit more resources to more complex cases. The independence of the planner enables to standardize plan quality. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
VizieR Online Data Catalog: Analytical model for irradiated atmospheres (Parmentier+, 2015)
NASA Astrophysics Data System (ADS)
Parmentier, V.; Guillot, T.; Fortney, J.; Marley, M.
2014-11-01
The model has six parameters to describe the opacities:
- {kappa}(N) is the Rosseland mean opacity at each level of the atmosphere; it does not have to be constant with depth.
- Gp is the ratio of the thermal Planck mean opacity to the thermal Rosseland mean opacity.
- Beta is the width ratio of the two thermal bands in frequency space.
- Gv1 is the ratio of the visible opacity in the first visible band to the thermal Rosseland mean opacity.
- Gv2 is the ratio of the visible opacity in the second visible band to the thermal Rosseland mean opacity.
- Gv3 is the ratio of the visible opacity in the third visible band to the thermal Rosseland mean opacity.
Each visible band has a fixed width of 1/3. Additional parameters describe the physical setting:
- Teq0 is the equilibrium temperature of the planet for zero albedo and full redistribution of energy.
- mu is the cosine of the angle between the vertical direction and the stellar direction. For average profiles set mu=1/sqrt(3).
- f is a parameter equal to 0.5 to compute a dayside-average profile and 0.25 for a planet-average profile.
- Tint is the internal temperature, given by the internal luminosity.
- grav is the gravity of the planet.
- Ab is the Bond albedo of the planet.
- P(i) are the pressure levels where the temperature is computed.
- N is the number of atmospheric levels.
Several options are available in order to use the coefficients derived in Parmentier et al. (2014A&A...562A.133P, Cat. J/A+A/562/A133). ROSS can take the values:
- "USER" for a Rosseland mean opacity set by the user, {kappa}(nlevels), through the atmosphere.
- "AUTO" in order to use {kappa}(P,T), the functional form of the Rosseland mean opacities provided by Valencia et al. (2013ApJ...775...10V) and based on the opacities calculated by Freedman et al. (2008ApJS..174..504F). The value of {kappa} is then recalculated and the initial value set by the user is NOT taken into account.
COEFF can take the values:
- "USER" for coefficients set by the user.
- "AUTO" to use the fit of the coefficients provided in Parmentier et al. (2014A&A...562A.133P, Cat. J/A+A/562/A133). In that case all the coefficients set by the user are NOT taken into account (apart from the Rosseland mean opacities).
COMP can take the values (valid only if COEFF="AUTO"):
- "SOLAR" to use the fit of the coefficients for a solar-composition atmosphere.
- "NOTIO" to use the fit of the coefficients without TiO.
STAR can take the value (valid only if COEFF="AUTO"):
- "SUN" to use the fit of the coefficients for a Sun-like stellar irradiation.
ALBEDO can take the values:
- "USER" for a user-defined albedo.
- "AUTO" to use the fit of the albedos for solar-composition, clear-sky atmospheres.
CONV can be either:
- "NO" for a purely radiative solution.
- "YES" for a radiative/convective solution (without taking into account detached convective layers).
The code and all the outputs use SI units. Installation and use: to install the code, use the command "make". To test, use "make test"; the test should be done with the downloaded version of the code, without any changes. To execute the code once it has been compiled, type ./NonGrey in the same directory. This will output a file PTprofile.csv with the temperature structure in CSV format and a file PTprofile.dat in dat format. The input parameters must be changed inside the file paper2.f90; it is necessary to compile the code again each time. The subroutine tprofile2e.f90 can be directly implemented into one's own code. (5 data files).
Incorporating Manual and Autonomous Code Generation
NASA Technical Reports Server (NTRS)
McComas, David
1998-01-01
Code can be generated manually or using code-generation software tools, but how do you integrate the two? This article looks at a design methodology that combines object-oriented design with automatic code generation for attitude control flight software. Recent improvements in space flight computers are allowing software engineers to spend more time engineering the application software. The application developed was the attitude control flight software for an astronomical satellite called the Microwave Anisotropy Probe (MAP). The MAP flight system is being designed, developed, and integrated at NASA's Goddard Space Flight Center. The MAP controls engineers are using Integrated Systems Inc.'s MATRIXx for their controls analysis. In addition to providing a graphical analysis environment, MATRIXx includes an automatic code generation facility called AutoCode. This article examines the forces that shaped the final design and describes three highlights of the design process: (1) defining the interface between manual and auto-generated code; (2) applying object-oriented design to the manual flight code; (3) implementing the object-oriented design in C.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km of standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
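The non-periodic auto-correlation property the scheme exploits can be illustrated directly: for a Golay complementary pair, the aperiodic autocorrelations of the two sequences sum to zero at every non-zero lag and to 2N at lag zero, which is what makes the synchronization peak sharp. The sequences below are textbook examples, not the paper's actual training sequence.

```python
# Sketch of the Golay complementary-pair property. Longer pairs are built
# recursively from the length-2 seed pair: (a, b) -> (a+b, a+(-b)).

def acorr(seq, lag):
    """Aperiodic autocorrelation of seq at a non-negative lag."""
    return sum(seq[i] * seq[i + lag] for i in range(len(seq) - lag))

a, b = [1, 1], [1, -1]                      # length-2 Golay complementary pair
a2 = a + b                                  # length-4 pair by recursion
b2 = a + [-x for x in b]
peak = acorr(a2, 0) + acorr(b2, 0)          # 2N at zero lag
sidelobes = [acorr(a2, k) + acorr(b2, k) for k in (1, 2, 3)]  # all zero
```

A receiver correlating against both sequences and summing the two outputs therefore sees a single sharp peak at the frame start, with no sidelobes from the training sequence itself.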
Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models
NASA Astrophysics Data System (ADS)
Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.
2012-04-01
The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise it separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. Moreover, the discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to generate the model automatically using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1].
In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
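The "series of primitive mathematical operations" view of conventional algorithmic differentiation can be made concrete with a minimal reverse-mode sketch: record which primitives produced each value, then walk the record backwards multiplying local derivatives. This is an illustration of the AD abstraction being contrasted above, not of libadjoint's equation-level abstraction.

```python
# Minimal reverse-mode AD over a tree of primitive operations. Each Var
# remembers its parents and the local derivative of the operation that
# produced it; backward() sums derivative contributions over all paths.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def backward(node, upstream=1.0):
    """Accumulate d(output)/d(node) by walking the recorded operations."""
    node.grad += upstream
    for parent, local in node.parents:
        backward(parent, upstream * local)

x, y = Var(3.0), Var(2.0)
z = x * y + x            # z = x*y + x, so dz/dx = y + 1 and dz/dy = x
backward(z)
```

Equation-level abstractions like libadjoint's replace these scalar primitives with whole assemble/solve steps, which is why they compose with a finite-element code generator.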
Automated Flight Dynamics Product Generation for the EOS AM-1 Spacecraft
NASA Technical Reports Server (NTRS)
Matusow, Carla
1999-01-01
As part of NASA's Earth Science Enterprise, the Earth Observing System (EOS) AM-1 spacecraft is designed to monitor long-term, global, environmental changes. Because of the complexity of the AM-1 spacecraft, the mission operations center requires more than 80 distinct flight dynamics products (reports). To create these products, the AM-1 Flight Dynamics Team (FDT) will use a combination of modified commercial software packages (e.g., Analytical Graphics' Satellite ToolKit) and NASA-developed software applications. While providing the most cost-effective solution to meeting the mission requirements, the integration of these software applications raises several operational concerns: (1) Routine product generation requires knowledge of multiple applications executing on a variety of hardware platforms. (2) Generating products is a highly interactive process, requiring a user to interact with each application multiple times to generate each product. (3) Routine product generation requires several hours to complete. (4) User interaction with each application introduces the potential for errors, since users are required to manually enter filenames and input parameters as well as run applications in the correct sequence. In addition, generating products requires some level of flight dynamics expertise to determine the appropriate inputs and sequencing. To address these issues, the FDT developed an automation software tool called AutoProducts, which runs on a single hardware platform and provides all necessary coordination and communication among the various flight dynamics software applications. AutoProducts autonomously retrieves necessary files, sequences and executes applications with the correct input parameters, and delivers the final flight dynamics products to the appropriate customers. Although AutoProducts will normally generate pre-programmed sets of routine products, its graphical interface allows for easy configuration of customized and one-of-a-kind products.
Additionally, AutoProducts has been designed as a mission-independent tool and can easily be reconfigured to support other missions or incorporate new flight dynamics software packages. After the AM-1 launch, AutoProducts will run automatically at pre-determined time intervals. The AutoProducts tool addresses many of the concerns associated with flight dynamics product generation. Although AutoProducts required a significant development effort because of the complexity of the interfaces involved, its use will provide significant cost savings through reduced operator time and maximum product reliability. In addition, user satisfaction is significantly improved, and flight dynamics experts have more time to perform valuable analysis work. This paper describes the evolution of the AutoProducts tool, highlighting the cost savings and customer satisfaction resulting from its development. It also provides details about the tool, including its graphical interface and operational capabilities.
AutoBayes Program Synthesis System System Internals
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin
2011-01-01
This lecture combines the theoretical background of schema-based program synthesis with the hands-on study of a powerful, open-source program synthesis system (AutoBayes). Schema-based program synthesis is a popular approach to program synthesis. The lecture will provide an introduction to this topic and discuss how this technology can be used to generate customized algorithms. The synthesis of advanced numerical algorithms requires the availability of a powerful symbolic (algebra) system. Its task is to symbolically solve equations, simplify expressions, or symbolically calculate derivatives (among others) so that the synthesized algorithms become as efficient as possible. We will discuss the use and importance of the symbolic system for synthesis. Any synthesis system is a large and complex piece of code. In this lecture, we will study AutoBayes in detail. AutoBayes has been developed at NASA Ames and has been made open source. It takes a compact statistical specification and generates a customized data analysis algorithm (in C/C++) from it. AutoBayes is written in SWI Prolog and uses many concepts from rewriting, logic, functional, and symbolic programming. We will discuss the system architecture, the schema library, and the extensive support infrastructure. Practical hands-on experiments and exercises will enable the student to gain insight into a realistic program synthesis system and provide the knowledge to use, modify, and extend AutoBayes.
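The schema idea can be sketched in miniature: a schema is a code template with problem-specific holes, and synthesis fills the holes from the specification. The toy "schema" below emits C for a closed-form maximum-likelihood estimate (the sample mean); it is a stand-in for AutoBayes' actual Prolog schemas and symbolic solver, and all names here are invented for illustration.

```python
# Hedged sketch of schema-based synthesis: instantiate a parameterized code
# template with bindings derived from a statistical specification. The real
# system also checks applicability conditions and solves equations
# symbolically before emitting code.

MEAN_ESTIMATOR_SCHEMA = """\
double estimate_{param}(const double *data, int n) {{
    double acc = 0.0;
    for (int i = 0; i < n; i++) acc += data[i];
    return acc / n;   /* closed-form MLE for {param} under {model} */
}}
"""

def instantiate(schema, **bindings):
    """Fill a code schema with problem-specific bindings."""
    return schema.format(**bindings)

code = instantiate(MEAN_ESTIMATOR_SCHEMA, param="mu", model="gauss(mu, sigma)")
```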
2011-01-01
Background: Existing methods of predicting DNA-binding proteins use valuable features of physicochemical properties to design support vector machine (SVM)-based classifiers. Generally, the selection of physicochemical properties and the determination of their corresponding feature vectors rely mainly on known properties of the binding mechanism and the experience of designers. However, a troublesome problem for designers is that some different physicochemical properties have similar vectors representing the 20 amino acids, while some closely related physicochemical properties have dissimilar vectors. Results: This study proposes a systematic approach (named Auto-IDPCPs) to automatically identify a set of physicochemical and biochemical properties in the AAindex database to design SVM-based classifiers for predicting and analyzing DNA-binding domains/proteins. Auto-IDPCPs consists of 1) clustering the 531 amino acid indices in AAindex into 20 clusters using a fuzzy c-means algorithm, 2) utilizing an efficient genetic-algorithm-based optimization method, IBCGA, to select an informative feature set of size m to represent sequences, and 3) analyzing the selected features to identify related physicochemical properties which may affect the binding mechanism of DNA-binding domains/proteins. The proposed Auto-IDPCPs identified m=22 features of properties belonging to five clusters for predicting DNA-binding domains with a five-fold cross-validation accuracy of 87.12%, which is promising compared with the accuracy of 86.62% of the existing method PSSM-400. For predicting DNA-binding sequences, an accuracy of 75.50% was obtained using m=28 features, where PSSM-400 has an accuracy of 74.22%. Applied to an independent test data set of DNA-binding domains, Auto-IDPCPs and PSSM-400 have accuracies of 80.73% and 82.81%, respectively.
Some typical physicochemical properties discovered are hydrophobicity, secondary structure, charge, solvent accessibility, polarity, flexibility, normalized van der Waals volume, pK (pK-C, pK-N, pK-COOH and pK-a(RCOOH)), etc. Conclusions: The proposed approach Auto-IDPCPs would help designers investigate informative physicochemical and biochemical properties by considering both prediction accuracy and analysis of the binding mechanism simultaneously. The approach is also applicable to predicting and analyzing other protein functions from sequences. PMID:21342579
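Step 1 of the pipeline above, fuzzy c-means clustering, can be illustrated in miniature. Unlike hard k-means, every point receives a membership degree in every cluster and centers are membership-weighted means. The 1-D toy data and parameters below are our own assumptions, not the 531 AAindex vectors.

```python
# Illustrative fuzzy c-means on 1-D data (fuzzifier m=2). Membership of
# point x in cluster j follows the standard inverse-distance formula
# u_j = 1 / sum_k (d_j/d_k)^(2/(m-1)); centers are then updated as
# membership-weighted means.

def fuzzy_c_means(points, centers, iters=50, m=2.0):
    for _ in range(iters):
        u = []
        for x in points:
            dists = [max(abs(x - c), 1e-12) for c in centers]
            memb = [1.0 / sum((dj / dk) ** (2.0 / (m - 1.0)) for dk in dists)
                    for dj in dists]
            u.append(memb)
        centers = [
            sum((u[i][j] ** m) * points[i] for i in range(len(points))) /
            sum(u[i][j] ** m for i in range(len(points)))
            for j in range(len(centers))
        ]
    return centers, u

points = [0.0, 0.1, 0.2, 9.8, 9.9, 10.0]   # two well-separated groups
centers, memberships = fuzzy_c_means(points, centers=[1.0, 9.0])
```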
Arabidopsis TNL-WRKY domain receptor RRS1 contributes to temperature-conditioned RPS4 auto-immunity
Heidrich, Katharina; Tsuda, Kenichi; Blanvillain-Baufumé, Servane; Wirthmueller, Lennart; Bautor, Jaqueline; Parker, Jane E.
2013-01-01
In plant effector-triggered immunity (ETI), intracellular nucleotide binding-leucine rich repeat (NLR) receptors are activated by specific pathogen effectors. The Arabidopsis TIR (Toll-Interleukin-1 receptor domain)-NLR (denoted TNL) gene pair, RPS4 and RRS1, confers resistance to Pseudomonas syringae pv tomato (Pst) strain DC3000 expressing the Type III-secreted effector, AvrRps4. Nuclear accumulation of AvrRps4, RPS4, and the TNL resistance regulator EDS1 is necessary for ETI. RRS1 possesses a C-terminal "WRKY" transcription factor DNA binding domain, suggesting that important RPS4/RRS1 recognition and/or resistance signaling events occur at the nuclear chromatin. In Arabidopsis accession Ws-0, the RPS4Ws/RRS1Ws allelic pair governs resistance to Pst/AvrRps4 accompanied by host programmed cell death (pcd). In accession Col-0, RPS4Col/RRS1Col effectively limits Pst/AvrRps4 growth without pcd. Constitutive expression of HA-StrepII tagged RPS4Col (in a 35S:RPS4-HS line) confers temperature-conditioned, EDS1-dependent auto-immunity. Here we show that a high (28°C, non-permissive) to moderate (19°C, permissive) temperature shift of 35S:RPS4-HS plants can be used to follow defense-related transcriptional dynamics without a pathogen effector trigger. By comparing responses of 35S:RPS4-HS with 35S:RPS4-HS rrs1-11 and 35S:RPS4-HS eds1-2 mutants, we establish that RPS4Col auto-immunity depends entirely on EDS1 and partially on RRS1Col. Examination of gene expression microarray data over 24 h after the temperature shift reveals a mainly quantitative RRS1Col contribution to the up- or down-regulation of a small subset of RPS4Col-reprogrammed, EDS1-dependent genes. We find significant over-representation of WRKY transcription factor binding W-box cis-elements within the promoters of these genes. Our data show that RRS1Col contributes to temperature-conditioned RPS4Col auto-immunity and are consistent with activated RPS4Col engaging RRS1Col for resistance signaling. PMID:24146667
Grimberg, F; Banegas, G; Chiacchio, L; Zmener, O
2002-07-01
The aim of this study was to assess the clinical performance of a cordless handpiece with a built-in apex locator - the Tri Auto ZX - designed for root canal preparation with nickel-titanium rotary files. Twenty-five human maxillary incisor and canine teeth with mature apices, scheduled for extraction, were selected for the study. Informed written consent was obtained from each patient before treatment. After administration of local anaesthesia, the teeth were isolated and the pulp cavities accessed. The Tri Auto ZX, along with a size 15 K-file, was used in its electronic apex-locating function according to the manufacturer's recommendations. A periapical radiograph with the file at the electronically determined constriction was taken, the file removed and the measurement registered as the electronic length (EL). To test the auto-reverse function, a size 20 ProFile .04 taper NiTi rotary instrument was mounted in the handpiece. The point for the auto apical reverse function was preset on the panel at the 0.5 mm level. When the file was introduced into the canal and reached the predetermined level, it automatically stopped and rotated in the opposite direction. A reference point was marked and this measurement was registered as the auto-reverse length (ARL). All measurements were made twice by two different investigators. Teeth were then extracted and immersed in a 20% formalin solution for 48 h. After fixation, a size 15 file was inserted into the canal to measure the actual root canal length from the same reference point used with the Tri Auto ZX to the apical foramen, as seen under the stereomicroscope. When the file tip was visible at the anatomical end of the canal it was withdrawn 0.5 mm and this measurement was registered as the actual length (AL). All measurements were expressed in mm and the measuring accuracy was set to 0.5 mm.
The significance of the mean differences between EL and ARL and between EL and AL measurements was evaluated at the 5% significance level. EL measurements coincided with ARL in all instances. EL and ARL coincided with AL in 10 canals (40%); in the remaining 15 canals (60%), the AL measurements were longer than EL and ARL (+0.5 mm) in 14 instances and shorter (-0.5 mm) in one. Overall, the AL was longer than the EL or ARL, the mean difference being -0.23 ± 0.32 mm (P < 0.05). It was concluded that the Tri Auto ZX was useful and reliable, and that its measurements protected against overpreparation.
Design and implementation of online automatic judging system
NASA Astrophysics Data System (ADS)
Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng
2017-06-01
To address the low efficiency and poor reliability of manual judging in programming training and competitions, an Online Automatic Judging (OAJ) system was designed. The OAJ system, comprising a sandbox judging side and a Web side, automatically compiles and runs the submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system executes submissions in a sandbox, ensuring system safety. It uses thread pools to run tests in parallel and adopts database optimizations, such as horizontal table partitioning, to improve performance and resource utilization. The test results show that the system has high performance, reliability, stability, and extensibility.
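The compile-run-compare loop at the core of such a judging side can be sketched in a few lines. This is a minimal illustration only, running the submission as a plain subprocess under a time limit; the actual OAJ system executes submissions inside a sandbox with stricter resource and system-call isolation, and dispatches runs through a thread pool.

```python
import subprocess
import sys

def judge(source_path, test_cases, time_limit=2):
    """Run a submitted Python script against (stdin, expected_stdout) pairs.

    Simplified online-judge loop: a production system would also enforce
    memory limits and run the submission in an isolated sandbox.
    """
    passed = 0
    for stdin_data, expected in test_cases:
        try:
            run = subprocess.run([sys.executable, source_path],
                                 input=stdin_data, capture_output=True,
                                 text=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            return {"verdict": "Time Limit Exceeded", "passed": passed}
        if run.returncode != 0:
            return {"verdict": "Runtime Error", "passed": passed}
        if run.stdout.strip() != expected.strip():
            return {"verdict": "Wrong Answer", "passed": passed}
        passed += 1
    return {"verdict": "Accepted", "passed": passed}
```

A real judge would additionally record per-case scores for the evaluation report described above.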
Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems.
Zerrouki, Taha; Balla, Amar
2017-04-01
Diacritics are often omitted from Arabic script. This is a handicap for new learners reading Arabic, for text-to-speech conversion systems, and for reading and semantic analysis of Arabic texts. Automatic diacritization systems are the best way to handle this issue, but building them requires resources, namely diacritized texts, on which to train and evaluate such systems. In this paper, we describe our corpus of Arabic diacritized texts, called Tashkeela. It can be used as a linguistic resource for natural language processing tasks such as automatic diacritization, disambiguation, and feature and data extraction. The corpus is freely available; it contains 75 million fully vocalized words, drawn mainly from 97 books of classical and modern Arabic, collected from manually vocalized texts by web crawling.
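Because the corpus is fully vocalized, training pairs for a diacritization system can be produced by stripping the diacritics and keeping the original as the target. A minimal sketch, assuming the standard Unicode block for Arabic short-vowel marks (U+064B through U+0652); Tashkeela's own tooling is not reproduced here.

```python
# Arabic diacritic (tashkeel) code points, fathatan through sukun: U+064B-U+0652.
TASHKEEL = {chr(c) for c in range(0x064B, 0x0653)}

def strip_tashkeel(text):
    """Remove short-vowel diacritics, keeping base letters intact."""
    return "".join(ch for ch in text if ch not in TASHKEEL)

def make_training_pair(vocalized):
    """Return an (input, target) pair for a diacritization model."""
    return strip_tashkeel(vocalized), vocalized
```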
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface so the operator could adjust the camera convergence angle. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
UMLS content views appropriate for NLP processing of the biomedical literature vs. clinical text.
Demner-Fushman, Dina; Mork, James G; Shooshan, Sonya E; Aronson, Alan R
2010-08-01
Identification of medical terms in free text is a first step in such Natural Language Processing (NLP) tasks as automatic indexing of biomedical literature and extraction of patients' problem lists from the text of clinical notes. Many tools developed to perform these tasks use biomedical knowledge encoded in the Unified Medical Language System (UMLS) Metathesaurus. We continue our exploration of automatic approaches to creation of subsets (UMLS content views) which can support NLP processing of either the biomedical literature or clinical text. We found that suppression of highly ambiguous terms in the conservative AutoFilter content view can partially replace manual filtering for literature applications, and suppression of two character mappings in the same content view achieves 89.5% precision at 78.6% recall for clinical applications. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Kim, Young-Suk; Al Otaiba, Stephanie; Puranik, Cynthia; Folsom, Jessica Sidler; Gruelich, Luana
2014-01-01
In the present study we examined the relation between alphabet knowledge fluency (letter names and sounds) and letter writing automaticity, and unique relations of letter writing automaticity and semantic knowledge (i.e., vocabulary) to word reading and spelling over and above code-related skills such as phonological awareness and alphabet…
Genetic and Environmental Contributions to the Development of Childhood Aggression
ERIC Educational Resources Information Center
Lubke, Gitta H.; McArtor, Daniel B.; Boomsma, Dorret I.; Bartels, Meike
2018-01-01
Longitudinal data from a large sample of twins participating in the Netherlands Twin Register (n = 42,827, age range 3-16) were analyzed to investigate the genetic and environmental contributions to childhood aggression. Genetic auto-regressive (simplex) models were used to assess whether the same genes are involved or whether new genes come into…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atkinson, P; Chen, Q
2016-06-15
Purpose: To assess the clinical efficacy of auto beam hold during prostate RapidArc delivery, triggered by fiducial localization on kV imaging with a Varian True Beam. Methods: Prostate patients with four gold fiducials were candidates in this study. Daily setup was accomplished by aligning to fiducials using orthogonal kV imaging. During RapidArc delivery, a kV image was automatically acquired with a momentary beam hold every 60 degrees of gantry rotation. The position of each fiducial was identified by a search algorithm and compared to a predetermined 1.4 cm diameter target area. Treatment continued if all the fiducials were within the target area. If any fiducial was outside the target area the beam hold was not released, and the operators determined if the patient needed re-alignment using the daily setup method. Results: Four patients were initially selected. For three patients, the auto beam hold performed seamlessly. In one instance, the system correctly identified misaligned fiducials, stopped treatment, and the patient was re-positioned. The fourth patient had a prosthetic hip which sometimes blocked the fiducials and caused the fiducial search algorithm to fail. The auto beam hold was disabled for this patient and the therapists manually monitored the fiducial positions during treatment. Average delivery time for a 2-arc fraction was increased by 59 seconds. Phantom studies indicated the dose discrepancy related to multiple beam holds is <0.1%. For a plan with 43 fractions, the additional imaging increased dose by an estimated 68 cGy. Conclusion: Automated intrafraction kV imaging can effectively perform auto beam holds due to patient movement, with the exception of prosthetic hip patients. The additional imaging dose and delivery time are clinically acceptable. It may be a cost-effective alternative to Calypso in RapidArc prostate patient delivery. Further study is warranted to explore its feasibility under various clinical conditions.
NASA Astrophysics Data System (ADS)
Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard
2013-10-01
The paper describes a concept for automatic firmware generation for reconfigurable measurement systems that use FPGA devices and measurement cards in the FMC standard. The following are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic construction of the memory map (address space), and management of the automatically generated firmware. Such solutions are required in many advanced measurement systems, like Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication.
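Automatic HDL generation of this kind typically boils down to deriving a register/address map and emitting parameterized source text from it. The toy generator below is purely illustrative (the module and register names are invented, and the emitted Verilog is a read-path stub, not a complete driver):

```python
def generate_register_map(module, registers, base=0x0000):
    """Emit a Verilog address-decode stub and the matching address map.

    `registers` is an ordered list of register names; each is assigned a
    consecutive 32-bit word address starting at `base`. The registers are
    assumed to be declared elsewhere in the generated design.
    """
    addr_map = {name: base + 4 * i for i, name in enumerate(registers)}
    lines = [f"module {module} (input [31:0] addr, output reg [31:0] data);"]
    lines.append("  always @(*) case (addr)")
    for name, addr in addr_map.items():
        lines.append(f"    32'h{addr:08X}: data = {name};")
    lines.append("    default: data = 32'h0;")
    lines.append("  endcase")
    lines.append("endmodule")
    return addr_map, "\n".join(lines)
```

The same pattern, derive an address map and then render templates, extends to communication interfaces and the backplane memory map described above.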
Miller, Andrew D
2015-02-01
A sense peptide can be defined as a peptide whose sequence is coded by the nucleotide sequence (read 5' → 3') of the sense (positive) strand of DNA. Conversely, an antisense (complementary) peptide is coded by the corresponding nucleotide sequence (read 5' → 3') of the antisense (negative) strand of DNA. Research has been accumulating steadily to suggest that sense peptides are capable of specific interactions with their corresponding antisense peptides. Unfortunately, although more and more examples of specific sense-antisense peptide interactions are emerging, the very idea of such interactions does not conform to standard biology dogma, and so there remains a sizeable challenge to lift this concept from being perceived as a peripheral phenomenon, if not worse, into part of the scientific mainstream. Specific interactions have now been exploited for the inhibition of a number of widely different protein-protein and protein-receptor interactions in vitro and in vivo. Further, antisense peptides have also been used to induce the production of antibodies targeted to specific receptors, or else the production of anti-idiotypic antibodies targeted against auto-antibodies. Such illustrations of utility would seem to suggest that observed sense-antisense peptide interactions are not just the consequence of a sequence of coincidental 'lucky hits'. Indeed, at the very least, one might conclude that sense-antisense peptide interactions represent a potentially new and different source of leads for drug discovery. But could there be more to come from studies in this area? Studies on the potential mechanism of sense-antisense peptide interactions suggest that interactions may be driven by amino acid residue interactions specified from the genetic code. 
If so, such specified amino acid residue interactions could form the basis of an even wider amino acid residue interaction code (a proteomic code) linking gene sequences to actual protein structure and function, even entire genomes to entire proteomes. The possibility that such a proteomic code exists is discussed, as are the potential implications for biology and pharmaceutical science were such a code to exist.
E.W. Fobes; R.W. Rowe
1968-01-01
A system for classifying wood-using industries and recording pertinent statistics for automatic data processing is described. Forms and coding instructions for recording data of primary processing plants are included.
Development, Integration and Testing of Automated Triggering Circuit for Hybrid DC Circuit Breaker
NASA Astrophysics Data System (ADS)
Kanabar, Deven; Roy, Swati; Dodiya, Chiragkumar; Pradhan, Subrata
2017-04-01
A novel hybrid DC circuit breaker, combining a mechanical switch and a static switch, provides arc-less current commutation into the dump resistor during a quench in superconducting magnet operation. The triggering of the mechanical and static switches can be automated, which effectively reduces the overall current commutation time of the hybrid DC circuit breaker and makes operation independent of the opening time of the mechanical switch. With this in view, a dedicated control circuit (auto-triggering circuit) has been developed that determines the timing and pulse duration for the mechanical and static switches from the operating parameters. This circuit was tested with dummy parameters and thereafter integrated with the actual test setup of the hybrid DC circuit breaker. This paper covers the conceptual design of the auto-triggering circuit, its control logic, and its operation. Test results of the hybrid DC circuit breaker using this circuit are also discussed.
Auto TeleCare -- understanding the failures and successes of small business in telehealth.
McMahon, David
2005-01-01
Auto TeleCare provided an automated daily telephone service for people living alone. The business used an interactive voice response (IVR) system to call clients at a set time each day. The clients were required to press a button on their telephone to listen to a message (e.g. joke of the day), thereby indicating that they were alright. If the client did not respond, staff would call the given list of contacts to check on the client's welfare. The service was first offered in December 2003 and there was considerable interest from clients and health-care groups. Although the technology was sophisticated, it was very simple for the clients to use. However, in the end it was the marketing and advertising costs that proved too great for the business. The number of clients required for commercial viability was calculated to be 3,000, and after nearly 15 months it was decided to close the business.
Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)
NASA Technical Reports Server (NTRS)
Schmalz, Tyler; Ryan, Jack
2011-01-01
The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique, resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight and maximize its contribution to fighter safety.
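The error analysis described here reduces to elementwise statistics over two elevation grids. A hedged sketch, with plain lists standing in for DTED tiles; the binary-tree tip-tilt codec itself is not reproduced:

```python
def error_stats(original, compressed):
    """Per-cell elevation error statistics between two equally sized grids.

    Grids are lists of rows of elevations (meters). Returns the maximum
    absolute error, the mean error (bias), and a histogram of signed errors.
    """
    errors = [c - o
              for row_o, row_c in zip(original, compressed)
              for o, c in zip(row_o, row_c)]
    histogram = {}
    for e in errors:
        histogram[e] = histogram.get(e, 0) + 1
    return {
        "max_abs": max(abs(e) for e in errors),
        "mean": sum(errors) / len(errors),
        "histogram": histogram,
    }
```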
O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...
1995-01-01
Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.
FAMA: An automatic code for stellar parameter and abundance determination
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2013-10-01
Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars needs the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely-used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented describing its approach to derive atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on the solar spectrum EWs that assess the method's dependency on the initial parameters and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38
SU-D-BRD-06: Automated Population-Based Planning for Whole Brain Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreibmann, E; Fox, T; Crocker, I
2014-06-01
Purpose: Treatment planning for whole brain radiation treatment is technically a simple process, but in practice it consumes valuable clinical time in repetitive and tedious tasks. This report presents a method that automatically segments the relevant target and normal tissues and creates a treatment plan in only a few minutes after patient simulation. Methods: Segmentation is performed automatically through morphological operations on the soft tissue. The treatment plan is generated by searching a database of previous cases for patients with similar anatomy. In this search, each database case is ranked in terms of similarity using a customized metric designed for sensitivity by including only geometrical changes that affect the dose distribution. The database case with the best match is automatically modified to replace the relevant patient information and isocenter position while maintaining the original beam and MLC settings. Results: Fifteen patients were used to validate the method. In each of these cases the anatomy was accurately segmented, with mean Dice coefficients of 0.970 ± 0.008 for the brain, 0.846 ± 0.009 for the eyes, and 0.672 ± 0.111 for the lens, as compared to clinical segmentations. Each case was then matched against a database of 70 validated treatment plans, and the best matching plan (termed the auto-plan) was compared retrospectively with the clinical plan in terms of brain coverage and maximum doses to critical structures. Maximum doses were reduced by up to 20.809 Gy for the left eye (mean 3.533), by 13.352 Gy (1.311) for the right eye, and by 27.471 Gy (4.856) and 25.218 Gy (6.315) for the left and right lens. Time from simulation to auto-plan was 3-4 minutes. Conclusion: Automated database-based matching is an alternative to classical treatment planning that improves quality while providing a cost-effective solution to planning by modifying previously validated plans to match a current patient's anatomy.
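The segmentation accuracy above is reported as the Dice coefficient, 2|A∩B|/(|A|+|B|), between the automatic and the clinical contour. A minimal sketch over voxel index sets; a clinical implementation would operate on binary masks:

```python
def dice(a, b):
    """Dice similarity coefficient between two sets of voxel indices."""
    if not a and not b:
        return 1.0  # both contours empty: treat as perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))
```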
SU-C-BRB-02: Automatic Planning as a Potential Strategy for Dose Escalation for Pancreas SBRT?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S; Zheng, D; Ma, R
Purpose: Stereotactic body radiation therapy (SBRT) has been suggested to provide high rates of local control for locally advanced pancreatic cancer. However, the close proximity of highly radiosensitive normal tissues makes the planning process labor-intensive and may impede further escalation of the prescription dose. The present study evaluates the potential of an automatic planning system as a dose escalation strategy. Methods: Ten pancreatic cancer patients treated with SBRT were studied retrospectively. SBRT was delivered over 5 consecutive fractions at 6-8 Gy/fraction. Two plans were generated by Pinnacle Auto-Planning with the original prescription and the escalated prescription, respectively. The escalated prescription adds 1 Gy/fraction to the original prescription. Manually created planning volumes were excluded from the optimization goals in order to assess planning efficiency and quality simultaneously. The critical organs in closest proximity were used to determine the plan normalization to ensure OAR sparing. Dosimetric parameters including D100 and conformity index (CI) were assessed. Results: Auto-planning directly generated acceptable plans for 70% of the cases without need for further improvement, and at most two more iterations were necessary for the rest of the cases. For the pancreas SBRT plans with the original prescription, auto-plans resulted in favorable target coverage and PTV conformity (D100 = 96.3% ± 1.48%; CI = 0.88 ± 0.06). For the plans with the escalated prescriptions, no significant target under-dosage was observed, and PTV conformity remained reasonable (D100 = 93.3% ± 3.8%, CI = 0.84 ± 0.05). Conclusion: Automatic planning, without a substantial human-intervention process, results in reasonable PTV coverage and conformity on the premise of adequate OAR sparing for pancreas SBRT plans with escalated prescriptions. 
The results highlight the potential of auto-planning as a dose escalation strategy for pancreas SBRT treatment planning. Further investigations with a larger number of patients are necessary. The project is partially supported by Philips Medical Systems.
Neurosurgical robotic arm drilling navigation system.
Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai
2017-09-01
The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was to be drilled-through. Three kinds of experiment were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.
Automatic Blocking Of QR and LU Factorizations for Locality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Q; Kennedy, K; You, H
2004-03-26
QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To perform these computations efficiently on modern computers, the factorization algorithms need to be blocked when operating on large matrices to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, automatically generating blocked versions of the computations offers further benefits, such as automatic adaptation of different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, using reference BLAS, ATLAS BLAS, and native BLAS specially tuned for the underlying machine architectures.
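The blocking transformation itself can be illustrated on a simpler kernel than QR or LU: a matrix multiply whose i/j/k loops are each split into a tile loop and an intra-tile loop, so that the working set of one tile fits in cache. A schematic sketch (real blocked factorizations also restructure pivoting and call tuned BLAS):

```python
def matmul_blocked(A, B, block=2):
    """Cache-blocked matrix multiply C = A * B on square list-of-list matrices.

    Each loop is split into a loop over tiles and a loop within a tile,
    the same restructuring a blocking compiler applies to QR/LU kernels.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):          # tile loops
        for jj in range(0, n, block):
            for kk in range(0, n, block):
                for i in range(ii, min(ii + block, n)):   # intra-tile loops
                    for j in range(jj, min(jj + block, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + block, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C
```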
Couvigny, Benoit; Kulakauskas, Saulius; Pons, Nicolas; Quinquis, Benoit; Abraham, Anne-Laure; Meylheuc, Thierry; Delorme, Christine; Renault, Pierre; Briandet, Romain; Lapaque, Nicolas; Guédon, Eric
2018-01-01
Biofilm formation is crucial for bacterial community development and host colonization by Streptococcus salivarius, a pioneer colonizer and commensal bacterium of the human gastrointestinal tract. This ability to form biofilms depends on bacterial adhesion to host surfaces, and on the intercellular aggregation contributing to biofilm cohesiveness. Many S. salivarius isolates auto-aggregate, an adhesion process mediated by cell surface proteins. To gain an insight into the genetic factors of S. salivarius that dictate host adhesion and biofilm formation, we developed a screening method, based on the differential sedimentation of bacteria in semi-liquid conditions according to their auto-aggregation capacity, which allowed us to identify twelve mutations affecting this auto-aggregation phenotype. Mutations targeted genes encoding (i) extracellular components, including the CshA surface-exposed protein, the extracellular BglB glucan-binding protein, the GtfE, GtfG and GtfH glycosyltransferases and enzymes responsible for synthesis of cell wall polysaccharides (CwpB, CwpK), (ii) proteins responsible for the extracellular localization of proteins, such as structural components of the accessory SecA2Y2 system (Asp1, Asp2, SecA2) and the SrtA sortase, and (iii) the LiaR transcriptional response regulator. These mutations also influenced biofilm architecture, revealing that similar cell-to-cell interactions govern assembly of auto-aggregates and biofilm formation. We found that BglB, CshA, GtfH and LiaR were specifically associated with bacterial auto-aggregation, whereas Asp1, Asp2, CwpB, CwpK, GtfE, GtfG, SecA2 and SrtA also contributed to adhesion to host cells and host-derived components, or to interactions with the human pathogen Fusobacterium nucleatum. 
Our study demonstrates that our screening method could also be used to identify genes implicated in the bacterial interactions of pathogens or probiotics, for which aggregation is either a virulence trait or an advantageous feature, respectively. PMID:29515553
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, both code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A
2015-01-01
To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center, capturing deaths in the Chelyabinsk, Kurgan, and Sverdlovsk regions since 1950. When registering deaths over such a long period, measures need to be in place to maintain quality and reduce the impact of individual coders as well as of quality changes in death certificates. To ensure uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding, and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common coding procedure showed good agreement, with 70-90% agreement for the three-digit ICD-9 rubrics at the end of the coding process. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
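The parallel-coding quality check reduces to measuring agreement on the three-digit rubric between the two coders. A minimal sketch (the record layout is illustrative; real certificates carry multiple causes per record):

```python
def rubric_agreement(coder_a, coder_b):
    """Fraction of records on which two coders assign the same three-digit
    ICD-9 rubric, taken as the code truncated to its first three characters.
    """
    assert len(coder_a) == len(coder_b)
    matches = sum(a[:3] == b[:3] for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)
```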
GNormPlus: An Integrative Approach for Tagging Genes, Gene Families, and Protein Domains
Lu, Zhiyong
2015-01-01
The automatic recognition of gene names and their associated database identifiers from biomedical text has been widely studied in recent years, as these tasks play an important role in many downstream text-mining applications. Despite significant previous research, only a small number of tools are publicly available and these tools are typically restricted to detecting only mention level gene names or only document level gene identifiers. In this work, we report GNormPlus: an end-to-end and open source system that handles both gene mention and identifier detection. We created a new corpus of 694 PubMed articles to support our development of GNormPlus, containing manual annotations for not only gene names and their identifiers, but also closely related concepts useful for gene name disambiguation, such as gene families and protein domains. GNormPlus integrates several advanced text-mining techniques, including SimConcept for resolving composite gene names. As a result, GNormPlus compares favorably to other state-of-the-art methods when evaluated on two widely used public benchmarking datasets, achieving 86.7% F1-score on the BioCreative II Gene Normalization task dataset and 50.1% F1-score on the BioCreative III Gene Normalization task dataset. The GNormPlus source code and its annotated corpus are freely available, and the results of applying GNormPlus to the entire PubMed are freely accessible through our web-based tool PubTator. PMID:26380306
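The F1 scores quoted above are the harmonic mean of precision and recall over tagged gene mentions. For reference, a minimal computation from true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for an entity-tagging evaluation."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```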
Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.
Arend, Isabel; Aisenberg, Daniela; Henik, Avishai
2016-10-01
In the Simon effect (SE), choice reactions are fast if the location of the stimulus and the response correspond when stimulus location is task-irrelevant; therefore, the SE reflects the automatic processing of space. Priming of social concepts was found to affect automatic processing in the Stroop effect. We investigated whether spatial coding measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) and another involving color perception - achromatopsia (ACHM). In two experiments the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters, but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing. Copyright © 2016. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Hou, Gene
1998-01-01
Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven, through many applications in fluid dynamics and structural mechanics, to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require substantial memory. This project applies an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirements, computational efficiency, and accuracy.
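ADIFOR itself works by Fortran source transformation, but the underlying idea of forward-mode automatic differentiation can be illustrated with a minimal dual-number sketch (an illustration of the principle, not ADIFOR's actual mechanism):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; the eps part carries d/dx."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1, then read the derivative back out.
    return f(Dual(x, 1.0)).dot

# d/dx (3x^2 + 2x + 1) at x = 2 is 6x + 2 = 14
print(derivative(lambda x: 3 * x * x + 2 * x + 1, 2.0))  # → 14.0
```

Every arithmetic operation propagates exact derivative information alongside the value, which is why automatic differentiation avoids the truncation error of finite differences, at the cost of extra computation and memory per operation.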
Peng, Fei; Zhou, Xiao-Dong; Zhao, Kun; Wu, Zhi-Bo; Yang, Li-Zhong
2015-01-01
In this work, the effect of seven different sample orientations from 0° to 90° on pilot and non-pilot ignition of PMMA (poly(methyl methacrylate)) exposed to radiation has been studied with experimental and numerical methods. Some new and significant conclusions are drawn from the study, including a U-shaped curve of ignition time and critical mass flux as the sample angle increases under piloted ignition conditions. In auto-ignition, however, the ignition time and critical mass flux increase with sample angle α. Furthermore, a computational fluid dynamics model has been built based on the Fire Dynamics Simulator (FDS6) code to investigate the mechanisms controlling the dependence of PMMA ignition on sample orientation under external radiant heating. The theoretical analysis and modeling results indicate that the decrease of total incident heat flux at the sample surface plays the dominant role during auto-ignition, whereas the volatile gas flow has greater influence under piloted ignition conditions. PMID:28793421
Generating Customized Verifiers for Automatically Generated Code
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2008-01-01
Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.
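The schema idea, patterns that match generator-specific code idioms and emit the corresponding annotations, can be caricatured in a few lines. This is a hypothetical sketch with an invented pattern and annotation, not the system's actual schema language:

```python
import re

# Hypothetical schema: after each assignment whose right-hand side is a
# sqrt(...) call, weave in a Hoare-style annotation (here, non-negativity).
SCHEMAS = [
    (re.compile(r"(\w+)\s*=\s*sqrt\(.+\);"),
     lambda m: "// assert " + m.group(1) + " >= 0  (sqrt postcondition)"),
]

def weave(code_lines):
    """Insert an annotation after every line matching a schema pattern."""
    annotated = []
    for line in code_lines:
        annotated.append(line)
        for pattern, make_annotation in SCHEMAS:
            m = pattern.search(line)
            if m:
                annotated.append(make_annotation(m))
    return annotated

generated = ["norm = sqrt(x*x + y*y);", "z = norm + 1.0;"]
print(weave(generated)[1])  # → // assert norm >= 0  (sqrt postcondition)
```

The compiler described above plays the role of generating such pattern/annotation pairs from high-level declarative schemas, so the developer never writes the low-level term manipulation by hand.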
Yeap, P L; Noble, D J; Harrison, K; Bates, A M; Burnet, N G; Jena, R; Romanchikova, M; Sutcliffe, M P F; Thomas, S J; Barnett, G C; Benson, R J; Jefferies, S J; Parker, M A
2017-07-12
To determine delivered dose to the spinal cord, a technique has been developed to propagate manual contours from kilovoltage computed-tomography (kVCT) scans for treatment planning to megavoltage computed-tomography (MVCT) guidance scans. The technique uses the Elastix software to perform intensity-based deformable image registration of each kVCT scan to the associated MVCT scans. The registration transform is then applied to contours of the spinal cord drawn manually on the kVCT scan, to obtain contour positions on the MVCT scans. Different registration strategies have been investigated, with performance evaluated by comparing the resulting auto-contours with manual contours, drawn by oncologists. The comparison metrics include the conformity index (CI), and the distance between centres (DBC). With optimised registration, auto-contours generally agree well with manual contours. Considering all 30 MVCT scans for each of three patients, the median CI is 0.759 ± 0.003, and the median DBC is (0.87 ± 0.01) mm. An intra-observer comparison for the same scans gives a median CI of 0.820 ± 0.002 and a DBC of (0.64 ± 0.01) mm. Good levels of conformity are also obtained when auto-contours are compared with manual contours from one observer for a single MVCT scan for each of 30 patients, and when they are compared with manual contours from six observers for two MVCT scans for each of three patients. Using the auto-contours to estimate organ position at treatment time, a preliminary study of 33 patients who underwent radiotherapy for head-and-neck cancers indicates good agreement between planned and delivered dose to the spinal cord.
A New Tool for Classifying Small Solar System Objects
NASA Astrophysics Data System (ADS)
Desfosses, Ryan; Arel, D.; Walker, M. E.; Ziffer, J.; Harvell, T.; Campins, H.; Fernandez, Y. R.
2011-05-01
An artificial intelligence program, AutoClass, developed by NASA's Artificial Intelligence Branch, uses Bayesian classification theory to automatically choose the most probable classification distribution to describe a dataset. To investigate its usefulness to the planetary science community, we tested its ability to reproduce the taxonomic classes defined by Tholen and Barucci (1989). Of the 406 asteroids from the Eight Color Asteroid Survey (ECAS) chosen for our test, 346 were firmly classified, and all but 3 (<1%) were classified by AutoClass as they had been in the previous classification system (Walker et al., 2011). We are now applying it to larger datasets to improve the taxonomy of currently unclassified objects. Having demonstrated AutoClass's ability to recreate existing classifications effectively, we extended this work to investigations of albedo-based classification systems. To determine how predictive albedo can be, we used data from the Infrared Astronomical Satellite (IRAS) database in conjunction with the large Sloan Digital Sky Survey (SDSS), which contains color and position data for over 200,000 classified and unclassified asteroids (Ivesic et al., 2001). To judge our success we compared our results with a similar approach to classifying objects using IRAS albedo and asteroid color by Tedesco et al. (1989). Understanding the distribution of the taxonomic classes is important to understanding the history and evolution of our Solar System. AutoClass's success in categorizing ECAS, IRAS and SDSS asteroidal data highlights its potential to scan large domains for natural classes in small Solar System objects. Based upon our AutoClass results, we intend to make testable predictions about asteroids observed with the Wide-field Infrared Survey Explorer (WISE).
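At its core, Bayesian classification assigns an object to the class with the highest posterior probability, prior times likelihood. A toy sketch with invented class priors and one-dimensional Gaussian color models (AutoClass's real models are multivariate and learned from the data) might look like:

```python
import math

# Invented single-color Gaussian models and priors for three toy classes.
classes = {
    "C": {"prior": 0.5, "mean": 0.4, "sd": 0.10},
    "S": {"prior": 0.4, "mean": 0.8, "sd": 0.10},
    "V": {"prior": 0.1, "mean": 1.2, "sd": 0.15},
}

def gauss(x, mean, sd):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classify(color):
    # Bayes' rule: posterior P(class | color) ∝ prior × likelihood.
    scores = {c: m["prior"] * gauss(color, m["mean"], m["sd"])
              for c, m in classes.items()}
    total = sum(scores.values())
    posterior = {c: s / total for c, s in scores.items()}
    return max(posterior, key=posterior.get), posterior

label, posterior = classify(0.45)
print(label)  # → C
```

A color index of 0.45 lies closest to the invented "C" model, so its posterior dominates; AutoClass additionally searches over the number of classes themselves, keeping the most probable overall partition.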
NASA Astrophysics Data System (ADS)
Yeap, P. L.; Noble, D. J.; Harrison, K.; Bates, A. M.; Burnet, N. G.; Jena, R.; Romanchikova, M.; Sutcliffe, M. P. F.; Thomas, S. J.; Barnett, G. C.; Benson, R. J.; Jefferies, S. J.; Parker, M. A.
2017-08-01
To determine delivered dose to the spinal cord, a technique has been developed to propagate manual contours from kilovoltage computed-tomography (kVCT) scans for treatment planning to megavoltage computed-tomography (MVCT) guidance scans. The technique uses the Elastix software to perform intensity-based deformable image registration of each kVCT scan to the associated MVCT scans. The registration transform is then applied to contours of the spinal cord drawn manually on the kVCT scan, to obtain contour positions on the MVCT scans. Different registration strategies have been investigated, with performance evaluated by comparing the resulting auto-contours with manual contours, drawn by oncologists. The comparison metrics include the conformity index (CI), and the distance between centres (DBC). With optimised registration, auto-contours generally agree well with manual contours. Considering all 30 MVCT scans for each of three patients, the median CI is 0.759 ± 0.003, and the median DBC is (0.87 ± 0.01) mm. An intra-observer comparison for the same scans gives a median CI of 0.820 ± 0.002 and a DBC of (0.64 ± 0.01) mm. Good levels of conformity are also obtained when auto-contours are compared with manual contours from one observer for a single MVCT scan for each of 30 patients, and when they are compared with manual contours from six observers for two MVCT scans for each of three patients. Using the auto-contours to estimate organ position at treatment time, a preliminary study of 33 patients who underwent radiotherapy for head-and-neck cancers indicates good agreement between planned and delivered dose to the spinal cord.
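The two comparison metrics are straightforward to compute for contours represented as voxel sets. The sketch below uses the common intersection-over-union definition of the conformity index, which may differ in detail from the paper's exact definition:

```python
def conformity_index(a, b):
    """CI = |A ∩ B| / |A ∪ B| for two contours given as sets of voxels."""
    return len(a & b) / len(a | b)

def centre(voxels):
    """Centroid of a voxel set."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))

def distance_between_centres(a, b):
    ca, cb = centre(a), centre(b)
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

# Toy contours: a 2x2x2 block and the same block shifted by one voxel in z.
manual = {(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)}
auto = {(x, y, z + 1) for (x, y, z) in manual}
print(round(conformity_index(manual, auto), 3),
      distance_between_centres(manual, auto))  # → 0.333 1.0
```

The shifted block overlaps its original in 4 of 12 voxels (CI = 1/3), and its centroid moves exactly one voxel (DBC = 1.0), illustrating how the two metrics capture complementary aspects of contour agreement.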
The Evolution and Expression Pattern of Human Overlapping lncRNA and Protein-coding Gene Pairs.
Ning, Qianqian; Li, Yixue; Wang, Zhen; Zhou, Songwen; Sun, Hong; Yu, Guangjun
2017-03-27
A long non-coding RNA overlapping with a protein-coding gene (an lncRNA-coding pair) is a special type of overlapping gene pair. Protein-coding overlapping genes have been well studied, and increasing attention has been paid to lncRNAs. By studying lncRNA-coding pairs in the human genome, we showed that lncRNA-coding pairs were more likely to be generated by overprinting, and that genes retained in lncRNA-coding pairs were given higher priority than non-overlapping genes. Moreover, the preference for overlapping configurations preserved during evolution depended on the origin of the lncRNA-coding pairs. Further investigation showed that the promotion of splicing of embedded protein-coding partners by lncRNAs was a unilateral interaction, whereas the improvement of gene expression by an overlapping partner was bidirectional, and the effect decreased with increasing evolutionary age of the genes. Additionally, the expression of lncRNA-coding pairs showed an overall positive correlation, and the expression correlation was associated with their overlapping configurations, local genomic environment, and the evolutionary age of the genes. A comparison of the expression correlation of lncRNA-coding pairs between normal and cancer samples found that lineage-specific pairs that include old protein-coding genes may play an important role in tumorigenesis. This work presents a systematic and comprehensive view of the evolution and expression patterns of human lncRNA-coding pairs.
Probabilistic terrain models from waveform airborne LiDAR: AutoProbaDTM project results
NASA Astrophysics Data System (ADS)
Jalobeanu, A.; Goncalves, G. R.
2012-12-01
The main objective of the AutoProbaDTM project was to develop new methods for automated probabilistic topographic map production using the latest LiDAR scanners. It included algorithmic development, implementation and validation over a 200 km2 test area in continental Portugal, representing roughly 100 GB of raw data and half a billion waveforms. We aimed to generate digital terrain models automatically, including ground topography as well as uncertainty maps, using Bayesian inference for model estimation and error propagation, and approaches based on image processing. Here we present the results of the completed project (methodological developments and processing results from the test dataset). In June 2011, the test data were acquired in central Portugal, over an area of geomorphological and ecological interest, using a Riegl LMS-Q680i sensor. We managed to survey 70% of the test area at a satisfactory sampling rate, with the angular spacing matching the laser beam divergence and the ground spacing nearly equal to the footprint (almost 4 pts/m2 for a 50 cm footprint at 1500 m AGL). This is crucial for correct processing, as aliasing artifacts are significantly reduced. Because the data were delivered in a proprietary binary format, reverse engineering was required before we could read the waveforms and the essential parameters. A robust waveform processing method has been implemented and tested, and georeferencing and geometric computations have been coded. Fast gridding and interpolation techniques have been developed. Validation is nearly complete, as are geometric calibration, IMU error correction, full error propagation and large-scale DEM reconstruction. A probabilistic processing software package has been implemented, and code optimization is in progress.
This package includes new boresight calibration procedures, robust peak extraction modules, DEM gridding and interpolation methods, and means to visualize the produced uncertain surfaces (topography and accuracy map). Vegetation filtering for bare ground extraction has been left aside, and we wish to explore this research area in the future. A thorough validation of the new techniques and computed models has been conducted, using large numbers of ground control points (GCP) acquired with GPS, evenly distributed and classified according to ground cover and terrain characteristics. More than 16,000 GCP have been acquired during field work. The results are now freely accessible online through a web map service (GeoServer) thus allowing users to visualize data interactively without having to download the full processed dataset.
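The robust peak extraction step can be sketched as a threshold-gated local-maximum search over a digitized return waveform; the real modules are considerably more elaborate (e.g. probabilistic pulse decomposition), and the waveform below is synthetic:

```python
def extract_peaks(waveform, threshold):
    """Return (index, amplitude) of local maxima above a noise threshold."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        rising = waveform[i - 1] < waveform[i]
        falling = waveform[i] >= waveform[i + 1]
        if waveform[i] >= threshold and rising and falling:
            peaks.append((i, waveform[i]))
    return peaks

# Synthetic two-return waveform: a canopy echo followed by a ground echo.
wf = [2, 3, 8, 15, 9, 4, 3, 5, 12, 20, 11, 4, 2]
print(extract_peaks(wf, threshold=10))  # → [(3, 15), (9, 20)]
```

Each detected peak corresponds to a candidate surface return; converting peak sample indices to ranges, and ranges to georeferenced points, is where the boresight calibration and geometric computations above come in.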
Zeeli, T; Padalon-Brauch, G; Ellenbogen, E; Gat, A; Sarig, O; Sprecher, E
2015-06-01
Pyogenic sterile arthritis, pyoderma gangrenosum and acne (PAPA) syndrome is a rare hereditary, autosomal dominant, auto-inflammatory disease caused by mutations in the PSTPIP1 gene, which encodes proline-serine-threonine phosphatase interacting protein 1. The fact that PSTPIP1 is involved in immune regulation provides a rationale for treatment of this rare disease with interleukin (IL)-1 signalling blocking agents. We investigated a 33-year-old man with a long-standing history of ulcerative colitis, severe acne and recurrent skin ulcerations, and a 3-year history of a recalcitrant pustular rash. We used direct sequencing to search for mutations in the PSTPIP1 gene. Examination of biopsies obtained from pustules and skin ulcers revealed folliculitis and ulceration with a diffuse neutrophilic dermal infiltrate, consistent with a diagnosis of pyoderma gangrenosum. Because of the known association of acne and pyoderma gangrenosum in PAPA syndrome, we determined the entire coding sequence of the PSTPIP1 gene, and identified a hitherto unreported heterozygous mutation predicted to alter a highly conserved residue (p.G403R) and to be damaging to the protein function. Based on this finding, we initiated treatment with a human IL-1 receptor antagonist, anakinra, which led to a dramatic improvement in the patient's condition. We describe a novel mutation in PSTPIP1 resulting in pyoderma gangrenosum, acne and ulcerative colitis. This novel constellation of clinical manifestations, which we term 'PAC syndrome', suggests the need to regroup all PSTPIP1-associated phenotypes under one aetiological group. © 2015 British Association of Dermatologists.
Use Them ... or Lose Them? The Case for and against Using QR Codes
ERIC Educational Resources Information Center
Cunningham, Chuck; Dull, Cassie
2011-01-01
A quick-response (QR) code is a two-dimensional, black-and-white square barcode and links directly to a URL of one's choice. When the code is scanned with a smartphone, it will automatically redirect the user to the designated URL. QR codes are popping up everywhere--billboards, magazines, posters, shop windows, TVs, computer screens, and more.…
Automatic hammering of nano-patterns on special polymer film by using a vibrating AFM tip
2012-01-01
Complicated nano-patterns with linewidths below 18 nm can be automatically hammered at high speed by using an atomic force microscopy (AFM) tip in tapping mode. In this study, the sample was a thin poly(styrene-ethylene/butylene-styrene) (SEBS) block copolymer film with hexagonal spherical microstructures. An ordinary silicon tip was used as a nano-hammer, and the entire hammering process was controlled by a computer program. Experimental results demonstrate that such structure-tailored thin films enable AFM tip hammering to be performed on their surfaces. Both imprinted and embossed nano-patterns can be generated by using a vibrating tip with a larger tapping load and a predefined program to control the route of tip movement as it passes over the sample's surface. The fabrication of the structure-tailored SEBS film and the theory of auto-hammering patterns are presented in detail. PMID:22889045
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.
Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi
2016-08-30
This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.
The changing scene in Hashimoto's disease: a review.
Stuart, Angus
2011-09-01
The review briefly describes the evolution of hypotheses about chronic thyroiditis: the escape-of-colloid hypothesis, basement membrane destruction, the auto-immune theory and the role of dysregulatory genes. Copyright © 2011 Elsevier Ltd. All rights reserved.
Automatically Preparing Safe SQL Queries
NASA Astrophysics Data System (ADS)
Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.
We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
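The safety gap being closed can be seen in a few lines: a parameterized (prepared) statement treats attacker-controlled input as pure data rather than as query text. A minimal sqlite3 illustration of the before/after contrast (not the paper's transformation tool itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

malicious = "alice' OR '1'='1"

# Unsafe: concatenation lets the input rewrite the query and match every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the parameterized (prepared) form treats the input as pure data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # → 2 0
```

The transformation described above automates exactly this rewrite across a legacy code base: every dynamically concatenated query is replaced by its prepared-statement equivalent, with the user-supplied fragments bound as parameters.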
Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Carter, Jonathan; Oliker, Leonid
2009-04-10
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
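The essence of search-based auto-tuning is to generate functionally equivalent code variants, time each on the target machine, and keep the fastest. A toy sketch of that selection loop (nothing like the actual LBMHD code generator) might be:

```python
import timeit

def variant_loop(data):
    # Candidate 1: explicit accumulation loop.
    total = 0.0
    for x in data:
        total += x * x
    return total

def variant_builtin(data):
    # Candidate 2: generator expression over the built-in sum.
    return sum(x * x for x in data)

data = list(range(10000))
candidates = {"loop": variant_loop, "builtin": variant_builtin}

# The "tuner": time every candidate on this machine and keep the fastest.
timings = {name: timeit.timeit(lambda f=f: f(data), number=20)
           for name, f in candidates.items()}
best = min(timings, key=timings.get)
print(best, variant_loop(data) == variant_builtin(data))
```

Because the winner depends on the hardware and runtime, the search is rerun per platform; a real tuner explores a much larger space (blocking factors, vectorization, data layouts) and verifies that every variant computes the same result, as the equality check here does.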
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
Automatic mathematical modeling for real time simulation system
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1988-01-01
A methodology for automatic mathematical modeling and simulation-model generation is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment in which engineers can design, maintain, and verify their models, and to automatically convert the mathematical models into conventional code for numerical computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a friendly, well-organized environment in which engineers can build a knowledge base of basic equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to generate the model and its FORTRAN code automatically. A future goal, currently under development, is to transfer the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with real data profiles. The use of artificial intelligence techniques has shown that the simulation modeling process can be simplified.
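The "mathematical model to conventional code" step can be caricatured as tree-to-source emission: a symbolic expression stored in the knowledge base is walked and printed as an assignment statement. The expression below is an illustrative rocket-thrust formula, not taken from the actual SSME knowledge base:

```python
# Minimal sketch of emitting conventional code from a symbolic model:
# expressions are ("op", left, right) tuples, variables are strings.
def emit(node):
    if isinstance(node, (int, float)):
        return repr(node)
    if isinstance(node, str):      # a variable name
        return node
    op, left, right = node
    return "(" + emit(left) + " " + op + " " + emit(right) + ")"

# Illustrative thrust equation: thrust = mdot*ve + (pe - pa)*ae
model = ("+", ("*", "mdot", "ve"), ("*", ("-", "pe", "pa"), "ae"))
print("      THRUST = " + emit(model))
```

A real generator would additionally order the assignments by data dependence, declare variables, and wrap the result in FORTRAN subroutine boilerplate, but the core translation is this recursive walk over the symbolic model.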
EndoU is a novel regulator of AICD during peripheral B cell selection
Poe, Jonathan C.; Kountikov, Evgueni I.; Lykken, Jacquelyn M.; Natarajan, Abirami; Marchuk, Douglas A.
2014-01-01
Balanced transmembrane signals maintain a competent peripheral B cell pool limited in self-reactive B cells that may produce pathogenic autoantibodies. To identify molecules regulating peripheral B cell survival and tolerance to self-antigens (Ags), a gene modifier screen was performed with B cells from CD22-deficient C57BL/6 (CD22−/−[B6]) mice that undergo activation-induced cell death (AICD) and fail to up-regulate c-Myc expression after B cell Ag receptor ligation. Likewise, lysozyme auto-Ag–specific B cells in IgTg hen egg lysozyme (HEL) transgenic mice inhabit the spleen but undergo AICD after auto-Ag encounter. This gene modifier screen identified EndoU, a single-stranded RNA-binding protein of ancient origin, as a major regulator of B cell survival in both models. EndoU gene disruption prevents AICD and normalizes c-Myc expression. These findings reveal that EndoU is a critical regulator of an unexpected and novel RNA-dependent pathway controlling peripheral B cell survival and Ag responsiveness that may contribute to peripheral B cell tolerance. PMID:24344237
EndoU is a novel regulator of AICD during peripheral B cell selection.
Poe, Jonathan C; Kountikov, Evgueni I; Lykken, Jacquelyn M; Natarajan, Abirami; Marchuk, Douglas A; Tedder, Thomas F
2014-01-13
Balanced transmembrane signals maintain a competent peripheral B cell pool limited in self-reactive B cells that may produce pathogenic autoantibodies. To identify molecules regulating peripheral B cell survival and tolerance to self-antigens (Ags), a gene modifier screen was performed with B cells from CD22-deficient C57BL/6 (CD22(-/-[B6])) mice that undergo activation-induced cell death (AICD) and fail to up-regulate c-Myc expression after B cell Ag receptor ligation. Likewise, lysozyme auto-Ag-specific B cells in Ig(Tg) hen egg lysozyme (HEL) transgenic mice inhabit the spleen but undergo AICD after auto-Ag encounter. This gene modifier screen identified EndoU, a single-stranded RNA-binding protein of ancient origin, as a major regulator of B cell survival in both models. EndoU gene disruption prevents AICD and normalizes c-Myc expression. These findings reveal that EndoU is a critical regulator of an unexpected and novel RNA-dependent pathway controlling peripheral B cell survival and Ag responsiveness that may contribute to peripheral B cell tolerance.
Darwish, Nader Ahmed; Khan, Raham Sher; Ntui, Valentine Otang; Nakamura, Ikuo; Mii, Masahiro
2014-03-01
Marker-free transgenic eggplants exhibiting enhanced resistance to Alternaria solani can be generated on plant growth regulator (PGR)- and antibiotic-free MS medium employing the multi-auto-transformation (MAT) vector pMAT21-wasabi defensin, wherein the isopentenyl transferase (ipt) gene is used as a positive selection marker. Use of selection marker genes conferring antibiotic or herbicide resistance in transgenic plants has been considered a serious problem for the environment and the public. The multi-auto-transformation (MAT) vector system has been one of the tools used to excise the selection marker gene and produce marker-free transgenic plants. The ipt gene was used as the selection marker gene. The wasabi defensin gene, isolated from Wasabia japonica (a Japanese horseradish that is a potential source of antimicrobial proteins), was used as the gene of interest. The wasabi defensin gene was cloned from the binary vector pEKH-WD into an ipt-type MAT vector, pMAT21, by Gateway cloning technology and transferred to Agrobacterium tumefaciens strain EHA105. Infected cotyledon explants of eggplant were cultured on PGR- and antibiotic-free MS medium. Extreme shooty phenotype/ipt shoots were produced by the explants infected with pMAT21-wasabi defensin (WD). The same PGR- and antibiotic-free MS medium was used in subcultures of the ipt shoots. Subsequently, morphologically normal shoots emerged from the ipt shoots. Molecular analyses of genomic DNA from transgenic plants confirmed the integration of the WD gene and excision of the selection marker (the ipt gene). Expression of the WD gene was confirmed by RT-PCR and Northern blot analyses. In vitro whole-plant and detached-leaf assays of the marker-free transgenic plants showed enhanced resistance against Alternaria solani.
The gene transformer-2 of Anastrepha fruit flies (Diptera, Tephritidae) and its evolution in insects
2010-01-01
Background In the tephritids Ceratitis, Bactrocera and Anastrepha, the gene transformer provides the memory device for sex determination via its auto-regulation; only in females is functional Tra protein produced. To date, the isolation and characterisation of the gene transformer-2 in the tephritids has only been undertaken in Ceratitis, and it has been shown that its function is required for the female-specific splicing of doublesex and transformer pre-mRNA. It therefore participates in transformer auto-regulatory function. In this work, the characterisation of this gene in eleven tephritid species belonging to the less extensively analysed genus Anastrepha was undertaken in order to throw light on the evolution of transformer-2. Results The gene transformer-2 produces a protein of 249 amino acids in both sexes, which shows the features of the SR protein family. No significant partially spliced mRNA isoform specific to the male germ line was detected, unlike in Drosophila. It is transcribed in both sexes during development and in adult life, in both the soma and germ line. The injection of Anastrepha transformer-2 dsRNA into Anastrepha embryos caused a change in the splicing pattern of the endogenous transformer and doublesex pre-mRNA of XX females from the female to the male mode. Consequently, these XX females were transformed into pseudomales. The comparison of the eleven Anastrepha Transformer-2 proteins among themselves, and with the Transformer-2 proteins of other insects, suggests the existence of negative selection acting at the protein level to maintain Transformer-2 structural features. Conclusions These results indicate that transformer-2 is required for sex determination in Anastrepha through its participation in the female-specific splicing of transformer and doublesex pre-mRNAs. It is therefore needed for the auto-regulation of the gene transformer. 
Thus, the transformer/transformer-2 > doublesex elements at the bottom of the cascade, and their relationships, probably represent the ancestral state (which still exists in the Tephritidae, Calliphoridae and Muscidae lineages) of the extant cascade found in the Drosophilidae lineage (in which tra is just another component of the sex determination gene cascade regulated by Sex-lethal). In the phylogenetic lineage that gave rise to the drosophilids, evolution co-opted Sex-lethal, modified it, and converted it into the key gene controlling sex determination. PMID:20465812
Sarno, Francesca; Ruiz, María F; Eirín-López, José M; Perondini, André L P; Selivon, Denise; Sánchez, Lucas
2010-05-13
In the tephritids Ceratitis, Bactrocera and Anastrepha, the gene transformer provides the memory device for sex determination via its auto-regulation; only in females is functional Tra protein produced. To date, the isolation and characterisation of the gene transformer-2 in the tephritids has only been undertaken in Ceratitis, and it has been shown that its function is required for the female-specific splicing of doublesex and transformer pre-mRNA. It therefore participates in transformer auto-regulatory function. In this work, the characterisation of this gene in eleven tephritid species belonging to the less extensively analysed genus Anastrepha was undertaken in order to throw light on the evolution of transformer-2. The gene transformer-2 produces a protein of 249 amino acids in both sexes, which shows the features of the SR protein family. No significant partially spliced mRNA isoform specific to the male germ line was detected, unlike in Drosophila. It is transcribed in both sexes during development and in adult life, in both the soma and germ line. The injection of Anastrepha transformer-2 dsRNA into Anastrepha embryos caused a change in the splicing pattern of the endogenous transformer and doublesex pre-mRNA of XX females from the female to the male mode. Consequently, these XX females were transformed into pseudomales. The comparison of the eleven Anastrepha Transformer-2 proteins among themselves, and with the Transformer-2 proteins of other insects, suggests the existence of negative selection acting at the protein level to maintain Transformer-2 structural features. These results indicate that transformer-2 is required for sex determination in Anastrepha through its participation in the female-specific splicing of transformer and doublesex pre-mRNAs. It is therefore needed for the auto-regulation of the gene transformer. 
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.
Helms, Lucas; Clune, Jeff
2017-01-01
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular but contain some irregularities, as most real-world problems do, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
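The stagnation-triggered switch at the heart of Auto-Switch-HybrID can be sketched as a simple hill climber. Everything here (the fitness function, the two mutation operators, the patience threshold) is an illustrative assumption, not the paper's implementation:

```python
def auto_switch_hybrid(evaluate, indirect_step, direct_step, genome,
                       generations=200, patience=10):
    """Hill-climbing caricature of Auto-Switch-HybrID: evolve with the
    indirect encoding until the best fitness stagnates for `patience`
    generations, then switch permanently to the direct encoding."""
    best = evaluate(genome)
    stalled, use_direct = 0, False
    for _ in range(generations):
        step = direct_step if use_direct else indirect_step
        candidate = step(genome)
        fitness = evaluate(candidate)
        if fitness > best:
            genome, best, stalled = candidate, fitness, 0
        else:
            stalled += 1
        if not use_direct and stalled >= patience:
            use_direct, stalled = True, 0  # the automatic switch
    return genome, best
```

In this sketch `indirect_step` would mutate all genes coherently (exploiting regularity) while `direct_step` perturbs individual genes; once `patience` generations pass without improvement, the loop flips to the direct operator.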
Automating Traceability for Generated Software Artifacts
NASA Technical Reports Server (NTRS)
Richardson, Julian; Green, Jeffrey
2004-01-01
Program synthesis automatically derives programs from specifications of their behavior. One advantage of program synthesis, as opposed to manual coding, is that there is a direct link between the specification and the derived program. This link is, however, not very fine-grained: it can be best characterized as Program is-derived-from Specification. When the generated program needs to be understood or modified, more fine-grained linking is useful. In this paper, we present a novel technique for automatically deriving traceability relations between parts of a specification and parts of the synthesized program. The technique is very lightweight and works, with varying degrees of success, for any process in which one artifact is automatically derived from another. We illustrate the generality of the technique by applying it to two kinds of automatic generation: synthesis of Kalman filter programs from specifications using the AutoFilter program synthesis system, and generation of assembly language programs from C source code using the GCC C compiler. We evaluate the effectiveness of the technique in the latter application.
Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)
NASA Astrophysics Data System (ADS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian
2017-08-01
We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well. Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
Faunus: An object oriented framework for molecular simulation
Lund, Mikael; Trulsson, Martin; Persson, Björn
2008-01-01
Background We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that – subsequently – are collected to automatically build a web-based manual. Results We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high level coding abstraction. Conclusion C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331
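Faunus itself is C++, but the Metropolis Monte Carlo loop that such a framework modularizes can be sketched in a few lines of Python. This is a generic textbook sketch under assumed interfaces, not the Faunus API:

```python
import math
import random

def metropolis(energy, move, state, steps=2000, kT=1.0, rng=random):
    """Minimal Metropolis Monte Carlo loop: propose a trial move and
    accept it with probability min(1, exp(-dE/kT)). `energy` and `move`
    are supplied by the caller, mirroring how an object oriented
    framework lets developers plug in new Hamiltonians and trial moves
    without touching the sampling loop."""
    E = energy(state)
    for _ in range(steps):
        trial = move(state)
        dE = energy(trial) - E
        if dE <= 0 or rng.random() < math.exp(-dE / kT):
            state, E = trial, E + dE
    return state, E
```

A framework like Faunus essentially turns `energy` and `move` into class hierarchies, which is what makes it easy for multiple developers to add features independently.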
Automatic vehicle location system
NASA Technical Reports Server (NTRS)
Hansen, G. R., Jr. (Inventor)
1973-01-01
An automatic vehicle detection system is disclosed, in which each vehicle whose location is to be detected carries active means which interact with passive elements at each location to be identified. The passive elements comprise a plurality of passive loops arranged in a sequence along the travel direction. Each of the loops is tuned to a chosen frequency, so that the sequence of frequencies defines the location code. As the vehicle passes over each loop in the sequence, signals only at that loop's frequency are coupled from a vehicle transmitter to a vehicle receiver. The frequencies of the received signals produce outputs in the receiver which together represent the code of the traversed location. Alternatively, the location code may be defined by a painted pattern which reflects light to a vehicle-carried detector, whose output is used to derive the code defined by the pattern.
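The decoding step the patent describes amounts to mapping the sequence of detected loop frequencies to code digits. A minimal sketch, with a purely hypothetical frequency table:

```python
# Hypothetical mapping from loop resonant frequencies (kHz) to code digits;
# the patent does not specify actual frequency values.
FREQ_TO_DIGIT = {100: 0, 110: 1, 120: 2, 130: 3}

def decode_location(detected_freqs):
    """Translate the sequence of loop frequencies detected while the
    vehicle passes over a location's loop array into its location code."""
    return [FREQ_TO_DIGIT[f] for f in detected_freqs]
```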
antiSMASH 3.0—a comprehensive resource for the genome mining of biosynthetic gene clusters
Blin, Kai; Duddela, Srikanth; Krug, Daniel; Kim, Hyun Uk; Bruccoleri, Robert; Lee, Sang Yup; Fischbach, Michael A; Müller, Rolf; Wohlleben, Wolfgang; Breitling, Rainer; Takano, Eriko
2015-01-01
Abstract Microbial secondary metabolism constitutes a rich source of antibiotics, chemotherapeutics, insecticides and other high-value chemicals. Genome mining of gene clusters that encode the biosynthetic pathways for these metabolites has become a key methodology for novel compound discovery. In 2011, we introduced antiSMASH, a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.secondarymetabolites.org. Here, we present version 3.0 of antiSMASH, which has undergone major improvements. A full integration of the recently published ClusterFinder algorithm now allows using this probabilistic algorithm to detect putative gene clusters of unknown types. Also, a new dereplication variant of the ClusterBlast module now identifies similarities of identified clusters to any of 1172 clusters with known end products. At the enzyme level, active sites of key biosynthetic enzymes are now pinpointed through a curated pattern-matching procedure and Enzyme Commission numbers are assigned to functionally classify all enzyme-coding genes. Additionally, chemical structure prediction has been improved by incorporating polyketide reduction states. Finally, in order for users to be able to organize and analyze multiple antiSMASH outputs in a private setting, a new XML output module allows offline editing of antiSMASH annotations within the Geneious software. PMID:25948579
A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry
Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M.
2016-01-01
Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation–symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference. PMID:27433324
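The generational loop described above can be sketched as follows. The population size, mutation scales, and selection scheme are illustrative assumptions, and `dwell_time` stands in for the eye tracker's attract-and-retain fitness signal:

```python
import random

def evolve(dwell_time, pop_size=20, generations=20, rng=random):
    """Gaze-driven EA sketch: each genotype carries a DS gene (deviation
    from perfect symmetry, 0 = perfectly symmetric) and an ORI gene
    (orientation in degrees, 0-90)."""
    pop = [{"DS": rng.uniform(0.0, 1.0), "ORI": rng.uniform(0.0, 90.0)}
           for _ in range(pop_size)]
    for _ in range(generations):
        # genotypes whose phenotypes best held the gaze survive
        survivors = sorted(pop, key=dwell_time, reverse=True)[:pop_size // 2]
        pop = [{"DS": min(1.0, max(0.0, p["DS"] + rng.gauss(0, 0.05))),
                "ORI": min(90.0, max(0.0, p["ORI"] + rng.gauss(0, 5.0)))}
               for p in survivors for _ in range(2)]  # two offspring each
    return pop
```

Tracking the distribution of the DS gene across generations then reveals whether selection is pushing toward (or away from) perfect symmetry.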
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
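Two of the repetitive tasks listed above, generating unique filenames and parsing output files, can be illustrated as follows. The sketch is in Python for brevity, whereas ExcelAutomat itself is written in VBA; the numbering scheme and the simplified regular expression are assumptions, not the tool's own:

```python
import os
import re

def unique_filename(directory, base, ext=".gjf"):
    """Return a filename that does not collide with existing files in
    `directory` (the `base_1`, `base_2`, ... scheme is an assumption)."""
    name, n = base + ext, 1
    existing = set(os.listdir(directory))
    while name in existing:
        name = "{}_{}{}".format(base, n, ext)
        n += 1
    return name

def parse_scf_energies(text):
    """Extract SCF energies from Gaussian-style output lines such as
    'SCF Done:  E(RHF) =  -76.0107465     A.U.' (pattern simplified)."""
    return [float(m) for m in
            re.findall(r"SCF Done:\s+E\([^)]*\)\s*=\s*(-?\d+\.\d+)", text)]
```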
Development of an automated ultrasonic testing system
NASA Astrophysics Data System (ADS)
Shuxiang, Jiao; Wong, Brian Stephen
2005-04-01
Non-destructive testing is necessary in areas where defects in structures emerge over time due to wear and tear, and where structural integrity must be maintained. However, manual testing has several limitations: high training cost, long training procedures and, worst of all, inconsistent test results. A prime objective of this project is to develop an automatic non-destructive testing system for a shaft of the wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, are used for such artificial intelligence problems. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on this analysis, three signal processing methods were developed to enhance the ultrasonic signals: cross-correlation, zero-phase filtering and averaging. The aim of this step is to reduce noise and make the signal character more distinguishable. Four features are selected: (1) the auto-regressive model coefficients, (2) standard deviation, (3) Pearson correlation, and (4) dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A search for local maxima is carried out before the features are computed; this greatly reduces computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks and to perform automatic calibration and automatic testing of a simulated shaft.
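Two of the four features, the auto-regressive model coefficients and the standard deviation, can be computed as sketched below. This is a generic illustration; the model order and the least-squares fitting method are assumptions, not necessarily the paper's choices:

```python
import numpy as np

def ar_coefficients(signal, order=4):
    """Least-squares fit of an auto-regressive model
    x[n] ~ a1*x[n-1] + ... + ap*x[n-p]; the coefficients serve as
    waveform-shape features (order chosen here for illustration)."""
    X = np.column_stack([signal[order - k - 1:len(signal) - k - 1]
                         for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def window_features(signal):
    """Two of the four features for a candidate echo window."""
    return {"std": float(np.std(signal)), "ar": ar_coefficients(signal)}
```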
Secure web-based invocation of large-scale plasma simulation codes
NASA Astrophysics Data System (ADS)
Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.
2004-12-01
We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, MinKyu; Ju, Sang Gyu, E-mail: sg.ju@samsung.com, E-mail: doho.choi@samsung.com; Chung, Kwangzoo
2015-02-15
Purpose: A new automatic quality assurance (AutoRCQA) system using a three-dimensional scanner (3DS) with system automation was developed to improve the accuracy and efficiency of the quality assurance (QA) procedure for proton range compensators (RCs). The system performance was evaluated for clinical implementation. Methods: The AutoRCQA system consists of a three-dimensional measurement system (3DMS) based on 3DS and in-house developed verification software (3DVS). To verify the geometrical accuracy, the planned RC data (PRC), calculated with the treatment planning system (TPS), were reconstructed and coregistered with the measured RC data (MRC) based on the beam isocenter. The PRC and MRC inner surfaces were compared with composite analysis (CA) using 3DVS, with the CA pass rate used for quantitative analysis. To evaluate the detection accuracy of the system, the authors designed a fake PRC by artificially adding small cubic islands with side lengths of 1.5, 2.5, and 3.5 mm on the inner surface of the PRC and performed CA with the depth difference and distance-to-agreement tolerances of [1 mm, 1 mm], [2 mm, 2 mm], and [3 mm, 3 mm]. In addition, the authors performed clinical tests using seven RCs [computerized milling machine (CMM)-RCs] manufactured by CMM, which were designed for treating various disease sites. The systematic offsets of the seven CMM-RCs were evaluated through the automatic registration function of AutoRCQA. For comparison with the conventional technique, the authors measured the thickness at three points in each of the seven CMM-RCs using a manual depth measurement device and calculated the thickness difference based on the TPS data (TPS-manual measurement). These results were compared with data obtained from 3DVS. The geometrical accuracy of each CMM-RC inner surface was investigated using the TPS data by performing CA with the same criteria. The authors also measured the net processing time, including the scan and analysis time.
Results: The AutoRCQA system accurately detected all fake objects in accordance with the given criteria. The median systematic offset of the seven CMM-RCs was 0.08 mm (interquartile range: −0.25 to 0.37 mm) and −0.08 mm (−0.58 to 0.01 mm) in the X- and Y-directions, respectively, while the median distance difference was 0.37 mm (0.23–0.94 mm). The median thickness difference of the TPS-manual measurement at points 1, 2, and 3 was −0.4 mm (−0.4 to −0.2 mm), −0.2 mm (−0.3 to 0.0 mm), and −0.3 mm (−0.6 to −0.1 mm), respectively, while the median difference of 3DMS was 0.0 mm (−0.1 to 0.2 mm), 0.0 mm (−0.1 to 0.3 mm), and 0.1 mm (−0.1 to 0.2 mm), respectively. Thus, 3DMS showed slightly better values compared to the manual measurements for points 1 and 3 in statistical analysis (p < 0.05). The average pass rate of the seven CMM-RCs was 97.97% ± 1.68% for 1-mm CA conditions, increasing to 99.98% ± 0.03% and 100% ± 0.00% for 2- and 3-mm CA conditions, respectively. The average net analysis time was 18.01 ± 1.65 min. Conclusions: The authors have developed an automated 3DS-based proton RC QA system and verified its performance. The AutoRCQA system may improve the accuracy and efficiency of QA for RCs.
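The composite analysis criterion combines a depth-difference test with a distance-to-agreement (DTA) test. A one-dimensional toy version of the pass-rate computation might look like the sketch below; the actual 3DVS comparison is over three-dimensional surfaces, so this only illustrates the pass/fail logic:

```python
import numpy as np

def composite_pass_rate(planned, measured, xs, depth_tol, dta_tol):
    """Composite analysis sketch (1-D): a measured point passes if its
    depth differs from the plan by less than depth_tol, OR if a planned
    point with a matching depth lies within dta_tol laterally."""
    passes = 0
    for i, x in enumerate(xs):
        if abs(measured[i] - planned[i]) < depth_tol:
            passes += 1
            continue
        near = np.abs(xs - x) <= dta_tol            # DTA search window
        if np.any(np.abs(planned[near] - measured[i]) < depth_tol):
            passes += 1
    return 100.0 * passes / len(xs)
```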
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis
Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.
2014-01-01
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859
ERIC Educational Resources Information Center
Goomas, David T.
2008-01-01
In this report from the field, computerized auditory feedback was used to inform order selectors and order selector auditors in a distribution center to add an electronic article surveillance (EAS) adhesive tag. This was done by programming handheld computers to emit a loud beep for high-priced items upon scanning the item's bar-coded Universal…
Research on Automatic Programming
1975-12-31
Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing... verified. Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner's CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.
A DS-UWB Cognitive Radio System Based on Bridge Function Smart Codes
NASA Astrophysics Data System (ADS)
Xu, Yafei; Hong, Sheng; Zhao, Guodong; Zhang, Fengyuan; di, Jinshan; Zhang, Qishan
This paper proposes a direct-sequence UWB cognitive radio system based on a bridge-function smart sequence matrix and Gaussian pulses. Because the system uses bridge-function smart codes as its spreading sequences, the zero correlation zones (ZCZs) in the sequences' auto-correlation functions reduce multipath interference with the pulses. The modulated signal was sent over the IEEE 802.15.3a UWB channel. We analyze how the ZCZs suppress multipath interference (MPI), one of the main sources of interference in the system. The simulation in SIMULINK/MATLAB is described in detail. The results show that the system performs better than one employing a Walsh sequence square matrix, and this was verified analytically.
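The claimed multipath suppression rests on the zero correlation zone of the spreading sequences' periodic auto-correlation, a property that is easy to check numerically. The bridge-function sequences themselves are not reproduced here, so the example below uses a short sequence with ideal periodic auto-correlation:

```python
import numpy as np

def periodic_autocorrelation(seq):
    """Periodic auto-correlation R[tau] = sum_n s[n] * s[(n+tau) mod N]
    of a +/-1 spreading sequence."""
    s = np.asarray(seq, dtype=float)
    return np.array([np.dot(s, np.roll(s, -tau)) for tau in range(len(s))])

def zcz_width(seq, tol=1e-9):
    """Width of the zero-correlation zone: the number of consecutive lags
    tau = 1, 2, ... with R[tau] ~ 0. Echoes delayed by lags inside this
    zone contribute no auto-correlation interference."""
    R = periodic_autocorrelation(seq)
    width = 0
    for tau in range(1, len(R)):
        if abs(R[tau]) > tol:
            break
        width += 1
    return width
```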
Multiblock grid generation with automatic zoning
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.
1995-01-01
An overview will be given for multiblock grid generation with automatic zoning. We shall explore the many advantages and benefits of this exciting technology and will also see how to apply it to a number of interesting cases. The technology is available in the form of a commercial code, GridPro(registered trademark)/az3000. This code takes surface geometry definitions and patterns of points as its primary input and produces high quality grids as its output. Before we embark upon our exploration, we shall first give a brief background of the environment in which this technology fits.
Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, G. M.
2002-01-01
We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where the truncation error is being created due to an insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.
An Expert System for the Development of Efficient Parallel Code
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
Probiotic properties of lactic acid bacteria isolated from water-buffalo mozzarella cheese.
Jeronymo-Ceneviva, Ana Beatriz; de Paula, Aline Teodoro; Silva, Luana Faria; Todorov, Svetoslav Dimitrov; Franco, Bernadette Dora G Mello; Penna, Ana Lúcia B
2014-12-01
This study evaluated the probiotic properties (stability at different pH values and bile salt concentration, auto-aggregation and co-aggregation, survival in the presence of antibiotics and commercial drugs, study of β-galactosidase production, evaluation of the presence of genes encoding MapA and Mub adhesion proteins and EF-Tu elongation factor, and the presence of genes encoding virulence factor) of four LAB strains (Lactobacillus casei SJRP35, Leuconostoc citreum SJRP44, Lactobacillus delbrueckii subsp. bulgaricus SJRP57 and Leuconostoc mesenteroides subsp. mesenteroides SJRP58) which produced antimicrobial substances (antimicrobial peptides). The strains survived the simulated GIT modeled in MRS broth, whole and skim milk. In addition, auto-aggregation and the cell surface hydrophobicity of all strains were high, and various degrees of co-aggregation were observed with indicator strains. All strains presented low resistance to several antibiotics and survived in the presence of commercial drugs. Only the strain SJRP44 did not produce the β-galactosidase enzyme. Moreover, the strain SJRP57 did not show the presence of any genes encoding virulence factors; however, the strain SJRP35 presented vancomycin resistance and adhesion of collagen genes, the strain SJRP44 harbored the ornithine decarboxylase gene and the strain SJRP58 generated positive results for aggregation substance and histidine decarboxylase genes. In conclusion, the strain SJRP57 was considered the best candidate as probiotic cultures for further in vivo studies and functional food products development.
Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nataf, J.M.; Winkelmann, F.
We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
Introduction of the ASGARD Code
NASA Technical Reports Server (NTRS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian
2017-01-01
ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).
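ASGARD is written in IDL; as a language-neutral illustration, the start/peak/end bookkeeping for a single pixel's light curve might be sketched as below. The detection criterion here (a simple intensity threshold) is a simplification of the real code, which exposes many control parameters:

```python
import numpy as np

def detect_events(lightcurve, threshold):
    """Find brightenings as contiguous runs above `threshold`, returning
    (start, peak, end) frame indices for each event, echoing the
    start/peak/end times ASGARD records (criteria simplified)."""
    above = np.asarray(lightcurve) > threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            seg = lightcurve[start:i]
            events.append((start, start + int(np.argmax(seg)), i - 1))
            start = None
    if start is not None:                       # event still open at series end
        seg = lightcurve[start:]
        events.append((start, start + int(np.argmax(seg)),
                       len(lightcurve) - 1))
    return events
```

Grouping related events across channels could then be done by comparing these (start, peak, end) tuples between the per-channel event lists.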
Vaccine Hesitancy in Discussion Forums: Computer-Assisted Argument Mining with Topic Models.
Skeppstedt, Maria; Kerren, Andreas; Stede, Manfred
2018-01-01
Arguments used when vaccination is debated on Internet discussion forums might give us valuable insights into reasons behind vaccine hesitancy. In this study, we applied automatic topic modelling to a collection of 943 discussion posts in which vaccines were debated, and six distinct discussion topics were detected by the algorithm. Manual coding of the posts ranked as most typical of these six topics identified a set of semantically coherent arguments for each extracted topic. This indicates that topic modelling is a useful method for automatically identifying vaccine-related discussion topics and for identifying debate posts where these topics are discussed. This functionality could facilitate manual coding of salient arguments, and could thereby form an important component in a system for computer-assisted coding of vaccine-related discussions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Hong, Tianzhen
Urban-scale building energy modeling (UBEM)—using building modeling to understand how a group of buildings will perform together—is attracting increasing attention in the energy modeling field. Unlike modeling a single building, which will use detailed information, UBEM generally uses existing building stock data consisting of high-level building information. This study evaluated the impacts of three zoning methods and the use of floor multipliers on the simulated energy use of 940 office and retail buildings in three climate zones using City Building Energy Saver. The first zoning method, OneZone, creates one thermal zone per floor using the target building's footprint. The second zoning method, AutoZone, splits the building's footprint into perimeter and core zones. A novel, pixel-based automatic zoning algorithm is developed for the AutoZone method. The third zoning method, Prototype, uses the U.S. Department of Energy's reference building prototype shapes. Results show that simulated source energy use of buildings with the floor multiplier are marginally higher by up to 2.6% than those modeling each floor explicitly, which take two to three times longer to run. Compared with the AutoZone method, the OneZone method results in decreased thermal loads and less equipment capacities: 15.2% smaller fan capacity, 11.1% smaller cooling capacity, 11.0% smaller heating capacity, 16.9% less heating loads, and 7.5% less cooling loads. Source energy use differences range from -7.6% to 5.1%. When comparing the Prototype method with the AutoZone method, source energy use differences range from -12.1% to 19.0%, and larger ranges of differences are found for the thermal loads and equipment capacities. This study demonstrated that zoning methods have a significant impact on the simulated energy use of UBEM.
Finally, one recommendation resulting from this study is to use the AutoZone method with floor multiplier to obtain accurate results while balancing the simulation run time for UBEM.« less
2018-02-20
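The perimeter/core split at the heart of an AutoZone-style method can be illustrated with a toy pixel-based zoning pass. This is only a sketch under stated assumptions: the paper's actual algorithm and perimeter depth are not given here, so the grid representation and the `perimeter_depth` parameter are illustrative choices, not the CityBES implementation.

```python
# Toy pixel-based zoning: classify each footprint cell as perimeter ("P")
# or core ("C") by its breadth-first-search distance to the exterior.
# Illustrative only; not the authors' actual AutoZone algorithm.
from collections import deque

def zone_footprint(footprint, perimeter_depth=1):
    """footprint: 2D list of 0/1 cells; returns a 2D list of 'P', 'C', or None."""
    rows, cols = len(footprint), len(footprint[0])

    def inside(r, c):
        return 0 <= r < rows and 0 <= c < cols and footprint[r][c] == 1

    # Seed the BFS with footprint cells touching the exterior (depth 1).
    depth = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if footprint[r][c] != 1:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not inside(nr, nc) for nr, nc in neighbors):
                depth[r][c] = 1
                queue.append((r, c))
    # Propagate depth inward.
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if inside(nr, nc) and depth[nr][nc] is None:
                depth[nr][nc] = depth[r][c] + 1
                queue.append((nr, nc))

    return [["P" if depth[r][c] is not None and depth[r][c] <= perimeter_depth
             else ("C" if footprint[r][c] == 1 else None)
             for c in range(cols)] for r in range(rows)]
```

On a 4x4 square footprint with a one-cell perimeter depth, the outer ring is classified "P" and the inner 2x2 block "C"; a real implementation would work on a rasterized polygon and a physical perimeter depth (e.g., in meters).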
Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr
2010-10-28
Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. A number of software tools perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters, and no software is available for performing virtual screening with Vina on computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable, non-dedicated computer clusters. MOLA automates several tasks, including ligand preparation, distribution of parallel AutoDock4/Vina jobs, and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can be recorded automatically on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized live-CD GNU/Linux operating system, developed by us, that bypasses the operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of heterogeneous, multi-platform computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any available computer, regardless of platform, can be added to the cluster without ever using its hard-disk drive and without interfering with the installed operating system.
With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
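The job distribution and the quoted speed-up figures can be sketched in a few lines. This is a generic illustration, not MOLA's actual scheduler: the function names and the round-robin policy are assumptions.

```python
# Round-robin distribution of docking jobs across cluster nodes, plus the
# standard parallel speed-up metric (serial time / parallel time).
# Illustrative sketch only; not MOLA's actual job scheduler.
def distribute(ligands, n_nodes):
    """Assign each ligand job to a node in round-robin order."""
    buckets = [[] for _ in range(n_nodes)]
    for i, ligand in enumerate(ligands):
        buckets[i % n_nodes].append(ligand)
    return buckets

def speedup(serial_time, parallel_time):
    """Parallel speed-up: how many times faster than the serial run."""
    return serial_time / parallel_time
```

With 10 processors, a measured speed-up of 8.64× corresponds to a parallel efficiency of 86.4%, consistent with the overhead of job distribution and result collection.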
Jin, Yinji; Jin, Taixian; Lee, Sun-Mi
Pressure injury risk assessment is the first step toward preventing pressure injuries, but traditional assessment tools are time-consuming, resulting in work overload and fatigue for nurses. The objectives of the study were to build an automated pressure injury risk assessment system (Auto-PIRAS) that can assess pressure injury risk from existing data, without requiring nurses to collect or input additional data, and to evaluate the validity of this assessment tool. A retrospective case-control study and a system development study were conducted in a 1,355-bed university hospital in Seoul, South Korea. A total of 1,305 pressure injury patients and 5,220 non-pressure-injury patients were included for the development of a risk scoring algorithm: 687 and 2,748 for the validation of the algorithm, and 237 and 994 for validation after clinical implementation, respectively. A total of 4,211 pressure injury-related clinical variables were extracted from the electronic health record (EHR) systems to develop a risk scoring algorithm, which was validated and incorporated into the EHR. The program was further evaluated for predictive and concurrent validity. Auto-PIRAS, incorporated into the EHR system, assigned a risk assessment score of high, moderate, or low and displayed this on the Kardex nursing record screen. Risk scores were updated nightly according to 10 predetermined risk factors. The predictive validity measures at the algorithm validation stage were as follows: sensitivity = .87, specificity = .90, positive predictive value = .68, negative predictive value = .97, Youden index = .77, and area under the receiver operating characteristic curve = .95. The predictive validity measures of the Braden Scale were as follows: sensitivity = .77, specificity = .93, positive predictive value = .72, negative predictive value = .95, Youden index = .70, and area under the receiver operating characteristic curve = .85. 
The kappa of the Auto-PIRAS and Braden Scale risk classification result was .73. The predictive performance of the Auto-PIRAS was similar to Braden Scale assessments conducted by nurses. Auto-PIRAS is expected to be used as a system that assesses pressure injury risk automatically without additional data collection by nurses.
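The validity measures reported above all derive from a 2x2 confusion matrix. The sketch below uses the standard definitions with made-up counts; note that the predictive values (PPV/NPV) depend on the case/control ratio, so these illustrative counts will not reproduce the study's PPV of .68.

```python
# Standard predictive-validity measures from a confusion matrix:
# tp = true positives, fp = false positives, tn = true negatives, fn = false negatives.
def validity_measures(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # recall among actual positives
    specificity = tn / (tn + fp)            # recall among actual negatives
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    youden = sensitivity + specificity - 1  # Youden's J index
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "youden": youden}
```

For example, 87/100 positives detected and 90/100 negatives correctly ruled out give sensitivity .87, specificity .90, and Youden index .77, matching the pattern of the figures quoted for Auto-PIRAS.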
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple-frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were tracked both manually and automatically with varying frame averaging; here δ was defined as the position difference between the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both the phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the phantom and patient studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. 
Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
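The correlation statistic used throughout the results above is the standard Pearson coefficient, which can be computed without external libraries:

```python
# Pearson correlation coefficient R between two equal-length samples,
# as used above to relate motion blurring / image noise to tracking error.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

R ranges from -1 (perfect negative linear relation) through 0 (no linear relation) to +1 (perfect positive linear relation), which is why R = 0.94 indicates a strong blur-error correlation while R = -0.19 indicates a weak one.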
Development of an Automatic Echo-counting Program for HROFFT Spectrograms
NASA Astrophysics Data System (ADS)
Noguchi, Kazuya; Yamamoto, Masa-Yuki
2008-06-01
Radio meteor observations of Ham-band beacons or FM radio broadcasts using "Ham-band Radio meteor Observation Fast Fourier Transform" (HROFFT), an automatically operating software package, have been performed widely in recent years. Previously, counting of meteor echoes on the spectrograms of radio meteor observations was performed manually by observers. In the present paper, we introduce an automatic meteor-echo counting software application. Although the output images of HROFFT contain both the features of meteor echoes and those of various types of noise, a newly developed image processing technique has been applied, resulting in software that provides a useful auto-counting tool. A slight processing error remains on spectrograms when the observation site is affected by many disturbing noise sources. Nevertheless, comparison between software and manual counting revealed an agreement of almost 90%. Therefore, we can easily obtain a dataset of detection time, duration, signal strength, and Doppler shift of each meteor echo from the HROFFT spectrograms. Using this software, statistical analyses of meteor activities can be based on the results obtained at many Ham-band Radio meteor Observation (HRO) sites throughout the world, providing a very useful "standard" for monitoring meteor stream activities in real time.
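One generic way to count echo candidates on a spectrogram is to threshold the time-frequency image and count connected components. This is an illustrative sketch of that general idea, not the HROFFT tool's actual algorithm, and the threshold value is an assumption.

```python
# Count candidate echoes on a binarized spectrogram: cells at or above the
# intensity threshold that touch (4-connectivity) belong to one echo.
# Generic image-processing sketch; not the actual HROFFT counting code.
def count_echoes(grid, threshold):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill one connected component
                seen[r][c] = True
                while stack:
                    cr, cc = stack.pop()
                    for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] >= threshold and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
    return count
```

A real echo counter would additionally record each component's extent to derive detection time, duration, signal strength, and Doppler shift, and would filter out noise-like components.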
Masso, Majid; Vaisman, Iosif I
2014-01-01
The AUTO-MUTE 2.0 stand-alone software package includes a collection of programs for predicting functional changes to proteins upon single residue substitutions, developed by combining structure-based features with trained statistical learning models. Three of the predictors evaluate changes to protein stability upon mutation, each complementing a distinct experimental approach. Two additional classifiers are available, one for predicting activity changes due to residue replacements and the other for determining the disease potential of mutations associated with nonsynonymous single nucleotide polymorphisms (nsSNPs) in human proteins. These five command-line-driven tools, as well as all the supporting programs, complement those that run on our AUTO-MUTE web-based server. However, all the code has been rewritten and substantially altered for the new portable software, and it incorporates several new features based on user feedback. Included among these upgrades is the ability to perform three highly requested tasks: to run "big data" batch jobs; to generate predictions using modified protein data bank (PDB) structures, as well as unpublished personal models prepared using standard PDB file formatting; and to utilize NMR structure files that contain multiple models.
Lee, Noah; Laine, Andrew F; Smith, R Theodore
2007-01-01
Fundus auto-fluorescence (FAF) images with hypo-fluorescence indicate geographic atrophy (GA) of the retinal pigment epithelium (RPE) in age-related macular degeneration (AMD). Manual quantification of GA is time consuming and prone to inter- and intra-observer variability. Automatic quantification is important for determining disease progression and facilitating clinical diagnosis of AMD. In this paper we describe a hybrid segmentation method for GA quantification that distinguishes hypo-fluorescent GA regions from other interfering retinal vessel structures. First, we employ background illumination correction exploiting a non-linear adaptive smoothing operator. Then, we use the level set framework to perform segmentation of hypo-fluorescent areas. Finally, we present an energy function combining morphological scale-space analysis with a geometric model-based approach to refine the segmentation, removing false-positive hypo-fluorescent areas due to interfering retinal structures. The clinically apparent areas of hypo-fluorescence were drawn by an expert grader and compared on a pixel-by-pixel basis to our segmentation results. The mean sensitivity and specificity of the ROC analysis were 0.89 and 0.98, respectively.
NASA Astrophysics Data System (ADS)
Marsal, Santiago; José Curto, Juan; Torta, Joan Miquel; Gonsette, Alexandre; Favà, Vicent; Rasson, Jean; Ibañez, Miquel; Cid, Òscar
2017-07-01
The DI-flux, consisting of a fluxgate magnetometer coupled with a theodolite, is used for absolute manual measurement of the magnetic field angles in most ground-based observatories worldwide. Commercial solutions for an automated DI-flux have recently been developed by the Royal Meteorological Institute of Belgium (RMI) and are practically restricted to the AutoDIF and its variant, the GyroDIF. In this article, we analyze the pros and cons of both instruments in terms of their suitability for installation at the partially manned geomagnetic observatory of Livingston Island (LIV), Antarctica. We conclude that the GyroDIF, even though it is less accurate and more power-demanding, is more suitable than the AutoDIF for harsh conditions because of the simpler infrastructure it requires. Power constraints in the Spanish Antarctic Station Juan Carlos I (ASJI) during the unmanned season require an energy-efficient design of the thermally regulated box housing the instrument, as well as thorough power management. Our experiences can benefit the geomagnetic community, which often faces similar challenges.
A walk through the planned CS building. M.S. Thesis
NASA Technical Reports Server (NTRS)
Khorramabadi, Delnaz
1991-01-01
Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model, and to change surface attributes. Our display system provides a simple-to-use interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and viewing direction marked on it, is displayed at all times. The scene illumination can be manipulated by interactively controlling the intensity values of five light sources.
NASA Astrophysics Data System (ADS)
Porter, Sophia; Strolger, Louis-Gregory; Lagerstrom, Jill; Weissman, Sarah
2016-01-01
The Space Telescope Science Institute annually receives more than one thousand formal proposals for Hubble Space Telescope time, exceeding the available time with the observatory by a factor of over four. With JWST, the proposal pressure will only increase, straining our ability to provide rigorous peer review of each proposal's scientific merit. Significant hurdles in this process include the proper categorization of proposals, to ensure Time Allocation Committees (TACs) have the required and desired expertise to fairly and appropriately judge each proposal, and the selection of reviewers themselves, to establish diverse and well-qualified TACs. The Panel Auto-Categorizer and Manager (PACMan; a naive Bayesian classifier) was developed to automatically sort new proposals into their appropriate science categories and, similarly, to appoint panel reviewers with the best qualifications to serve on the corresponding TACs. We will provide an overview of PACMan and present the results of its testing on five previous cycles of proposals. PACMan will be implemented in upcoming cycles to support and eventually replace the process for constructing the time allocation reviews.
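PACMan is described above as a naive Bayesian classifier for sorting proposals into science categories. The toy classifier below shows the core of that technique; PACMan's actual features, training corpus, and smoothing choices are not specified here, so everything beyond "multinomial naive Bayes with Laplace smoothing" is an assumption.

```python
# Toy multinomial naive Bayes text classifier in the spirit of PACMan.
# Illustrative only: trains word counts per category, then scores a new
# text by log prior + log likelihood with Laplace (add-one) smoothing.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (category, text) pairs."""
    word_counts = defaultdict(Counter)
    cat_counts = Counter()
    vocab = set()
    for cat, text in docs:
        cat_counts[cat] += 1
        for w in text.lower().split():
            word_counts[cat][w] += 1
            vocab.add(w)
    return word_counts, cat_counts, vocab

def classify(model, text):
    word_counts, cat_counts, vocab = model
    total_docs = sum(cat_counts.values())
    best, best_lp = None, float("-inf")
    for cat in cat_counts:
        lp = math.log(cat_counts[cat] / total_docs)  # log prior
        n_cat = sum(word_counts[cat].values())
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[cat][w] + 1) / (n_cat + len(vocab)))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best
```

Trained on labeled proposal abstracts, the same scheme can also rank reviewers by how well their publication record matches a category.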
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobyshev, A.; Lamore, D.; Demar, P.
2004-12-01
In a large campus network such as Fermilab's, with tens of thousands of nodes, scanning initiated either from outside or from within the campus network raises security concerns. Such scanning may seriously degrade network performance and even disrupt the normal operation of many services. In this paper we introduce a system for detecting and automatically blocking the excessive traffic produced by various kinds of scanning, DoS attacks, and virus-infected computers. The system, called AutoBlocker, is a distributed computing system based on quasi-real-time analysis of network flow data collected from the border router and core switches. AutoBlocker also has an interface to accept alerts from IDS systems (e.g., BRO, SNORT) that are based on other technologies. The system has multiple configurable alert levels for the detection of anomalous behavior and configurable trigger criteria for automated blocking of scans at the core or border routers. It has been in use at Fermilab for about two years and has become a very valuable tool for curtailing scan activity within the Fermilab campus network.
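The multi-level alert logic described above can be sketched from flow records: count the distinct destinations each source contacts in an analysis window and map that fan-out to an action. This is a generic sketch; the thresholds, level names, and the fan-out heuristic are assumptions, not AutoBlocker's actual trigger criteria.

```python
# Sketch of configurable multi-level scan detection from flow data.
# Fan-out = number of distinct destinations contacted per source in one
# analysis window; thresholds and actions below are made-up examples.
from collections import defaultdict

ALERT_LEVELS = [(500, "block"), (100, "alert"), (20, "watch")]  # highest first

def scan_alerts(flows):
    """flows: iterable of (src_ip, dst_ip) pairs from one analysis window."""
    fanout = defaultdict(set)
    for src, dst in flows:
        fanout[src].add(dst)
    actions = {}
    for src, dsts in fanout.items():
        for threshold, action in ALERT_LEVELS:
            if len(dsts) >= threshold:
                actions[src] = action  # first matching (highest) level wins
                break
    return actions
```

In a production system the "block" action would translate into an ACL update on the core or border routers, and IDS alerts would feed the same action table.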
Screening for Protein-DNA Interactions by Automatable DNA-Protein Interaction ELISA
Schüssler, Axel; Kolukisaoglu, H. Üner; Koch, Grit; Wallmeroth, Niklas; Hecker, Andreas; Thurow, Kerstin; Zell, Andreas; Harter, Klaus; Wanke, Dierk
2013-01-01
DNA-binding proteins (DBPs), such as transcription factors, constitute about 10% of the protein-coding genes in eukaryotic genomes and play pivotal roles in the regulation of chromatin structure and gene expression by binding to short stretches of DNA. Despite their number and importance, the binding sequence has been disclosed for only a minor portion of DBPs. Methods that allow the de novo identification of DNA-binding motifs of known DBPs, such as protein-binding microarray technology or SELEX, are not yet suited for high throughput and automation. To close this gap, we report an automatable DNA-protein-interaction (DPI)-ELISA screen of an optimized double-stranded DNA (dsDNA) probe library that allows the high-throughput identification of hexanucleotide DNA-binding motifs. In contrast to other methods, this DPI-ELISA screen can be performed manually or with standard laboratory automation. Furthermore, output evaluation does not require extensive computational analysis to derive a binding consensus. We could show that the DPI-ELISA screen disclosed the full spectrum of binding preferences for a given DBP. As an example, AtWRKY11 was used to demonstrate that the automated DPI-ELISA screen revealed the entire range of in vitro binding preferences. In addition, protein extracts of AtbZIP63 and the DNA-binding domain of AtWRKY33 were analyzed, which led to a refinement of their known DNA-binding consensi. Finally, we performed a DPI-ELISA screen to disclose the DNA-binding consensus of a yet uncharacterized putative DBP, AtTIFY1. A palindromic TGATCA consensus was uncovered, and we could show that the GATC core is compulsory for AtTIFY1 binding. This specific interaction between AtTIFY1 and its DNA-binding motif was confirmed by in vivo plant one-hybrid assays in protoplasts. 
Thus, the DPI-ELISA screen for de novo binding site identification of DBPs, also under automated conditions, is a promising approach for a deeper understanding of gene regulation in any organism of choice. PMID:24146751
Method for stitching microbial images using a neural network
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.
2017-05-01
Analog microscopes are currently widely used in the following fields: medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture, and others. An automatic method is preferred because it greatly reduces the manual work involved. Stepper motors move the microscope slide and allow the focus to be adjusted in semi-automatic or automatic mode, while images of microbiological objects are transferred from the eyepiece of the microscope to the computer screen. Scene analysis makes it possible to locate regions with pronounced abnormalities in order to focus the specialist's attention. This paper considers a method for stitching microbial images obtained with a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. The object search is based on analysis of the data in the camera's field of view, and we propose to use a neural network for boundary detection. The stitching boundary is obtained from an analysis of the object borders. For auto-focusing, we use the criterion of minimum thickness of an object's boundary line, analyzing the object located on the focal axis of the camera. For objects shifted relative to the focal axis, we apply a border-recovery method and a projective transform to their boundaries. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
Wide-Field Imaging Telescope-0 (WIT0) with automatic observing system
NASA Astrophysics Data System (ADS)
Ji, Tae-Geun; Byeon, Seoyeon; Lee, Hye-In; Park, Woojin; Lee, Sang-Yun; Hwang, Sungyong; Choi, Changsu; Gibson, Coyne Andrew; Kuehne, John W.; Prochaska, Travis; Marshall, Jennifer L.; Im, Myungshin; Pak, Soojong
2018-01-01
We introduce Wide-Field Imaging Telescope-0 (WIT0) with an automatic observing system. It was developed for monitoring the variability of many sources at a time, e.g., young stellar objects and active galactic nuclei. It can also find the locations of transient sources such as supernovae or gamma-ray bursts. In February 2017, we installed the wide-field 10-inch telescope (Takahashi CCA-250) as a piggyback system on the 30-inch telescope at the McDonald Observatory in Texas, US. The 10-inch telescope has a 2.35 × 2.35 deg field of view with a 4k × 4k CCD camera (FLI ML16803). To improve the observational efficiency of the system, we developed new automatic observing software, KAOS30 (KHU Automatic Observing Software for McDonald 30-inch telescope), written in Visual C++ on the Windows operating system. The software consists of four control packages: the Telescope Control Package (TCP), the Data Acquisition Package (DAP), the Auto Focus Package (AFP), and the Script Mode Package (SMP). Since it also supports instruments that use the ASCOM driver, additional hardware installation is greatly simplified. We commissioned KAOS30 in August 2017 and are in the process of testing it. Based on the WIT0 experience, we will extend KAOS30 to control multiple telescopes in future projects.
Rotation invariant eigenvessels and auto-context for retinal vessel detection
NASA Astrophysics Data System (ADS)
Montuoro, Alessio; Simader, Christian; Langs, Georg; Schmidt-Erfurth, Ursula
2015-03-01
Retinal vessels are one of the few anatomical landmarks that are clearly visible in various imaging modalities of the eye. As they are also relatively invariant to disease progression, retinal vessel segmentation allows cross-modal and temporal registration, enabling exact diagnosis of various eye diseases like diabetic retinopathy, hypertensive retinopathy, or age-related macular degeneration (AMD). Due to the clinical significance of retinal vessels, many different approaches for segmentation have been published in the literature. In contrast to other segmentation approaches, our method is not specifically tailored to the task of retinal vessel segmentation. Instead, we utilize a more general image classification approach and show that this can achieve comparable results. In the proposed method we utilize the concepts of eigenfaces and auto-context. Eigenfaces have been described quite extensively in the literature and their performance is well known. They are, however, quite sensitive to translation and rotation. The former was addressed by computing the eigenvessels in local image windows of different scales, the latter by estimating and correcting the local orientation. Auto-context aims to incorporate automatically generated context information into the training phase of classification approaches. It has been shown to improve the performance of spinal cord segmentation and 3D brain image segmentation. The proposed method achieves an area under the receiver operating characteristic (ROC) curve of Az = 0.941 on the DRIVE data set, comparable to current state-of-the-art approaches.
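The figure of merit quoted above, the area under the ROC curve (Az), can be computed directly from classifier scores and ground-truth labels via the rank-sum (Mann-Whitney) identity, without building the curve explicitly:

```python
# Area under the ROC curve from scores and binary labels, via the
# probability that a random positive outscores a random negative
# (ties count 1/2). O(n^2) pairwise form, fine for small examples.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An Az of 0.941 therefore means that a randomly chosen vessel pixel receives a higher score than a randomly chosen background pixel about 94% of the time.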
Integrated G and C Implementation within IDOS: A Simulink Based Reusable Launch Vehicle Simulation
NASA Technical Reports Server (NTRS)
Fisher, Joseph E.; Bevacqua, Tim; Lawrence, Douglas A.; Zhu, J. Jim; Mahoney, Michael
2003-01-01
The implementation of multiple Integrated Guidance and Control (IG&C) algorithms per flight phase within a vehicle simulation poses a daunting task: coordinating algorithm interactions with the other G&C components and with vehicle subsystems. Currently being developed by Universal Space Lines LLC (USL) under contract from NASA, the Integrated Development and Operations System (IDOS) contains a high-fidelity Simulink vehicle simulation, which provides a means to test cutting-edge G&C technologies. Combining the modularity of this vehicle simulation with Simulink's built-in primitive blocks provides a quick way to implement algorithms. To add discrete-event functionality to the unfinished IDOS simulation, Vehicle Event Manager (VEM) and Integrated Vehicle Health Monitoring (IVHM) subsystems were created to provide discrete-event and pseudo-health-monitoring processing capabilities. Matlab's Stateflow is used to create the IVHM and Event Manager subsystems and to implement a supervisory logic controller, referred to as the Auto-commander, as part of the IG&C to coordinate control system adaptation and reconfiguration and to select the control and guidance algorithms for a given flight phase. Manual creation of the Stateflow charts for all of these subsystems is a tedious and time-consuming process. The Stateflow Auto-builder was therefore developed as a Matlab-based software tool for the automatic generation of a Stateflow chart from information contained in a database. This paper describes the IG&C, VEM and IVHM implementations in IDOS, as well as the Stateflow Auto-builder.
A knowledge-based approach to automated planning for hepatocellular carcinoma.
Zhang, Yujie; Li, Tingting; Xiao, Han; Ji, Weixing; Guo, Ming; Zeng, Zhaochong; Zhang, Jianying
2018-01-01
The aim was to build a knowledge-based model of liver cancer for Auto-Planning, a function in Pinnacle that serves as an automated inverse intensity-modulated radiation therapy (IMRT) planning system. Fifty Tomotherapy patients were enrolled to extract dose-volume histogram (DVH) information and construct the protocol for the Auto-Planning model. Twenty more patients were chosen to test the model. Manual planning and automatic planning were performed blindly for all twenty test patients with the same machine and treatment planning system. The dose distributions of the target and organs at risk (OARs), along with the working time for planning, were evaluated. Statistically significant results showed that automated plans performed better on target conformity index (CI), while the mean target dose was 0.5 Gy higher than in manual plans. The differences between the target homogeneity indexes (HI) of the two methods were not statistically significant. Additionally, the doses to the normal liver, left kidney, and small bowel were significantly reduced with automated planning. In particular, the mean dose and V15 of the normal liver were lower by 1.4 Gy and 40.5 cc, respectively, with automated plans. Mean doses to the left kidney and small bowel were reduced by 1.2 Gy and 2.1 Gy, respectively. Working time was also significantly reduced with automated planning. Auto-Planning proves practical and effective in our knowledge-based model for liver cancer.
Michel, Christian J
2017-04-18
In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has, on average, the highest occurrence in the reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property, as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend this definition here to the gene level. This new statistical approach considers all the genes, i.e., of both large and small lengths, with the same weight when searching for the circular code X. As a consequence, the concept of a circular code, in particular the reading frame retrieval, is directly associated with each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes.
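One of the properties named above, self-complementarity, is easy to check mechanically: a trinucleotide set is self-complementary when it equals the set of reverse complements of its members. The sketch below tests only that property; verifying circularity (unique decomposition of every circular word) requires a separate, more involved check not shown here.

```python
# Check whether a set of trinucleotides is self-complementary, i.e. the
# reverse complement of every member is also a member. Illustrates one of
# the defining properties of the code X; circularity is not tested here.
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(tri):
    """Reverse complement of a trinucleotide, e.g. AAC -> GTT."""
    return "".join(COMP[b] for b in reversed(tri))

def is_self_complementary(code):
    return {reverse_complement(t) for t in code} == set(code)
```

For the full 20-trinucleotide code X, this check holds by construction: the set splits into 10 pairs of mutually reverse-complementary trinucleotides.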
SIBIS: a Bayesian model for inconsistent protein sequence estimation.
Khenoussi, Walyd; Vanhoutrève, Renaud; Poch, Olivier; Thompson, Julie D
2014-09-01
The prediction of protein coding genes is a major challenge that depends on the quality of genome sequencing, the accuracy of the model used to elucidate the exonic structure of the genes, and the complexity of the gene splicing process leading to different protein variants. As a consequence, today's protein databases contain a huge amount of inconsistency, due to both natural variants and sequence prediction errors. We have developed a new method, called SIBIS, to detect such inconsistencies based on the evolutionary information in multiple sequence alignments. A Bayesian framework, combined with Dirichlet mixture models, is used to estimate the probability of observing specific amino acids and to detect inconsistent or erroneous sequence segments. We evaluated the performance of SIBIS on a reference set of protein sequences with experimentally validated errors and showed that the sensitivity is significantly higher than that of previous methods, with only a small loss of specificity. We also assessed a large set of human sequences from the UniProt database and found evidence of inconsistency in 48% of the previously uncharacterized sequences. We conclude that the integration of quality control methods like SIBIS in automatic analysis pipelines will be critical for the robust inference of structural, functional and phylogenetic information from these sequences. Source code, implemented in C on a Linux system, and the datasets of protein sequences are freely available for download at http://www.lbgi.fr/∼julie/SIBIS.
A procedure for automating CFD simulations of an inlet-bleed problem
NASA Technical Reports Server (NTRS)
Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.
1995-01-01
A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.
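The abstract notes that geometry and flow conditions can be supplied either interactively or via a namelist. As a hedged illustration (the group and parameter names below are invented, not AUTOMAT's actual input format), a preprocessor of this kind might render user-specified parameters as a Fortran namelist block:

```python
def write_namelist(group, params):
    """Render a parameter dictionary as a Fortran namelist block.

    `group` and the parameter names passed in are hypothetical stand-ins;
    the abstract does not document AUTOMAT's real variable names."""
    lines = [f"&{group}"]
    for key, value in params.items():
        if isinstance(value, bool):
            rendered = ".true." if value else ".false."
        elif isinstance(value, str):
            rendered = f"'{value}'"
        else:
            rendered = repr(value)
        lines.append(f"  {key} = {rendered}")
    lines.append("/")
    return "\n".join(lines)

# Illustrative bleed-hole geometry and freestream conditions.
text = write_namelist("bleed", {"nholes": 3, "diameter": 0.00635,
                                "mach": 2.46, "laminar": False})
print(text)
```

Each downstream input file (grid, initial and boundary conditions) would be emitted from the same parameter set, which is what removes the manual turn-around step the abstract describes.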
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the base of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into consideration. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
77 FR 66601 - Electronic Tariff Filings; Notice of Change to eTariff Type of Filing Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-06
... Tariff Filings; Notice of Change to eTariff Type of Filing Codes Take notice that, effective November 18, 2012, the list of available eTariff Type of Filing Codes (TOFC) will be modified to include a new TOFC... Energy's regulations. Tariff records included in such filings will be automatically accepted to be...
Watch out for reporter gene assays with Renilla luciferase and paclitaxel.
Theile, Dirk; Spalwisz, Adriana; Weiss, Johanna
2013-06-15
Luminescence-based reporter gene assays are widely used in biochemistry. Signals from reporter genes (e.g., firefly luminescence) are usually normalized to signals from constantly luminescing luciferases such as Renilla luciferase. This normalization step can be performed automatically by modern luminometry devices, which provide final results. Here we demonstrate that paclitaxel strikingly enhances Renilla luminescence, thereby potentially flawing results from reporter gene assays. In consequence, these data advocate for careful examination of raw data and militate against automatic data processing. Copyright © 2013 Elsevier Inc. All rights reserved.
Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution
NASA Astrophysics Data System (ADS)
Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin
2018-04-01
The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially if the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. Automatic Grading Tools (AGT) implements the MVC architecture and uses open-source software such as the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. Automatic Grading Tools has also been tested on real problems by submitting C/C++ source code and compiling it. The test results show that the AGT application runs well.
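The compile-run-compare loop at the heart of such a grader can be sketched as follows. This is a minimal illustration, not AGT's actual code: the `submission` callable below is a stand-in for running the compiled C/C++ binary, which a real grader would do in a sandboxed subprocess after invoking the compiler.

```python
def normalize(output):
    """Collapse whitespace so cosmetic differences don't fail a submission."""
    return [" ".join(line.split()) for line in output.strip().splitlines()]

def grade(run_submission, cases):
    """Score a submission against (stdin, expected stdout) test cases.

    `run_submission` is a callable standing in for executing the student's
    compiled program on a given input string."""
    passed = sum(1 for stdin, expected in cases
                 if normalize(run_submission(stdin)) == normalize(expected))
    return passed, len(cases)

# Trivial stand-in "submission": echoes the doubled integer it reads.
submission = lambda stdin: str(2 * int(stdin)) + "\n"
score = grade(submission, [("3", "6"), ("10", "20"), ("4", "9")])
```

Normalizing whitespace before comparison is a common design choice in auto-graders, so that trailing newlines or spacing do not penalize otherwise correct programs.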
2006-11-01
engines will involve a family of common components. It will consist of a real-time operating system and partitioned application software (AS...system will employ a standard hardware and software architecture. It will consist of a real-time operating system and partitioned application...Inputs - Enables Large Cost Reduction 3. Software - FAA Certified Auto Code - Real Time Operating System - Commercial
Liljeqvist, Henning T G; Muscatello, David; Sara, Grant; Dinh, Michael; Lawrence, Glenda L
2014-09-23
Syndromic surveillance in emergency departments (EDs) may be used to deliver early warnings of increases in disease activity, to provide situational awareness during events of public health significance, to supplement other information on trends in acute disease and injury, and to support the development and monitoring of prevention or response strategies. Changes in mental health related ED presentations may be relevant to these goals, provided they can be identified accurately and efficiently. This study aimed to measure the accuracy of using diagnostic codes in electronic ED presentation records to identify mental health-related visits. We selected a random sample of 500 records from a total of 1,815,588 ED electronic presentation records from 59 NSW public hospitals during 2010. ED diagnoses were recorded using any of ICD-9, ICD-10 or SNOMED CT classifications. Three clinicians, blinded to the automatically generated syndromic grouping and each other's classification, reviewed the triage notes and classified each of the 500 visits as mental health-related or not. A "mental health problem presentation" for the purposes of this study was defined as any ED presentation where either a mental disorder or a mental health problem was the reason for the ED visit. The combined clinicians' assessment of the records was used as the reference standard to measure the sensitivity, specificity, and positive and negative predictive values of the automatic classification of coded emergency department diagnoses. Agreement between the reference standard and the automated coded classification was estimated using the Kappa statistic. Agreement between the clinicians' classification and the automated coded classification was substantial (Kappa = 0.73, 95% CI: 0.58-0.87).
The automatic syndromic grouping of coded ED diagnoses for mental health-related visits was found to be moderately sensitive (68%, 95% CI: 46%-84%) and highly specific (99%, 95% CI: 98%-99.7%) when compared with the reference standard in identifying mental health-related ED visits. The positive predictive value was 81% (95% CI: 57%-94%) and the negative predictive value was 98% (95% CI: 97%-99%). Mental health presentations identified using diagnoses coded with various classifications in electronic ED presentation records offer sufficient accuracy for application in near real-time syndromic surveillance.
Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor
NASA Astrophysics Data System (ADS)
Pranger, Casper
2017-04-01
In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
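One building block any such discretization generator needs is the derivation of finite-difference weights from symbolic information. As a minimal sketch (in Python rather than the authors' Mathematica, and independent of their actual implementation), exact stencil weights can be obtained by matching Taylor-series terms on the grid offsets:

```python
from fractions import Fraction
from math import factorial

def fd_weights(offsets, m):
    """Exact finite-difference weights for the m-th derivative on a unit-spaced
    grid at the given integer offsets, obtained by matching Taylor terms:
    sum_i c_i * x_i**k / k! = (1 if k == m else 0) for k = 0..n-1."""
    n = len(offsets)
    # Augmented system rows, kept in exact rational arithmetic.
    rows = [[Fraction(x) ** k / factorial(k) for x in offsets] +
            [Fraction(1 if k == m else 0)] for k in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        rows[col] = [v / rows[col][col] for v in rows[col]]
        for r in range(n):
            if r != col and rows[r][col]:
                rows[r] = [a - rows[r][col] * b
                           for a, b in zip(rows[r], rows[col])]
    return [rows[r][-1] for r in range(n)]

laplacian = fd_weights([-1, 0, 1], 2)   # the classic [1, -2, 1] stencil
```

Working in exact rationals mirrors the symbolic spirit of the abstract: the weights emitted into generated C++ carry no round-off from the derivation step.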
Chou, A; Burke, J
1999-05-01
DNA sequence clustering has become a valuable method in support of gene discovery and gene expression analysis. Our interest lies in leveraging the sequence diversity within clusters of expressed sequence tags (ESTs) to model gene structure for the study of gene variants that arise from, among other things, alternative mRNA splicing, polymorphism, and divergence after gene duplication, fusion, and translocation events. In previous work, CRAW was developed to discover gene variants from assembled clusters of ESTs. Most importantly, novel gene features (the differing units between gene variants, for example alternative exons, polymorphisms, transposable elements, etc.) that are specialized to tissue, disease, population, or developmental states can be identified when these tools collate DNA source information with gene variant discrimination. While the goal is complete automation of novel feature and gene variant detection, current methods are far from perfect and hence the development of effective tools for visualization and exploratory data analysis is of paramount importance in the process of sifting through candidate genes and validating targets. We present CRAWview, a Java-based visualization extension to CRAW. Features that vary between gene forms are displayed using an automatically generated color-coded index. The reporting format of CRAWview gives a brief, high-level summary report to display overlap and divergence within clusters of sequences as well as the ability to 'drill down' and see detailed information concerning regions of interest. Additionally, the alignment viewing and editing capabilities of CRAWview make it possible to interactively correct frame-shifts and otherwise edit cluster assemblies. We have implemented CRAWview as a Java application across Windows NT/95 and UNIX platforms. A beta version of CRAWview will be freely available to academic users from Pangea Systems (http://www.pangeasystems.com). Contact:
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mebarki, F.; Forest, M.G.; Josso, N.
The androgen insensitivity syndrome (AIS) is a recessive X-linked disorder resulting from deficient function of the androgen receptor (AR). The human AR gene has 3 functional domains: the N-terminal domain encoded by exon 1, the DNA-binding domain encoded by exons 2 and 3, and the androgen-binding domain encoded by exons 4 to 8. In order to characterize the molecular defects of the AR gene in AIS, the entire coding regions and the intronic bordering sequences of the AR gene were amplified by PCR before automatic direct sequencing in 45 patients. Twenty-seven different point mutations were found in 32 unrelated AIS patients: 18 with a complete form (CAIS), 14 with a partial form (PAIS); 18 of these mutations are novel mutations, not published to date. Only 3 mutations were repeatedly found: R804H in 3 families, M780I in 3 families, and R774C in 2 families. For 26 of the 32 patients found to have a mutation, maternal DNA was collected and sequenced: 6 de novo mutations were detected (i.e., 23% of the cases). Finally, no mutation was detected in 13 patients (29%): 7 with CAIS and 6 with familial severe PAIS. The latter all presented with perineal hypospadias and micropenis, 4 out of 6 being raised as girls. Diagnosis of AIS in these 13 families in whom no mutation was detected is supported by the following criteria: clinical data, familial history (2 or 3 index cases in the same family), and familial segregation of the polymorphic CAG repeat of the AR gene. Mutations in intronic regions or the promoter of the AR gene could not explain all cases of AIS without mutations in the AR coding regions, because AR binding (performed in 9 out of 13) was normal in 6, suggesting the synthesis of an AR protein. This situation led us to speculate that another X-linked factor associated with the AR could be implicated in some cases of AIS.
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
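The chain-rule machinery that ADIFOR applies source-to-source can be illustrated, in a much-simplified form, with forward-mode AD over operator-overloaded "dual numbers". This is a conceptual sketch only, not ADIFOR's FORTRAN transformation:

```python
class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule,
    mimicking (in spirit only) the derivative code ADIFOR generates."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    @staticmethod
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)
    def __add__(self, other):
        o = Dual._lift(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, other):
        o = Dual._lift(other)
        # Product rule: (f*g)' = f'*g + f*g'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Derivative of f at x, exact up to floating point (no divided differences)."""
    return f(Dual(x, 1.0)).der

slope = derivative(lambda x: x * x + 3 * x, 2.0)   # d/dx (x^2 + 3x) at x = 2
```

The key contrast with centered divided differences, as the abstract notes, is that the derivative propagated this way is exact rather than approximated with a step size.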
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
NASA Astrophysics Data System (ADS)
Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu
2015-03-01
In this paper, we propose a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and a morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting were completed based on the number of concave points and on the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopy cell images; the average true positive rate (TPR) was 98.13% and the average false positive rate (FPR) was 4.47%. The preliminary results showed the feasibility and efficiency of the proposed method.
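The chain-code step can be sketched as follows: a closed boundary is encoded as Freeman directions, and the first difference of the code flags sharp turns, which is how candidate concave points are screened. The paper's full test also involves curvature and is not reproduced here; this is a minimal illustration on a toy boundary.

```python
# 8-connectivity Freeman codes: 0 = East, numbered counter-clockwise.
FREEMAN = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
           (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Freeman chain code of a closed boundary given as successive
    8-connected (x, y) pixels; the last point links back to the first."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(FREEMAN[(x1 - x0, y1 - y0)])
    return codes

def turn_profile(codes):
    """First difference of the chain code (mod 8); large values indicate
    sharp turns, used to screen for concave points."""
    return [(b - a) % 8 for a, b in zip(codes, codes[1:] + codes[:1])]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, counter-clockwise
codes = chain_code(square)                  # [0, 2, 4, 6]
turns = turn_profile(codes)                 # [2, 2, 2, 2]: four left turns
```

On a real cell boundary, runs of near-zero differences correspond to smooth arcs, while isolated large differences mark the candidate points where two touching cells meet.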
Michel, Christian J.
2017-01-01
In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has on average the highest occurrence in reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend here this definition at the gene level. This new statistical approach considers all the genes, i.e., of large and small lengths, with the same weight for searching the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated to each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes. PMID:28420220
Automatic Testcase Generation for Flight Software
NASA Technical Reports Server (NTRS)
Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.
2008-01-01
The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the NASA Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive.
ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
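The blackbox idea, enumerating every input script a grammar admits up to a size bound, can be sketched with a toy grammar. The production names below are illustrative only, not the real SCL grammar, and plain recursive enumeration stands in for the JPF-driven generation the abstract describes:

```python
import itertools

# Toy command grammar in the spirit of SCL-like scripts (names invented).
GRAMMAR = {
    "cmd": [["set", "var", "val"], ["get", "var"]],
    "var": [["x"], ["y"]],
    "val": [["0"], ["1"]],
}

def expand(symbol, depth=4):
    """All terminal token sequences derivable from `symbol` within `depth`
    expansion steps; the depth bound keeps the enumeration finite."""
    if symbol not in GRAMMAR:          # terminal token
        return [[symbol]]
    if depth == 0:
        return []
    results = []
    for production in GRAMMAR[symbol]:
        # Cartesian product of the expansions of each right-hand symbol.
        parts = [expand(s, depth - 1) for s in production]
        for combo in itertools.product(*parts):
            results.append([tok for part in combo for tok in part])
    return results

scripts = [" ".join(toks) for toks in expand("cmd")]   # 6 legal scripts
```

Feeding every generated script to the interpreter under test is what yields the high grammar coverage the abstract expects from this approach.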
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms
Helms, Lucas; Clune, Jeff
2017-01-01
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
Analysis of open-pit mines using high-resolution topography from UAV
NASA Astrophysics Data System (ADS)
Chen, Jianping; Li, Ke; Sofia, Giulia; Tarolli, Paolo
2015-04-01
Among the anthropogenic topographic signatures on the Earth, open-pit mines deserve great importance, since they significantly affect the Earth's surface and its related processes (e.g. erosion, pollution). Their geomorphological analysis, therefore, represents a real challenge for the Earth science community. The purpose of this research is to characterize open-pit mining features using a recently published landscape metric, the Slope Local Length of Auto-Correlation (SLLAC) (Sofia et al., 2014), and high-resolution DEMs (Digital Elevation Models) derived from drone-surveyed topography. The research focuses on two main case studies of iron mines located in the Beijing district (P.R. China). The main topographic information (Digital Surface Models, DSMs) was derived using an Unmanned Aerial Vehicle (UAV) and the Structure from Motion (SfM) photogrammetric technique. The results underline the effectiveness of the adopted methodologies and survey techniques in characterizing the main geomorphic features of the mines. Thanks to the SLLAC, the terraced area produced by the multi-benched, sideways-moving method of iron extraction is automatically depicted, and using SLLAC-derived parameters, the related terrace extent is automatically estimated. The analysis of the correlation length orientation, furthermore, allows identification of the terrace orientation with respect to North, and helps in understanding the shape of the open-pit area. This provides a basis for large-scale and low-cost topographic surveys for sustainable environmental planning and, for example, for the mitigation of anthropogenic environmental impact due to mining. References Sofia G., Marinello F., Tarolli P. 2014. A new landscape metric for the identification of terraced sites: the Slope Local Length of Auto-Correlation (SLLAC). ISPRS Journal of Photogrammetry and Remote Sensing, doi:10.1016/j.isprsjprs.2014.06.018
Nagle, Aniket; Riener, Robert; Wolf, Peter
2015-01-01
Computer games are increasingly being used for training cognitive functions like working memory and attention among the growing population of older adults. While cognitive training games often include elements like difficulty adaptation, rewards, and visual themes to make the games more enjoyable and effective, the effect of different degrees of afforded user control in manipulating these elements has not been systematically studied. To address this issue, two distinct implementations of the three aforementioned game elements were tested among healthy older adults (N = 21, 69.9 ± 6.4 years old) playing a game-like version of the n-back task on a tablet at home for 3 weeks. Two modes were considered, differentiated by the afforded degree of user control of the three elements: user control of difficulty vs. automatic difficulty adaptation, difficulty-dependent rewards vs. automatic feedback messages, and user choice of visual theme vs. no choice. The two modes ("USER-CONTROL" and "AUTO") were compared for frequency of play, duration of play, and in-game performance. Participants were free to play the game whenever and for however long they wished. Participants in USER-CONTROL exhibited significantly higher frequency of playing, total play duration, and in-game performance than participants in AUTO. The results of the present study demonstrate the efficacy of providing user control in the three game elements, while validating a home-based study design in which participants were not bound by any training regimen, and could play the game whenever they wished. The results have implications for designing cognitive training games that elicit higher compliance and better in-game performance, with an emphasis on home-based training.
NASA Astrophysics Data System (ADS)
Zhou, Yuhong; Klages, Peter; Tan, Jun; Chi, Yujie; Stojadinovic, Strahinja; Yang, Ming; Hrycushko, Brian; Medin, Paul; Pompos, Arnold; Jiang, Steve; Albuquerque, Kevin; Jia, Xun
2017-06-01
High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually and/or with aids of preplanned templates. In general, the standard of care would be elevated by conducting an automated process to improve treatment planning efficiency, eliminate human error, and reduce plan quality variations. Thus, our group is developing AutoBrachy, an automated HDR brachytherapy planning suite of modules used to augment a clinical treatment planning system. This paper describes our proof-of-concept module for vaginal cylinder HDR planning that has been fully developed. After a patient CT scan is acquired, the cylinder applicator is automatically segmented using image-processing techniques. The target CTV is generated based on physician-specified treatment depth and length. Locations of the dose calculation point, apex point and vaginal surface point, as well as the central applicator channel coordinates, and the corresponding dwell positions are determined according to their geometric relationship with the applicator and written to a structure file. Dwell times are computed through iterative quadratic optimization techniques. The planning information is then transferred to the treatment planning system through a DICOM-RT interface. The entire process was tested for nine patients. The AutoBrachy cylindrical applicator module was able to generate treatment plans for these cases with clinical grade quality. Computation times varied between 1 and 3 min on an Intel Xeon CPU E3-1226 v3 processor. All geometric components in the automated treatment plans were generated accurately. The applicator channel tip positions agreed with the manually identified positions with submillimeter deviations and the channel orientations between the plans agreed within less than 1 degree. The automatically generated plans obtained clinically acceptable quality.
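The abstract states that dwell times are computed through iterative quadratic optimization. As a rough illustration of that idea, and explicitly not AutoBrachy's actual algorithm, the sketch below fits nonnegative dwell times to prescribed point doses by projected gradient descent on a quadratic objective; the dose-rate matrix is randomly generated purely for demonstration.

```python
import numpy as np

# Illustrative dose-rate kernel: dose at 20 calculation points is linear in
# the 5 dwell times. The matrix values are random stand-ins, not clinical data.
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(20, 5))
t_true = np.array([2.0, 0.5, 1.0, 0.0, 3.0])   # reference dwell times (s)
d = A @ t_true                                 # prescribed doses at the points

# Projected gradient descent on ||A t - d||^2 subject to t >= 0.
t = np.zeros(5)
step = 1.0 / np.linalg.norm(A.T @ A, 2)        # step from the gradient's Lipschitz constant
for _ in range(20000):
    t = t - step * (A.T @ (A @ t - d))         # gradient step on the quadratic
    t = np.clip(t, 0.0, None)                  # keep dwell times nonnegative

residual = float(np.linalg.norm(A @ t - d))
```

The nonnegativity projection is the physically meaningful constraint here: a dwell position cannot contribute negative time, so the optimizer clips rather than allowing the unconstrained least-squares solution.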
HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics
NASA Astrophysics Data System (ADS)
Wiebusch, Martin
2015-10-01
This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.
A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors
NASA Astrophysics Data System (ADS)
Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.
2018-04-01
The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. The calibration data are then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide range of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for the coded aperture γ-camera.
López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław
2017-01-01
The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which combines a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), with nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on face-to-face episodes in which the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, resulting in a more efficient way of analyzing movements than manual coding. PMID:29312075
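The cross-recurrence step described above can be sketched in a few lines: build a cross-recurrence matrix between the two movement series and derive simple measures from it. This is a deliberately simplified one-dimensional toy (real CRQA typically embeds the time series and computes more measures); the radius and the series below are illustrative only.

```python
# Toy cross-recurrence quantification: R[i][j] = 1 when sample x[i] of one
# partner's movement is within `radius` of sample y[j] of the other's.

def cross_recurrence(x, y, radius):
    return [[1 if abs(xi - yj) <= radius else 0 for yj in y] for xi in x]

def rqa_measures(R, lmin=2):
    """Recurrence rate and determinism (share of recurrent points lying on
    diagonal lines of length >= lmin)."""
    n = len(R)
    total = sum(sum(row) for row in R)
    rr = total / (n * n)
    on_lines = 0
    for d in range(-(n - 1), n):
        diag = [R[i][i - d] for i in range(max(0, d), min(n, n + d))]
        run = 0
        for v in diag + [0]:        # sentinel 0 flushes the final run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    det = on_lines / total if total else 0.0
    return rr, det

# two identical toy movement traces: perfect diagonal coupling
R = cross_recurrence([0, 1, 2, 3], [0, 1, 2, 3], radius=0.5)
rr, det = rqa_measures(R)
```

For two identical traces the recurrence matrix is the identity, so all recurrent points sit on the main diagonal and determinism is 1.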
NASA Astrophysics Data System (ADS)
Kawamura, Teruo; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru
In the Evolved UTRA (UMTS Terrestrial Radio Access) uplink, single-carrier frequency division multiple access (SC-FDMA) radio access was adopted owing to its advantageous low peak-to-average power ratio (PAPR) feature, which leads to wide coverage area provisioning with the limited peak transmission power of user equipment. This paper proposes orthogonal pilot channel generation using the combination of FDMA and CDMA in the SC-FDMA-based Evolved UTRA uplink. In the proposed method, we employ distributed FDMA transmission for simultaneously accessing users with different transmission bandwidths, and CDMA transmission for simultaneously accessing users with identical transmission bandwidths. Moreover, we apply a code sequence with a good auto-correlation property, such as a Constant Amplitude Zero Auto-Correlation (CAZAC) sequence, employing a cyclic shift to increase the number of sequences. Simulation results show that the average packet error rate performance using an orthogonal pilot channel with the combination of FDMA and CDMA in a six-user environment, i.e., four users each with a 1.25-MHz transmission bandwidth and two users each with a 5-MHz transmission bandwidth, employing turbo coding with a coding rate of R = 1/2 and QPSK and 16QAM data modulation, coincides well with that in a single-user environment with the same transmission bandwidth. We show that the proposed orthogonal pilot channel structure using the combination of distributed FDMA and CDMA transmissions and the application of the CAZAC sequence is effective in the SC-FDMA-based Evolved UTRA uplink.
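The CAZAC property invoked above can be demonstrated with a Zadoff-Chu sequence, the standard CAZAC family: constant amplitude, and zero periodic auto-correlation at every non-zero lag, which is what makes cyclically shifted copies usable as additional orthogonal pilots. The length and root index below are arbitrary illustrative choices, not parameters from the paper.

```python
# Zadoff-Chu CAZAC sequence (odd length N, root u coprime to N):
# x[n] = exp(-j * pi * u * n * (n + 1) / N)
import cmath

def zadoff_chu(root, length):
    return [cmath.exp(-1j * cmath.pi * root * n * (n + 1) / length)
            for n in range(length)]

def cyclic_shift(seq, shift):
    # shifted copies are what the paper multiplexes as extra pilot sequences
    return seq[shift:] + seq[:shift]

def periodic_autocorrelation(seq, lag):
    n = len(seq)
    return sum(seq[i] * seq[(i + lag) % n].conjugate() for i in range(n))

zc = zadoff_chu(root=1, length=7)
peak = abs(periodic_autocorrelation(zc, 0))      # equals the sequence length
off_peak = abs(periodic_autocorrelation(zc, 3))  # zero: the "ZAC" property
```

Because the off-peak auto-correlation is zero, a cyclic shift larger than the channel delay spread yields a sequence effectively orthogonal to the original, multiplying the number of available pilots.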
NASA Technical Reports Server (NTRS)
Mitchell, T. R.
1974-01-01
The development of a test-engineer-oriented language has been under way at the Kennedy Space Center for several years. The result of this effort is the Ground Operations Aerospace Language, GOAL, a self-documenting, high-order language suitable for coding automatic test, checkout and launch procedures. GOAL is a highly readable, writable, retainable language that is easily learned by nonprogramming-oriented engineers. It is sufficiently powerful for use at all levels of Space Shuttle ground processing, from line replaceable unit checkout to integrated launch day operations. This paper relates the development of the language and describes GOAL and its applications.
AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM
NASA Technical Reports Server (NTRS)
Schroer, B. J.
1994-01-01
The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate comprises manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K of memory and one 360K disk drive. To execute the GPSS program, the PC must have the GPSS/PC System Version 2.0 from Minuteman Software resident. The AMPS/PC program was developed in 1988.
Management of natural resources through automatic cartographic inventory
NASA Technical Reports Server (NTRS)
Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)
1973-01-01
The author has identified the following significant results of the ARNICA program from August 1972 to January 1973: (1) establishment of image-to-object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneously with imagery.
Small passenger car transmission test: Mercury Lynx ATX transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1981-01-01
The testing of a Mercury Lynx automatic transmission is reported. The transmission was tested in accordance with a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. Under these conditions, the transmission attained maximum efficiencies in the mid-ninety percent range for both the drive and coast performance tests. The torque, speed, and efficiency curves are presented, which provide the complete performance characteristics of the Mercury Lynx automatic transmission.
[Analysis of gene mutation in a Chinese family with Norrie disease].
Zhang, Tian-xiao; Zhao, Xiu-li; Hua, Rui; Zhang, Jin-song; Zhang, Xue
2012-09-01
To detect the pathogenic mutation in a Chinese family with Norrie disease. Clinical diagnosis was based on family history, clinical signs and B-ultrasonic examination. Peripheral blood samples were obtained from all available members of a Chinese family with Norrie disease. Genomic DNA was extracted from lymphocytes by the standard SDS-proteinase K-phenol/chloroform method. The two coding exons and all intron-exon boundaries of the NDP gene were PCR amplified using three pairs of primers and subjected to automatic DNA sequencing. The causative mutation was confirmed by restriction enzyme analysis and genotyping analysis in all members. Sequence analysis of the NDP gene revealed a missense mutation, c.220C > T (p.Arg74Cys), in the proband and his mother. Further mutation identification by restriction enzyme analysis and genotyping analysis showed that the proband was homozygous for this mutation. His mother and four other unaffected members (III3, IV4, III5 and II2) were carriers of this mutation. The mutant amino acid is located in the C-terminal cystine knot-like domain, which is a critical motif for the structure and function of NDP. A NDP missense mutation was identified in a Chinese family with Norrie disease.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. The relations between the aeroelastic properties of these new large turbines are changing. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and interest in design optimization is growing.
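The Mathematica-to-Fortran step can be illustrated in miniature: a generator program holds a symbolic-style description of an equation of motion and emits Fortran source with the model's parameters baked in. This pure-Python sketch is a stand-in for the actual VIDYN toolchain; the subroutine name and the one-degree-of-freedom model are hypothetical.

```python
# Emit a Fortran subroutine for the equation of motion
#   q'' = (f - c * q' - k * q) / m
# with concrete parameter values substituted at generation time, mirroring
# the idea of adapting the simulation code to a specific turbine model.

def emit_fortran_eom(name, mass, damping, stiffness):
    return (
        f"      subroutine {name}(q, qdot, f, qddot)\n"
        f"      real*8 q, qdot, f, qddot\n"
        f"      qddot = (f - {damping}d0*qdot - {stiffness}d0*q)/{mass}d0\n"
        f"      return\n"
        f"      end\n"
    )

src = emit_fortran_eom("eom_tower", mass=1000.0, damping=50.0, stiffness=20000.0)
```

Because the parameters are constants in the emitted source, the compiled routine needs no runtime model interpretation, which is where the speed advantage over general-purpose codes comes from.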
High-precision radius automatic measurement using laser differential confocal technology
NASA Astrophysics Data System (ADS)
Jiang, Hongwei; Zhao, Weiqian; Yang, Jiamiao; Guo, Yongkui; Xiao, Yang
2015-02-01
A high-precision automatic radius measurement method using laser differential confocal technology is proposed. Based on the bipolar property of the axial intensity curve, whose null point precisely corresponds to the focus of the objective, the method uses composite PID (proportional-integral-derivative) control to ensure steady movement of the motor during quick-trigger scanning, and uses least-squares linear fitting to obtain the cat-eye and confocal positions, from which the radius of curvature of the lens is calculated. By setting the number of measurement repetitions, precise automatic repeat measurement of the radius of curvature is achieved. The experiment indicates that the method has a measurement accuracy better than 2 ppm and a measurement repeatability better than 0.05 μm. In comparison with the existing manual single measurement, this method has higher measurement precision, stronger immunity to environmental interference, and better measurement repeatability, which is only a tenth that of the former.
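The fitting step above reduces to a small computation: fit a least-squares line through the bipolar axial response near its null, take the zero crossing as the focus position, and obtain the radius from the distance between the cat-eye and confocal positions. The sketch below uses synthetic noiseless data; the real system fits noisy detector samples at both positions.

```python
# Least-squares line s = a*z + b through axial scan samples; the zero
# crossing -b/a estimates the null (focus) position of the differential
# confocal response.

def linear_fit(z, s):
    n = len(z)
    mz, ms = sum(z) / n, sum(s) / n
    a = (sum((zi - mz) * (si - ms) for zi, si in zip(z, s))
         / sum((zi - mz) ** 2 for zi in z))
    return a, ms - a * mz

def null_position(z, s):
    a, b = linear_fit(z, s)
    return -b / a

def radius_of_curvature(z_cateye, z_confocal):
    # radius = axial distance between the two null positions
    return abs(z_confocal - z_cateye)

z = [0.0, 1.0, 2.0, 3.0]
s = [-3.0, -1.0, 1.0, 3.0]      # synthetic bipolar response s = 2z - 3
focus = null_position(z, s)     # zero crossing at z = 1.5
```

Fitting a line to many samples around the null, rather than reading a single intensity maximum, is what gives the differential confocal approach its noise immunity.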
antiSMASH 3.0-a comprehensive resource for the genome mining of biosynthetic gene clusters.
Weber, Tilmann; Blin, Kai; Duddela, Srikanth; Krug, Daniel; Kim, Hyun Uk; Bruccoleri, Robert; Lee, Sang Yup; Fischbach, Michael A; Müller, Rolf; Wohlleben, Wolfgang; Breitling, Rainer; Takano, Eriko; Medema, Marnix H
2015-07-01
Microbial secondary metabolism constitutes a rich source of antibiotics, chemotherapeutics, insecticides and other high-value chemicals. Genome mining of gene clusters that encode the biosynthetic pathways for these metabolites has become a key methodology for novel compound discovery. In 2011, we introduced antiSMASH, a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.secondarymetabolites.org. Here, we present version 3.0 of antiSMASH, which has undergone major improvements. A full integration of the recently published ClusterFinder algorithm now allows using this probabilistic algorithm to detect putative gene clusters of unknown types. Also, a new dereplication variant of the ClusterBlast module now identifies similarities of identified clusters to any of 1172 clusters with known end products. At the enzyme level, active sites of key biosynthetic enzymes are now pinpointed through a curated pattern-matching procedure and Enzyme Commission numbers are assigned to functionally classify all enzyme-coding genes. Additionally, chemical structure prediction has been improved by incorporating polyketide reduction states. Finally, in order for users to be able to organize and analyze multiple antiSMASH outputs in a private setting, a new XML output module allows offline editing of antiSMASH annotations within the Geneious software. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Priority coding for control room alarms
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1994-01-01
Indicating the priority of a spatially fixed, activated alarm tile on an alarm tile array by a shape coding at the tile, and preferably using the same shape coding wherever the same alarm condition is indicated elsewhere in the control room. The status of an alarm tile can change automatically or by operator acknowledgement, but tones and/or flashing cues continue to provide status information to the operator.
10Gbps 2D MGC OCDMA Code over FSO Communication System
NASA Astrophysics Data System (ADS)
Bhanja, Urmila; Khuntia, Arpita; Alamasety, Swati
2017-08-01
Currently, wide-bandwidth signal dissemination with low latency is a leading requisite in various applications. Free-space optical wireless communication has been introduced as a realistic technology for bridging the gap in present high-data-rate fiber connectivity and as a provisional backbone for rapidly deployable wireless communication infrastructure. This manuscript focuses on the implementation of 10 Gbps SAC-OCDMA FSO communication using the modified two-dimensional Golomb code (2D MGC), which possesses better auto-correlation, minimum cross-correlation and high cardinality. A comparison based on the pseudo-orthogonal (PSO) matrix code and the modified two-dimensional Golomb code (2D MGC) is developed in the proposed SAC-OCDMA FSO communication module, taking different parameters into account. The simulation results indicate that the communication radius is bounded by multiple access interference (MAI). In this work, a comparison is made in terms of bit error rate (BER) and quality factor (Q) between the modified two-dimensional Golomb code (2D MGC) and the PSO matrix code. It is observed that the 2D MGC yields better results than the PSO matrix code. The simulation results are validated using Optisystem version 14.
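The correlation properties the comparison rests on (a high auto-correlation peak and bounded off-peak/cross-correlation, which limit MAI) can be checked numerically for any candidate binary codewords. The two words below are illustrative only, not actual 2D MGC or PSO codewords.

```python
# Periodic correlation values of two equal-length binary code words; for a
# good spreading code the zero-shift auto-correlation equals the code weight
# while all other values stay small.

def periodic_correlation(a, b):
    n = len(a)
    return [sum(a[i] * b[(i + s) % n] for i in range(n)) for s in range(n)]

c1 = [1, 0, 1, 1, 0, 0, 0]   # weight-3 word; its shifts overlap in <= 1 chip
c2 = [1, 1, 0, 0, 1, 0, 0]
auto = periodic_correlation(c1, c1)
cross = periodic_correlation(c1, c2)
```

Here `auto[0]` equals the code weight 3 while every off-peak value is 1, and the cross-correlation with the second word never exceeds 2, which is the kind of bound that keeps multiple-access interference manageable.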
Motor automaticity in Parkinson’s disease
Wu, Tao; Hallett, Mark; Chan, Piu
2017-01-01
Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020
Code of Federal Regulations, 2010 CFR
2010-10-01
... Disabled; and (5) Other acquisitions not using full and open competition, if authorized by Subpart 6.2 or 6... [table of Federal Service Codes from the Federal Procurement Data System Product/Service Code] ... military services overseas. (2)(i) Automatic data processing (ADP) telecommunications and...
Staging Sleep in Polysomnograms: Analysis of Inter-Scorer Variability
Younes, Magdy; Raneri, Jill; Hanly, Patrick
2016-01-01
Study Objectives: To determine the reasons for inter-scorer variability in sleep staging of polysomnograms (PSGs). Methods: Fifty-six PSGs were scored (5-stage sleep scoring) by 2 experienced technologists (first manual, M1). Months later, the technologists edited their own scoring (second manual, M2) based upon feedback from the investigators that highlighted differences between their scoring. The PSGs were then scored with an automatic system (Auto) and the technologists edited them, epoch-by-epoch (Edited-Auto). This resulted in 6 different manual scores for each PSG. Epochs were classified as scorer errors (one M1 score differed from the other 5 scores), scorer bias (all 3 scores of each technologist were similar, but differed from the other technologist) and equivocal (sleep scoring was inconsistent within and between technologists). Results: Percent agreement after M1 was 78.9% ± 9.0% and was unchanged after M2 (78.1% ± 9.7%) despite numerous edits (≈40/PSG) by the scorers. Agreement in Edited-Auto was higher (86.5% ± 6.4%, p < 1E−9). Scorer errors (< 2% of epochs) and scorer bias (3.5% ± 2.3% of epochs) together accounted for < 20% of M1 disagreements. A large number of epochs (92 ± 44/PSG) with scoring agreement in M1 were subsequently changed in M2 and/or Edited-Auto. Equivocal epochs, which showed scoring inconsistency, accounted for 28% ± 12% of all epochs, and up to 76% of all epochs in individual patients. Disagreements were largely between awake/NREM, N1/N2, and N2/N3 sleep. Conclusion: Inter-scorer variability is largely due to epochs that are difficult to classify. Availability of digitally identified events (e.g., spindles) or calculated variables (e.g., depth of sleep, delta wave duration) during scoring may greatly reduce scoring variability. Citation: Younes M, Raneri J, Hanly P. Staging sleep in polysomnograms: analysis of inter-scorer variability. J Clin Sleep Med 2016;12(6):885–894. PMID:27070243
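The epoch classification rule described in the Methods can be sketched directly; the stage labels and the (M1, M2, Edited-Auto) tuple layout are assumptions for illustration, not the study's data format.

```python
# Classify one 30-s epoch from six scores: each technologist contributes
# (M1, M2, Edited-Auto). Categories follow the abstract's definitions:
# scorer error, scorer bias, equivocal (plus full agreement).

def classify_epoch(a_scores, b_scores):
    all6 = list(a_scores) + list(b_scores)
    if len(set(all6)) == 1:
        return "agreement"
    # scorer error: one first-pass (M1) score differs from the other five
    for m1, rest in ((all6[0], all6[1:]), (all6[3], all6[:3] + all6[4:])):
        if len(set(rest)) == 1 and m1 != rest[0]:
            return "scorer error"
    # scorer bias: each technologist self-consistent, but the two disagree
    if len(set(a_scores)) == 1 and len(set(b_scores)) == 1:
        return "scorer bias"
    return "equivocal"
```

Anything that is neither a lone first-pass slip nor a consistent between-scorer split falls into the equivocal bucket, which is exactly the category the study found to dominate disagreements.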
Basu, Swaraj; Larsson, Erik
2018-05-31
Antisense transcripts and other long non-coding RNAs are pervasive in mammalian cells, and some of these molecules have been proposed to regulate proximal protein-coding genes in cis. For example, non-coding transcription can contribute to inactivation of tumor suppressor genes in cancer, and antisense transcripts have been implicated in the epigenetic inactivation of imprinted genes. However, our knowledge is still limited and more such regulatory interactions likely await discovery. Here, we make use of available gene expression data from a large compendium of human tumors to generate hypotheses regarding non-coding-to-coding cis-regulatory relationships with emphasis on negative associations, as these are less likely to arise for reasons other than cis-regulation. We document a large number of possible regulatory interactions, including 193 coding/non-coding pairs that show expression patterns compatible with negative cis-regulation. Importantly, by this approach we capture several known cases, and many of the involved coding genes have known roles in cancer. Our study provides a large catalog of putative non-coding/coding cis-regulatory pairs that may serve as a basis for further experimental validation and characterization. Copyright © 2018 Basu and Larsson.
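The screening idea reduces to flagging non-coding/coding pairs whose expression across tumors is strongly negatively correlated. In this minimal sketch, Pearson correlation stands in for whatever association measure the study used, and the gene names, expression values, and threshold are invented for illustration.

```python
# Flag candidate cis-repression pairs: antisense/coding partners whose
# expression profiles across tumour samples correlate below a threshold.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def negative_pairs(expr, pairs, threshold=-0.5):
    return [(nc, c) for nc, c in pairs if pearson(expr[nc], expr[c]) < threshold]

# toy expression profiles over four tumour samples (hypothetical names)
expr = {"AS1": [1, 2, 3, 4], "GENE1": [4, 3, 2, 1], "GENE2": [1, 2, 3, 4]}
hits = negative_pairs(expr, [("AS1", "GENE1"), ("AS1", "GENE2")])
```

Only the anti-correlated pair survives the filter; a genome-scale run would additionally correct for confounders such as copy number and tumour purity.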
Automated design of infrared digital metamaterials by genetic algorithm
NASA Astrophysics Data System (ADS)
Sugino, Yuya; Ishikawa, Atsushi; Hayashi, Yasuhiko; Tsuruta, Kenji
2017-08-01
We demonstrate automatic design of infrared (IR) metamaterials using a genetic algorithm (GA) and experimentally characterize their IR properties. To implement the automated design scheme for the metamaterial structures, we adopt a digital metamaterial consisting of 7 × 7 Au nano-pixels, each with an area of 200 nm × 200 nm, whose placements are coded as binary genes in the GA optimization process. The GA, combined with three-dimensional (3D) finite element method (FEM) simulation, is developed and applied to automatically construct a digital metamaterial that exhibits pronounced plasmonic resonances at the target IR frequencies. Based on the numerical results, the metamaterials are fabricated on a Si substrate over an area of 1 mm × 1 mm using EB lithography, Cr/Au (2/20 nm) deposition, and a lift-off process. In the FT-IR measurements, pronounced plasmonic responses of each metamaterial are clearly observed near the targeted frequencies, although the synthesized pixel arrangements of the metamaterials are seemingly random. The corresponding numerical simulations reveal the important resonant behavior of each pixel and their hybridized systems. Our approach is fully computer-aided, without artificial manipulation, thus paving the way toward novel device design for next-generation plasmonic device applications.
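The encoding and optimization loop can be sketched compactly: each candidate metamaterial is a 49-bit gene (one bit per nano-pixel), evolved by selection, crossover, and mutation. The fitness function below is a toy stand-in for the paper's 3D FEM evaluation of the plasmonic response, so only the binary encoding and the GA loop itself are meaningful; all hyperparameters are illustrative.

```python
# GA over 7 x 7 binary pixel genes; fitness is a placeholder objective
# (target metal fill fraction), NOT a real FEM resonance score.
import random

N = 49  # 7 x 7 Au nano-pixel placements, one bit each

def fitness(gene):
    return -abs(sum(gene) - 24)

def evolve(pop_size=40, generations=60, p_mut=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N)
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit flips
            children.append(child)
        pop = parents + children                 # elitist replacement
    return max(pop, key=fitness)

best = evolve()
```

Swapping the placeholder objective for a call into an FEM solver turns this skeleton into the kind of closed-loop design scheme the paper describes, at the cost of one full-wave simulation per fitness evaluation.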
Tools for Rapid Understanding of Malware Code
2015-05-07
cloaking techniques. We used three malware detectors, covering a wide spectrum of detection technologies, for our experiments: VirusTotal, an online ... Analysis and Manipulation (SCAM), 2014. [9] Babak Yadegari, Brian Johannesmeyer, Benjamin Whitely, and Saumya Debray. A generic approach to automatic...
NASA Astrophysics Data System (ADS)
Bertholet, Jenny; Toftegaard, Jakob; Hansen, Rune; Worm, Esben S.; Wan, Hanlin; Parikh, Parag J.; Weber, Britta; Høyer, Morten; Poulsen, Per R.
2018-03-01
The purpose of this study was to develop, validate and clinically demonstrate fully automatic tumour motion monitoring on a conventional linear accelerator by combined optical and sparse monoscopic imaging with kilovoltage x-rays (COSMIK). COSMIK combines auto-segmentation of implanted fiducial markers in cone-beam computed tomography (CBCT) projections and intra-treatment kV images with simultaneous streaming of an external motion signal. A pre-treatment CBCT is acquired with simultaneous recording of the motion of an external marker block on the abdomen. The 3-dimensional (3D) marker motion during the CBCT is estimated from the auto-segmented positions in the projections and used to optimize an external correlation model (ECM) of internal motion as a function of external motion. During treatment, the ECM estimates the internal motion from the external motion at 20 Hz. kV images are acquired every 3 s, auto-segmented, and used to update the ECM for baseline shifts between internal and external motion. The COSMIK method was validated using Calypso-recorded internal tumour motion with simultaneous camera-recorded external motion for 15 liver stereotactic body radiotherapy (SBRT) patients. The validation included phantom experiments and simulations thereof for 12 fractions and further simulations for 42 fractions. The simulations compared the accuracy of COSMIK with ECM-based monitoring without model updates and with model updates based on stereoscopic imaging as well as continuous kilovoltage intrafraction monitoring (KIM) at 10 Hz without an external signal. Clinical real-time tumour motion monitoring with COSMIK was performed offline for 14 liver SBRT patients (41 fractions) and online for one patient (two fractions). The mean 3D root-mean-square error for the four monitoring methods was 1.61 mm (COSMIK), 2.31 mm (ECM without updates), 1.49 mm (ECM with stereoscopic updates) and 0.75 mm (KIM).
COSMIK is the first combined kV/optical real-time motion monitoring method used clinically online on a conventional accelerator. COSMIK gives less imaging dose than KIM and is in addition applicable when the kV imager cannot be deployed such as during non-coplanar fields.
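The external correlation model workflow above (fit from CBCT data, estimate at 20 Hz, update for baseline shifts at each kV image) can be sketched with a deliberately simplified linear ECM. The published COSMIK model and its update rule are more elaborate; this mirrors only the workflow, and all positions below are synthetic.

```python
# Simplified ECM: internal target position as a straight line in the
# external marker position, with the baseline nudged toward each new
# kV-derived internal/external sample pair.

class ExternalCorrelationModel:
    def fit(self, external, internal):
        """Least-squares line internal = gain * external + baseline."""
        n = len(external)
        me, mi = sum(external) / n, sum(internal) / n
        self.gain = (sum((e - me) * (i - mi) for e, i in zip(external, internal))
                     / sum((e - me) ** 2 for e in external))
        self.baseline = mi - self.gain * me

    def estimate(self, external):
        return self.gain * external + self.baseline

    def update_baseline(self, external, internal_measured, weight=0.5):
        """Absorb part of the baseline shift observed at a kV acquisition."""
        self.baseline += weight * (internal_measured - self.estimate(external))

ecm = ExternalCorrelationModel()
ecm.fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # internal = 2*ext + 1
```

Between kV images the model runs on the optical signal alone, which is why the sparse 3-s imaging keeps the dose low while the updates keep the estimate from drifting with baseline shifts.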
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve existing features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter based passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning to arrive at a set of parameters that balances AF performance in terms of speed and accuracy, ultimately delaying product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieving superior AF performance, both in good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
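The core of band-pass-filter passive AF described above can be sketched as a coarse-to-fine search over discrete lens positions that maximizes a high-frequency sharpness measure. The step sizes and the synthetic sharpness profile below are illustrative, not tuned parameters from the dissertation.

```python
# Toy passive AF: a focus measure plus a coarse pass over every 8th lens
# position followed by a fine pass in the neighbourhood of the coarse peak.

def sharpness(row):
    """Focus measure: energy of a first-difference (high-pass) filter."""
    return sum((b - a) ** 2 for a, b in zip(row, row[1:]))

def search_focus(measure, positions, coarse=8):
    best = max(positions[::coarse], key=measure)       # coarse pass
    lo, hi = best - coarse, best + coarse
    return max((p for p in positions if lo <= p <= hi), key=measure)  # fine pass

profile = lambda p: -(p - 37) ** 2   # synthetic scene: sharpest at position 37
in_focus = search_focus(profile, list(range(101)))
```

The trade-off the dissertation formalizes is visible even here: a larger coarse step means fewer sharpness evaluations (speed) but risks skipping past a narrow peak (accuracy), which is why the pass-bands and step sizes need principled rather than trial-and-error selection.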
Chan, Kuang-Lim; Rosli, Rozana; Tatarinova, Tatiana V; Hogan, Michael; Firdaus-Raih, Mohd; Low, Eng-Ti Leslie
2017-01-27
Gene prediction is one of the most important steps in the genome annotation process. A large number of software tools and pipelines developed with various computing techniques are available for gene prediction. However, these systems have yet to accurately predict all or even most of the protein-coding regions. Furthermore, none of the currently available gene-finders has a universal Hidden Markov Model (HMM) that can perform gene prediction for all organisms equally well in an automatic fashion. We present an automated gene prediction pipeline, Seqping, which uses self-training HMM models and transcriptomic data. The pipeline processes the genome and transcriptome sequences of the target species using the GlimmerHMM, SNAP, and AUGUSTUS pipelines, followed by the MAKER2 program to combine predictions from the three tools in association with the transcriptomic evidence. Seqping generates species-specific HMMs that are able to offer unbiased gene predictions. The pipeline was evaluated using the Oryza sativa and Arabidopsis thaliana genomes. Benchmarking Universal Single-Copy Orthologs (BUSCO) analysis showed that the pipeline was able to identify at least 95% of BUSCO's plantae dataset. Our evaluation shows that Seqping was able to generate better gene predictions than three HMM-based programs (MAKER2, GlimmerHMM and AUGUSTUS) using their respective available HMMs. Seqping had the highest accuracy in rice (0.5648 for CDS, 0.4468 for exon, and 0.6695 for nucleotide structure) and A. thaliana (0.5808 for CDS, 0.5955 for exon, and 0.8839 for nucleotide structure). Seqping provides researchers a seamless pipeline to train species-specific HMMs and predict genes in newly sequenced or less-studied genomes. We conclude that the Seqping predictions are more accurate than gene predictions using the other three approaches with the default or available HMMs.