Sample records for "automatisch generierten sourcecode"

  1. Automatische Kamerapositionierung für intra-operative Visualisierungen in der onkologischen Leberchirurgie (Automatic Camera Positioning for Intra-operative Visualizations in Oncological Liver Surgery)

    NASA Astrophysics Data System (ADS)

    Mühler, Konrad; Hansen, Christian; Neugebauer, Mathias; Preim, Bernhard

    This contribution presents a method for automatically computing good viewpoints onto three-dimensional planning models for liver surgery. The method dynamically adapts the position of the virtual camera during an operation, in particular when oncological planning data are updated with new intra-operative findings.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    SmartImport.py is a Python source-code file that implements a replacement for the standard Python module importer. The code is derived from knee.py, a file in the standard Python distribution, and adds functionality to improve the performance of Python module imports in massively parallel contexts.
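
    The mechanism such a replacement importer relies on can be sketched with Python's importlib meta-path hooks. The finder below is a toy illustration, not SmartImport's actual code (the module name `fastmod` is invented): it serves module source from an in-memory cache instead of touching the filesystem, the kind of hook a parallel-friendly importer builds on.

```python
import importlib.abc
import importlib.util
import sys

class InMemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Serve module source from an in-memory dict instead of the filesystem."""

    def __init__(self, sources):
        self.sources = sources  # module name -> source text

    def find_spec(self, name, path=None, target=None):
        # Claim only the modules we hold source for; defer everything else.
        if name in self.sources:
            return importlib.util.spec_from_loader(name, self)
        return None

    def create_module(self, spec):
        return None  # use Python's default module creation

    def exec_module(self, module):
        # Compile and execute the cached source in the new module's namespace.
        code = compile(self.sources[module.__name__], module.__name__, "exec")
        exec(code, module.__dict__)

# Install the finder ahead of the standard importers.
sys.meta_path.insert(0, InMemoryFinder({"fastmod": "ANSWER = 42\n"}))

import fastmod
print(fastmod.ANSWER)  # -> 42
```

    In a parallel setting, the cache could be filled once by a single rank and broadcast, so thousands of processes avoid hammering the filesystem with identical import lookups.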

  3. 78 FR 53630 - Airworthiness Directives; Alexander Schleicher GmbH & Co. Segelflugzeugbau Sailplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-30

    ... rod for conformity following Alexander Schleicher Automatischer Höhenruderanschluß (English... Manual (CAM) 18, Maintenance, Repair, And Alteration, Of Airframes, Powerplants, Propellers, And...--and-- Guidance--Library/rgccab.nsf/0/41df1277f2dc7e0e86257bcf005112bf/ $FILE/CAM--18--1959.pdf...

  4. Ausdruckskraft und Regelmaessigkeit: Was Esperanto fuer automatische Uebersetzung geeignet macht (Expressiveness and Formal Regularity: What Makes Esperanto Suitable for Machine Translation).

    ERIC Educational Resources Information Center

    Schubert, Klaus

    1988-01-01

    Describes DLT, the multilingual machine translation system that uses Esperanto as an intermediate language in which substantial portions of the translation subprocesses are carried out. The criteria for choosing an intermediate language and the reasons for preferring Esperanto over other languages are explained. (Author/DJD)

  5. Source-Code Plagiarism in Universities: A Comparative Study of Student Perspectives in China and the UK

    ERIC Educational Resources Information Center

    Zhang, Dongyang; Joy, Mike; Cosma, Georgina; Boyatt, Russell; Sinclair, Jane; Yau, Jane

    2014-01-01

    There has been much research and discussion relating to variations in plagiaristic activity observed in students from different demographic backgrounds. Differences in behaviour have been noted in many studies, although the underlying reasons are still a matter of debate. Existing work focuses mainly on textual plagiarism, and most often derives…

  6. Implementation of Scene Shadows in the Target Acquisition TDA (TARGAC).

    DTIC Science & Technology

    1994-11-01

    B-2 APPENDIX C: ENGINEERING CHANGE REPORTS C-1 APPENDIX D: TASK... Appendix C contains the details of each change made. Each change is accompanied by an Engineering Change Report (ECR) and in-line documentation of the source code. Appendix D is a formal design document of the changes needed to implement shadowing by small-scale features. The implementation presented in

  7. μπ: A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete-event style of execution and due to the scalability and efficiency of the underlying parallel discrete-event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
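
    The discrete-event execution style mentioned above can be illustrated with a minimal event loop (an illustration of the style only, not the simulator's actual API): events carry virtual timestamps and are processed in virtual-time order rather than wall-clock order, which is what makes execution repeatable.

```python
import heapq

# Minimal discrete-event loop: each event is (virtual_time, rank, action),
# and the simulator advances virtual time by popping the earliest event.
events = []
log = []

def schedule(vtime, rank, action):
    heapq.heappush(events, (vtime, rank, action))

def run():
    while events:
        vtime, rank, action = heapq.heappop(events)
        log.append((vtime, rank, action))  # "execute" by recording

schedule(0.0, 0, "send")
schedule(0.5, 1, "recv")     # message delivery at a later virtual time
schedule(0.0, 1, "compute")
run()
print(log)  # -> [(0.0, 0, 'send'), (0.0, 1, 'compute'), (0.5, 1, 'recv')]
```

    Because ordering depends only on virtual timestamps, the same schedule replays identically no matter how fast the host machine runs.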

  8. GPU-basierte Smart Visibility Techniken für die Planung von Tumor-Operationen (GPU-Based Smart Visibility Techniques for Planning Tumor Operations)

    NASA Astrophysics Data System (ADS)

    Tietjen, Christian; Kubisch, Christoph; Hiller, Stefan; Preim, Bernhard

    When planning tumor operations, assessing distances and infiltration relative to vital structures is important. Numerous techniques have been developed for this in medical visualization, collectively referred to as smart visibility; among them are ghost views and section views. This contribution presents a GPU-based implementation of these techniques for polygonal data. The techniques are parameterized automatically in order to make clinical use possible.

  9. Verbesserte Visualisierung der Koronararterien in MSCT-Daten mit direkter Vergleichbarkeit zur Angiographie (Improved Visualization of the Coronary Arteries in MSCT Data with Direct Comparability to Angiography)

    NASA Astrophysics Data System (ADS)

    Lacalli, Christina; Jähne, Marion; Wesarg, Stefan

    In this contribution we present new, automated methods for visualizing the coronary arteries on the one hand and for direct comparability with conventional angiograms on the other. Our approach comprises methods for the automatic extraction of the heart from contrast-enhanced CT data, as well as for masking the heart's large contrast-filled cavities, in order to improve the visibility of the coronary arteries in volume rendering. For direct comparison with conventional angiography, a method was developed for automatically generating projection views from the CT data.

  10. Visualisierung analoger Schaltungen durch 3-D Animation von transienten SPICE-Simulationen (Visualization of Analog Circuits through 3-D Animation of Transient SPICE Simulations)

    NASA Astrophysics Data System (ADS)

    Becker, J.; Manoli, Y.

    2007-06-01

    When drawing analog circuit schematics, one often tries to exploit the potential distribution in the circuit and place components in order of decreasing potential. With computer support, a generalized three-dimensional placement strategy can be applied that, based solely on a circuit's potential values, automatically generates a technically exact potential representation. This makes it possible to display the results of transient SPICE simulations at every time step and to produce an animation of the circuit's temporal behavior. The implementation of this method for embedding in a web-based learning and working platform is explained in the following.

  11. Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor that supports both Fortran 77 and C as well as a number of different message-passing libraries, including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, and analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
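
    The basic operation an automatic source-code instrumentor performs, inserting timestamped enter/exit records around program units, can be sketched in a few lines of Python (an illustration of the idea only, not AIMS code; the trace format here is invented):

```python
import functools
import time

TRACE = []  # in-memory event log, a stand-in for a trace-collection library

def instrument(fn):
    """Record entry/exit timestamps around fn, the kind of wrapper an
    instrumentor such as Xinstrument inserts automatically."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(("enter", fn.__name__, time.perf_counter()))
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append(("exit", fn.__name__, time.perf_counter()))
    return wrapper

@instrument
def solve():
    return sum(i * i for i in range(1000))

solve()
print([event[:2] for event in TRACE])  # -> [('enter', 'solve'), ('exit', 'solve')]
```

    A post-processor can then pair enter/exit records to attribute time to program units, which is what profiling and trace-animation tools consume.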

  12. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    NASA Astrophysics Data System (ADS)

    Lohn, Stefan B.; Dong, Xin; Carminati, Federico

    2012-12-01

    Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance can no longer be improved by raising the clock frequency, because the technical limits are almost reached; instead, parallel execution must be used to gain performance. Resources like main memory, the cache hierarchy, the bandwidth of the memory bus, and the links between cores and sockets will not improve as quickly. Hence, parallelism only yields performance gains if memory usage is optimized and communication between threads is minimized. Moreover, concurrent programming has become a domain for experts: implementing multi-threading is error-prone and labor-intensive, and a full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate adapting AliRoot to the needs of multi-threading and to provide parallel-processing capability by using a semi-automatic source-to-source transformation, addressing the problems described above and providing a straightforward path to parallelization with almost no interference between threads. This keeps the approach simple and reduces the manual changes required in the code. In a first step, unconditional thread-safety is introduced to bring the original sequential, thread-unaware source code into a position to use multi-threading. Further investigation then identifies candidate classes that are worth sharing among threads. In a second step, the transformation changes the code to share these classes, and finally verifies that no invalid interference between threads remains.
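
    The flavor of such a source-to-source transformation can be sketched with Python's `ast` module (an analogy only: AliRoot is C++ and the actual tool differs). The transformer rewrites selected global accesses into attribute accesses on a thread-local object, a crude form of the per-thread state duplication that unconditional thread-safety requires; the names `counter` and `_tls` are invented for the example.

```python
import ast

PER_THREAD = {"counter"}  # globals to privatize per thread (hypothetical)

class PrivatizeGlobals(ast.NodeTransformer):
    """Rewrite bare references to selected globals into attribute access
    on a thread-local object `_tls`, so each thread sees its own copy."""

    def visit_Name(self, node):
        if node.id in PER_THREAD:
            return ast.copy_location(
                ast.Attribute(value=ast.Name(id="_tls", ctx=ast.Load()),
                              attr=node.id, ctx=node.ctx),
                node)
        return node

src = "counter = counter + 1"
tree = PrivatizeGlobals().visit(ast.parse(src))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # -> "_tls.counter = _tls.counter + 1"
```

    Transforming at the syntax-tree level rather than editing text by hand is what makes the rewrite semi-automatic and repeatable across a large code base.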

  13. Gobe: an interactive, web-based tool for comparative genomic visualization.

    PubMed

    Pedersen, Brent S; Tang, Haibao; Freeling, Michael

    2011-04-01

    Gobe is a web-based tool for viewing comparative genomic data. It supports viewing multiple genomic regions simultaneously. Its simple text format and Flash-based rendering make it an interactive, exploratory research tool. Gobe can be used without installation through our web service, or downloaded and customized with stylesheets and JavaScript callback functions. Gobe is a Flash application that runs in all modern web browsers. The full source code, including that for the online web application, is available under the MIT license at: http://github.com/brentp/gobe. Sample applications are hosted at http://try-gobe.appspot.com/ and http://synteny.cnr.berkeley.edu/gobe-app/.

  14. Quantitative Analyse und Visualisierung der Herzfunktionen (Quantitative Analysis and Visualization of Cardiac Function)

    NASA Astrophysics Data System (ADS)

    Sauer, Anne; Schwarz, Tobias; Engel, Nicole; Seitel, Mathias; Kenngott, Hannes; Mohrhardt, Carsten; Loßnitzer, Dirk; Giannitsis, Evangelos; Katus, Hugo A.; Meinzer, Hans-Peter

    Computer-assisted, image-based analysis of cardiac function is now standard in cardiology. The available products usually demand a high degree of user interaction and thus extra time. This work presents an approach that gives the cardiologist a largely automatic analysis of cardiac function from MRI image data, saving time. All relevant cardiac-physiological parameters are computed and visualized with diagrams and graphs. The computations are evaluated by comparing the derived values with manually measured ones. The resulting mean error, 2.85 mm for wall thickness and 1.61 mm for wall thickening, is still within one pixel of the images used.

  15. XML-basierte Produkt- und Prozessdaten für die Leittechnik-Projektierung (XML-Based Product and Process Data for Process-Control Engineering)

    NASA Astrophysics Data System (ADS)

    Schleipen, Miriam

    Process-control systems are used to monitor and control highly complex production processes. Constant change forces production plants to be adaptable, and the technology must support this flexibility. Every change to the production process must be planned, and the plants reconfigured and re-engineered; new process displays for the operating and control systems must also be created. At Fraunhofer IITB, an engineering framework was developed that automatically configures the control system and generates the associated process visualization. This contribution presents the module that creates the process displays. Besides visualizing the plants themselves, running processes and the products being processed are also displayed; identification systems can, for example, be coupled with the control technology.

  16. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code, using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code; these optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global, but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
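
    An architecture-independent, source-level optimization of the kind the survey describes can be illustrated with constant folding over a Python syntax tree (a minimal sketch, not the surveyed optimizer): expressions whose operands are all compile-time constants are replaced by their computed value.

```python
import ast

class ConstantFold(ast.NodeTransformer):
    """Fold binary operations whose operands are literal constants."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold innermost subexpressions first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                # Evaluate the constant subexpression at "compile time".
                value = eval(compile(ast.Expression(body=node), "<fold>", "eval"))
            except Exception:
                return node  # e.g. division by zero: leave it to runtime
            return ast.copy_location(ast.Constant(value=value), node)
        return node

tree = ast.parse("x = 2 * 3 + 4")
folded = ConstantFold().visit(tree)
ast.fix_missing_locations(folded)
print(ast.unparse(folded))  # -> "x = 10"
```

    Because the rewrite consults only the program text, never the target machine, it is architecture independent in exactly the survey's sense.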

  17. Growthcurver: an R package for obtaining interpretable metrics from microbial growth curves.

    PubMed

    Sprouffske, Kathleen; Wagner, Andreas

    2016-04-19

    Plate readers can measure the growth curves of many microbial strains in a high-throughput fashion. The hundreds of absorbance readings collected simultaneously for hundreds of samples create technical hurdles for data analysis. Growthcurver summarizes the growth characteristics of microbial growth curve experiments conducted in a plate reader. The data are fitted to a standard form of the logistic equation, and the parameters have clear interpretations on population-level characteristics, like doubling time, carrying capacity, and growth rate. Growthcurver is an easy-to-use R package available for installation from the Comprehensive R Archive Network (CRAN). The source code is available under the GNU General Public License and can be obtained from Github (Sprouffske K, Growthcurver sourcecode, 2016).
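
    The standard logistic form such growth-curve fits use, N(t) = K / (1 + ((K - N0)/N0) e^(-rt)), and the derived doubling time ln(2)/r can be written out directly (a sketch of the model only; the parameter values below are hypothetical, not Growthcurver output):

```python
import math

def logistic(t, k, n0, r):
    """Population size at time t under the standard logistic model:
    carrying capacity k, initial size n0, intrinsic growth rate r."""
    return k / (1 + ((k - n0) / n0) * math.exp(-r * t))

k, n0, r = 1.0, 0.01, 0.5            # hypothetical fitted parameters
doubling_time = math.log(2) / r      # fastest doubling time, ln(2)/r

print(round(doubling_time, 3))              # -> 1.386
print(round(logistic(0.0, k, n0, r), 3))    # -> 0.01 (curve starts at n0)
```

    Fitting tools estimate k, n0, and r from absorbance readings; the point here is only that each fitted parameter maps to an interpretable population-level quantity.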

  18. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data-processing in FPGA enables to construct an almost dead-time free data acquisition system which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead-time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.
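
    The flavor of a sampling-ADC feature-extraction pass, a smoothing filter followed by a threshold trigger, can be sketched in a few lines (an illustration only; the actual VHDL algorithm and its parameters differ, and the sample values here are invented):

```python
def moving_average(samples, window):
    """Smooth raw ADC samples with a sliding-window mean."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def trigger_index(filtered, threshold):
    """Return the index of the first filtered sample above threshold,
    a minimal stand-in for a pulse trigger; None if no pulse is found."""
    for i, value in enumerate(filtered):
        if value > threshold:
            return i
    return None

adc = [0, 0, 1, 7, 9, 8, 4, 1, 0]   # a digitized pulse (made-up values)
smooth = moving_average(adc, 3)
print(trigger_index(smooth, 2.0))    # -> 1
```

    In firmware the same pipeline runs sample-by-sample in fixed-point arithmetic, which is what allows nearly dead-time-free operation.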

  19. FTAP: a Linux-based program for tapping and music experiments.

    PubMed

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.

  20. Using leap motion to investigate the emergence of structure in speech and language.

    PubMed

    Eryilmaz, Kerem; Little, Hannah

    2017-10-01

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretized spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimize interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach that uses the Leap Motion, an infrared controller that can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyze. The experimental interface was built using free, and mostly open-source, libraries in Python. We provide our source code for other researchers as open source.

  1. SynergyFinder: a web application for analyzing drug combination dose-response matrix data.

    PubMed

    Ianevski, Aleksandr; He, Liye; Aittokallio, Tero; Tang, Jing

    2017-08-01

    Rational design of drug combinations has become a promising strategy to tackle the drug sensitivity and resistance problem in cancer treatment. To systematically evaluate the pre-clinical significance of pairwise drug combinations, functional screening assays that probe combination effects in a dose-response matrix assay are commonly used. To facilitate the analysis of such drug combination experiments, we implemented a web application that uses key functions of the R package SynergyFinder, and provides not only the flexibility of using multiple synergy-scoring models but also a user-friendly interface for visualizing the drug combination landscapes in an interactive manner. The SynergyFinder web application is freely accessible at https://synergyfinder.fimm.fi; the R package and its source code are freely available at http://bioconductor.org/packages/release/bioc/html/synergyfinder.html. Contact: jing.tang@helsinki.fi. © The Author(s) 2017. Published by Oxford University Press.

  2. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  3. Kontinuierliche Wanddickenbestimmung und Visualisierung des linken Herzventrikels (Continuous Wall-Thickness Determination and Visualization of the Left Ventricle)

    NASA Astrophysics Data System (ADS)

    Dornheim, Lars; Hahn, Peter; Oeltze, Steffen; Preim, Bernhard; Tönnies, Klaus D.

    To detect defects in cardiac function, the change in wall thickness of the left ventricle can be measured in temporal MRI acquisition sequences. At present, this determination generally uses only laboriously hand-made segmentations of the end-systole and end-diastole. We present a method, automatic except for seed-point initialization, for determining the wall thickness of the left ventricle and its change over time, based on a complete segmentation of the heart wall in all time steps by a dynamic three-dimensional shape model (a stable mass-spring model). In addition to the gray-value information of a given time step, this model also uses the segmentations of the other time steps, and it is constructed so that wall thicknesses can be measured and visualized directly. In this way, local wall-thickness extrema are detected over the whole acquisition period, even when they do not fall at end-systole or end-diastole. The method was evaluated on six 4D cardiac MRI datasets and proved very robust with respect to the only interaction required.

  4. Efficient processing of two-dimensional arrays with C or C++

    USGS Publications Warehouse

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study's factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
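
    One of the array-processing techniques such a study compares is manual row-major indexing into a flat buffer, the layout C uses for two-dimensional arrays: the element at row i, column j lives at flat index i * ncols + j. The arithmetic can be sketched briefly (in Python for compactness; the cache effects the study measures only appear in compiled code):

```python
# A flat list indexed as a row-major 2-D array, mirroring C's layout:
# a[i][j] corresponds to flat[i * ncols + j].
nrows, ncols = 3, 4
flat = list(range(nrows * ncols))

def at(buf, ncols, i, j):
    """Row-major 2-D access into a flat 1-D buffer."""
    return buf[i * ncols + j]

# Row-major traversal touches elements sequentially (cache-friendly in C);
# column-major traversal strides through memory by ncols.
row_major = [at(flat, ncols, i, j) for i in range(nrows) for j in range(ncols)]
col_major = [at(flat, ncols, i, j) for j in range(ncols) for i in range(nrows)]
print(row_major[:5])  # -> [0, 1, 2, 3, 4]
print(col_major[:5])  # -> [0, 4, 8, 1, 5]
```

    Matching the loop nest to the storage order is one of the programmer choices that can account for large runtime differences in compiled array code.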

  5. CImbinator: a web-based tool for drug synergy analysis in small- and large-scale datasets.

    PubMed

    Flobak, Åsmund; Vazquez, Miguel; Lægreid, Astrid; Valencia, Alfonso

    2017-08-01

    Drug synergies are sought to identify drug combinations that are particularly beneficial, and user-friendly software solutions that can assist analysis of large-scale datasets are required. CImbinator is a web service that can aid in batch-wise and in-depth analyses of data from small-scale and large-scale drug combination screens. CImbinator quantifies drug combination effects using both the commonly employed median-effect equation and advanced experimental mathematical models describing dose-response relationships. CImbinator is written in Ruby and R; it uses the R package drc for advanced drug-response modeling. CImbinator is available at http://cimbinator.bioinfo.cnio.es, and the source code is open and available at https://github.com/Rbbt-Workflows/combination_index. A Docker image is also available at https://hub.docker.com/r/mikisvaz/rbbt-ci_mbinator/. Contact: asmund.flobak@ntnu.no or miguel.vazquez@cnio.es. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  6. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  7. A study of low-cost, robust assistive listening system (ALS) based on digital wireless technology.

    PubMed

    Israsena, P; Dubsok, P; Pan-Ngum, S

    2008-11-01

    We have developed a simple, low-cost digital wireless broadcasting system prototype, intended for a classroom of hearing impaired students. The system is designed to be a low-cost alternative to an existing FM system. The system implemented is for short-range communication, with a one-transmitter, multiple-receiver configuration, which is typical for these classrooms. The data is source-coded for voice-band quality, FSK modulated, and broadcasted via a 915 MHz radio frequency. A DES encryption can optionally be added for better information security. Test results show that the system operating range is approximately ten metres, and the sound quality is close to telephone quality as intended. We also discuss performance issues such as sound, power and size, as well as transmission protocols. The test results are the proof of concept that the prototype is a viable alternative to an existing FM system. Improvements can be made to the system's sound quality via techniques such as channel coding, which is also discussed.
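
    The binary FSK step of such a transmitter maps each bit to one of two tone frequencies. A minimal sketch of the modulator follows (the frequencies, sample rate, and baud rate here are illustrative choices, not the prototype's actual parameters):

```python
import math

SAMPLE_RATE = 8000   # samples per second (illustrative)
BAUD = 100           # bits per second (illustrative)
F0, F1 = 1200, 2200  # tone frequencies in Hz for bit 0 / bit 1

def fsk_modulate(bits):
    """Emit a sine tone at F0 or F1 for each bit, one bit per symbol period."""
    samples_per_bit = SAMPLE_RATE // BAUD
    out = []
    for bit in bits:
        freq = F1 if bit else F0
        for n in range(samples_per_bit):
            out.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return out

wave = fsk_modulate([1, 0, 1])
print(len(wave))  # -> 240 (3 bits x 80 samples per bit)
```

    A matched receiver recovers bits by deciding which of the two tones dominates each symbol period, which is what makes FSK robust over a noisy short-range radio link.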

  8. Development of an open-source web-based intervention for Brazilian smokers - Viva sem Tabaco.

    PubMed

    Gomide, H P; Bernardino, H S; Richter, K; Martins, L F; Ronzani, T M

    2016-08-02

    Web-based interventions for smoking cessation available in Portuguese do not adhere to evidence-based treatment guidelines. Besides, all existing web-based interventions are built on proprietary platforms that developing countries often cannot afford. We aimed to describe the development of "Viva sem Tabaco", an open-source web-based intervention. The development of the intervention included the selection of content from evidence-based guidelines for smoking cessation, the design of the first layout, conduction of 2 focus groups to identify potential features, refinement of the layout based on focus groups and correction of content based on feedback provided by specialists on smoking cessation. At the end, we released the source-code and intervention on the Internet and translated it into Spanish and English. The intervention developed fills gaps in the information available in Portuguese and the lack of open-source interventions for smoking cessation. The open-source licensing format and its translation system may help researchers from different countries deploying evidence-based interventions for smoking cessation.

  9. elevatr: Access Elevation Data from Various APIs

    EPA Pesticide Factsheets

    Several web services are available that provide access to elevation data. This package provides access to several of those services and returns elevation data either as a SpatialPointsDataFrame from point elevation services or as a raster object from raster elevation services. Currently, the package supports access to the Mapzen Elevation Service, the Mapzen Terrain Service, and the USGS Elevation Point Query Service. The R language for statistical computing is increasingly used for spatial data analysis. This R package, elevatr, responds to this trend and provides access to elevation data from various sources directly in R. The impact of elevatr is that it will 1) facilitate spatial analysis in R by providing access to foundational datasets for many types of analyses (e.g. hydrology, limnology), 2) open up a new set of users and uses for APIs widely used outside of R, and 3) provide an excellent example of federal open-source development as promoted by the Federal Source Code Policy (https://sourcecode.cio.gov/).

  10. An Open-Source Sandbox for Increasing the Accessibility of Functional Programming to the Bioinformatics and Scientific Communities

    PubMed Central

    Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R.; Ellis, Heidi JC; Hinman, M. Lee; Vyas, Jay; Gryk, Michael R.

    2012-01-01

    Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adopting functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, focusing on machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org). PMID:25328913
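
    The style such a repository teaches, pure functions composed with higher-order functions instead of mutation of shared state, looks like this in Python (a generic illustration, not code from the CONNJUR-Sandbox; the GC-content example is invented):

```python
from functools import reduce

# A small data-processing pipeline in functional style: each stage is a
# pure function, and stages are composed with filter/map/reduce.
reads = ["ACGT", "ACG", "GGGTAC", "AT"]

def gc_content(seq):
    """Fraction of G and C bases in a sequence (pure, no side effects)."""
    return sum(1 for base in seq if base in "GC") / len(seq)

long_reads = filter(lambda seq: len(seq) >= 4, reads)
fractions = list(map(gc_content, long_reads))
mean_gc = reduce(lambda a, b: a + b, fractions) / len(fractions)
print(round(mean_gc, 3))  # -> 0.583
```

    Because no stage mutates its input, each stage can be tested, reused, and reasoned about in isolation, which is the pedagogical point of such examples.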

  11. An Open-Source Sandbox for Increasing the Accessibility of Functional Programming to the Bioinformatics and Scientific Communities.

    PubMed

    Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R; Ellis, Heidi Jc; Hinman, M Lee; Vyas, Jay; Gryk, Michael R

    2012-01-01

    Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, which generally focuses on the fields of machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org).

  12. Java Source Code Analysis for API Migration to Embedded Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Victor; McCoy, James A.; Guerrero, Jonathan

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.
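    The core analysis problem, resolving element references against a possibly incomplete set of declarations, can be modeled with a minimal sketch; the function and its inputs are hypothetical simplifications, not the article's formalism:

```python
def unresolved_references(references, declarations):
    """References that do not resolve to any declaration in a (possibly
    incomplete) code base. Each leftover reference marks a type or package
    that must be supplied (or stubbed) before migration analysis is sound.
    A toy model of the problem, not the article's formal approach."""
    return sorted(set(references) - set(declarations))
```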

  13. OPAL: An Open-Source MPI-IO Library over Cray XT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Weikuan; Vetter, Jeffrey S; Canon, Richard Shane

    Parallel IO over Cray XT is supported by a vendor-supplied MPI-IO package. This package contains a proprietary ADIO implementation built on top of the sysio library. While it is reasonable to maintain a stable code base for application scientists' convenience, it is also very important for system developers and researchers to analyze and assess the effectiveness of parallel IO software and, accordingly, to tune and optimize the MPI-IO implementation. A proprietary parallel IO code base relinquishes such flexibility. On the other hand, a generic UFS-based MPI-IO implementation is typically used on many Linux-based platforms. We have developed an open-source MPI-IO package over Lustre, referred to as OPAL (OPportunistic and Adaptive MPI-IO Library over Lustre). OPAL provides a single source-code base for MPI-IO over Lustre on Cray XT and Linux platforms. Compared to the Cray implementation, OPAL provides a number of useful features, including arbitrary specification of striping patterns and Lustre-stripe-aligned file domain partitioning. This paper presents performance comparisons between OPAL and Cray's proprietary implementation. Our evaluation demonstrates that OPAL achieves performance comparable to the Cray implementation. We also exemplify the benefits of an open-source package in revealing the underpinnings of parallel IO performance.
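    The stripe-aligned file domain partitioning that OPAL offers can be sketched as follows; the contiguous-domain model and function name are illustrative assumptions, not OPAL's source:

```python
def stripe_aligned_domains(file_size, n_procs, stripe_size):
    """Partition the byte range [0, file_size) into n_procs contiguous
    domains whose interior boundaries sit on stripe boundaries, so that
    each stripe (and thus each storage target) is written by one process."""
    naive = file_size / n_procs
    bounds = [0]
    for rank in range(1, n_procs):
        # Snap each interior boundary to the nearest stripe boundary
        b = round(naive * rank / stripe_size) * stripe_size
        bounds.append(min(max(b, bounds[-1]), file_size))
    bounds.append(file_size)
    return [(bounds[i], bounds[i + 1]) for i in range(n_procs)]

doms = stripe_aligned_domains(file_size=10_000_000, n_procs=4, stripe_size=1 << 20)
```

    Aligning domain boundaries to the stripe size avoids two processes contending for locks on the same stripe, which is the motivation for this feature.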

  14. PPDMs-a resource for mapping small molecule bioactivities from ChEMBL to Pfam-A protein domains.

    PubMed

    Kruger, Felix A; Gaulton, Anna; Nowotka, Michal; Overington, John P

    2015-03-01

    PPDMs is a resource that maps small molecule bioactivities to protein domains from the Pfam-A collection of protein families. Small molecule bioactivities mapped to protein domains add important precision to approaches that use protein sequence searches and alignments to assist applications in computational drug discovery and systems and chemical biology. We have previously proposed a heuristic for mapping a subset of the bioactivities stored in ChEMBL to the Pfam-A domain most likely to mediate small molecule binding. We have since refined this mapping using a manual procedure. Here, we present a resource that provides up-to-date mappings and the possibility to review assigned mappings as well as to participate in their assignment and curation. We also describe how mappings provided through the PPDMs resource are made accessible through the main schema of the ChEMBL database. The PPDMs resource and curation interface is available at https://www.ebi.ac.uk/chembl/research/ppdms/pfam_maps. The source-code for PPDMs is available under the Apache license at https://github.com/chembl/pfam_maps. Source code is available at https://github.com/chembl/pfam_map_loader to demonstrate the integration process with the main schema of ChEMBL. © The Author 2014. Published by Oxford University Press.

  15. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data.

    PubMed

    Muir, Dylan R; Kampa, Björn M

    2014-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.
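    One of the analysis tools mentioned, the peri-stimulus time histogram, can be sketched generically (FocusStack itself is a MATLAB toolbox; this Python version is a minimal illustration of the computation, not its code):

```python
def psth(event_times, stim_times, window=(-0.5, 1.0), bin_width=0.1):
    """Peri-stimulus time histogram: count events in time bins aligned
    to each stimulus onset, averaged over stimulus repetitions."""
    lo, hi = window
    n_bins = int(round((hi - lo) / bin_width))
    counts = [0] * n_bins
    for t0 in stim_times:
        for t in event_times:
            rel = t - t0  # event time relative to this stimulus onset
            if lo <= rel < hi:
                counts[int((rel - lo) / bin_width)] += 1
    # Normalize to a mean rate: events per second per stimulus repetition
    return [c / (len(stim_times) * bin_width) for c in counts]
```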

  16. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data

    PubMed Central

    Muir, Dylan R.; Kampa, Björn M.

    2015-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories. PMID:25653614

  17. Complexity reduction in the H.264/AVC using highly adaptive fast mode decision based on macroblock motion activity

    NASA Astrophysics Data System (ADS)

    Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir

    2015-11-01

    The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency derives mainly from the newly allowed prediction schemes, including variable block modes. However, these schemes require high computational complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast intermode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and results show that the proposed source-coding algorithm effectively reduces computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.
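    The idea of predicting motion activity from neighboring macroblocks and pruning the candidate mode list with thresholds can be sketched minimally; the median predictor, threshold values, and mode lists below are illustrative assumptions, not the paper's algorithm:

```python
def predict_activity(left_mv, top_mv, topright_mv):
    """Predict current-macroblock motion activity as the median of the
    L1 motion-vector magnitudes of three neighbours (illustrative)."""
    mags = sorted(abs(x) + abs(y) for x, y in (left_mv, top_mv, topright_mv))
    return mags[1]  # median of three

def candidate_modes(activity, t_low=1.0, t_high=4.0):
    # Illustrative adaptive pruning: near-stationary blocks check only
    # SKIP/16x16; highly active blocks also try the smaller partitions.
    if activity < t_low:
        return ["SKIP", "16x16"]
    if activity < t_high:
        return ["SKIP", "16x16", "16x8", "8x16"]
    return ["SKIP", "16x16", "16x8", "8x16", "8x8"]
```

    Skipping rate-distortion evaluation of the pruned modes is where the complexity saving comes from.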

  18. Naval Observatory Vector Astrometry Software (NOVAS) Version 3.1, Introducing a Python Edition

    NASA Astrophysics Data System (ADS)

    Barron, Eric G.; Kaplan, G. H.; Bangert, J.; Bartlett, J. L.; Puatua, W.; Harris, W.; Barrett, P.

    2011-01-01

    The Naval Observatory Vector Astrometry Software (NOVAS) is a source-code library that provides common astrometric quantities and transformations. NOVAS calculations are accurate at the sub-milliarcsecond level. The library can supply, in one or two subroutine or function calls, the instantaneous celestial position of any star or planet in a variety of coordinate systems. NOVAS also provides access to all of the building blocks that go into such computations. NOVAS Version 3.1 introduces a Python edition alongside the Fortran and C editions. The Python edition uses the computational code from the C edition and, currently, mimics the function calls of the C edition. Future versions will expand the functionality of the Python edition to harness the object-oriented nature of the Python language, and will implement the ability to handle large quantities of objects or observers using the array functionality in NumPy (a third-party scientific package for Python). NOVAS 3.1 also adds a module to transform GCRS vectors to the ITRS; the ITRS to GCRS transformation was already provided in NOVAS 3.0. The module that corrects an ITRS vector for polar motion has been modified to undo that correction upon demand. In the C edition, the ephemeris-access functions have been revised for use on 64-bit systems and for improved performance in general. NOVAS, including documentation, is available from the USNO website (http://www.usno.navy.mil/USNO/astronomical-applications/software-products/novas).

  19. Thermodynamic properties of sea air

    NASA Astrophysics Data System (ADS)

    Feistel, R.; Wright, D. G.; Kretzschmar, H.-J.; Hagen, E.; Herrmann, S.; Span, R.

    2010-02-01

    Very accurate thermodynamic potential functions are available for fluid water, ice, seawater and humid air covering wide ranges of temperature and pressure conditions. They permit the consistent computation of all equilibrium properties as, for example, required for coupled atmosphere-ocean models or the analysis of observational or experimental data. With the exception of humid air, these potential functions are already formulated as international standards released by the International Association for the Properties of Water and Steam (IAPWS), and have been adopted in 2009 for oceanography by IOC/UNESCO. In this paper, we derive a collection of formulas for important quantities expressed in terms of the thermodynamic potentials, valid for typical phase transitions and composite systems of humid air and water/ice/seawater. Particular attention is given to equilibria between seawater and humid air, referred to as "sea air" here. In a related initiative, these formulas will soon be implemented in a source-code library for easy practical use. The library is primarily aimed at oceanographic applications but will be relevant to air-sea interaction and meteorology as well. The formulas provided are valid for any consistent set of suitable thermodynamic potential functions. Here we adopt potential functions from previous publications in which they are constructed from theoretical laws and empirical data; they are briefly summarized in the appendix. The formulas make use of the full accuracy of these thermodynamic potentials, without additional approximations or empirical coefficients. They are expressed in the temperature scale ITS-90 and the 2008 Reference-Composition Salinity Scale.
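    The central mechanism, obtaining equilibrium properties as partial derivatives of a thermodynamic potential, can be illustrated with a toy ideal-gas Gibbs function; the potential and constants below are textbook illustrations, not the IAPWS formulation used by the paper:

```python
import math

R = 461.5  # specific gas constant of water vapour, J/(kg K)

def g_ideal(T, p, T0=273.15, p0=101325.0, cp=1884.0):
    """Toy ideal-gas Gibbs potential g(T, p); the form and constants are
    standard textbook choices, not the IAPWS humid-air formulation."""
    return cp * (T - T0) - cp * T * math.log(T / T0) + R * T * math.log(p / p0)

def entropy(T, p, h=1e-3):
    # s = -(dg/dT)_p, evaluated by central finite difference
    return -(g_ideal(T + h, p) - g_ideal(T - h, p)) / (2 * h)

def specific_volume(T, p, h=1e-2):
    # v = (dg/dp)_T; for an ideal gas this recovers v = R*T/p
    return (g_ideal(T, p + h) - g_ideal(T, p - h)) / (2 * h)
```

    With an accurate empirical potential in place of `g_ideal`, the same derivative relations yield all equilibrium properties consistently, which is what the source-code library mentioned above implements.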

  20. Thermodynamic properties of sea air

    NASA Astrophysics Data System (ADS)

    Feistel, R.; Kretzschmar, H.-J.; Span, R.; Hagen, E.; Wright, D. G.; Herrmann, S.

    2009-10-01

    Very accurate thermodynamic potential functions are available for fluid water, ice, seawater and humid air covering wide ranges of temperature and pressure conditions. They permit the consistent computation of all equilibrium properties as, for example, required for coupled atmosphere-ocean models or the analysis of observational or experimental data. With the exception of humid air, these potential functions are already formulated as international standards released by the International Association for the Properties of Water and Steam (IAPWS), and have been adopted in 2009 for oceanography by IOC/UNESCO. In this paper, we derive a collection of formulas for important quantities expressed in terms of the thermodynamic potentials, valid for typical phase transitions and composite systems of humid air and water/ice/seawater. Particular attention is given to equilibria between seawater and humid air, referred to as "sea air" here. In a related initiative, these formulas will soon be implemented in a source-code library for easy practical use. The library is primarily aimed at oceanographic applications but will be relevant to air-sea interaction and meteorology as well. The formulas provided are valid for any consistent set of suitable thermodynamic potential functions. Here we adopt potential functions from previous publications in which they are constructed from theoretical laws and empirical data; they are briefly summarized in the appendix. The formulas make use of the full accuracy of these thermodynamic potentials, without additional approximations or empirical coefficients. They are expressed in the temperature scale ITS-90 and the 2008 Reference-Composition Salinity Scale.

  1. PD5: a general purpose library for primer design software.

    PubMed

    Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda

    2013-01-01

    Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third party applications and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantee access to source-code and allow redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
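    A typical building block such a library must provide is a melting-temperature estimate. The sketch below uses the well-known Wallace rule and a 3'-end GC check; it is a generic illustration of primer-design primitives, not PD5's C++ implementation:

```python
def wallace_tm(primer):
    """Melting-temperature estimate via the Wallace rule,
    Tm = 2*(A+T) + 4*(G+C) in degrees C -- a standard first-order
    formula for short primers, not PD5's own method."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_clamp(primer):
    # Common design check: a G or C at the 3' end stabilizes extension
    return primer.upper()[-1] in "GC"
```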

  2. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
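    The parameter-fitting task SBSI addresses, minimizing the mismatch between model output and experimental data over candidate parameter sets, can be sketched minimally. SBSINumerics uses parallelized optimizers; the grid search and function names here are illustrative stand-ins:

```python
import math

def sum_squared_error(params, model, data):
    """Objective for parameter fitting: sum of squared residuals between
    model predictions and observations (illustrative, not SBSINumerics)."""
    return sum((model(t, params) - y) ** 2 for t, y in data)

def fit_grid(model, data, grid):
    # Exhaustive search over candidate parameter sets; SBSI's optimizers
    # are parallelized and far more capable, but minimize the same kind
    # of objective.
    return min(grid, key=lambda p: sum_squared_error(p, model, data))

# Toy example: recover the rate k of an exponential decay y = exp(-k*t)
data = [(t, math.exp(-0.7 * t)) for t in (0.0, 0.5, 1.0, 2.0)]
grid = [{"k": k / 10} for k in range(1, 21)]
best = fit_grid(lambda t, p: math.exp(-p["k"] * t), data, grid)
```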

  3. Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer.

    PubMed

    Bromberg, Raquel; Grishin, Nick V; Otwinowski, Zbyszek

    2016-06-01

    Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz.
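    The distance estimator's raw ingredient, counts of exact substring matches as a function of match length, can be sketched as follows; this is an illustrative simplification, not SlopeTree's code:

```python
def shared_kmer_counts(a, b, k_max=8):
    """Number of exact substring matches of each length k between two
    sequences. SlopeTree estimates evolutionary distance from how fast
    such counts decay as k grows; this sketch only computes the counts."""
    counts = {}
    for k in range(1, k_max + 1):
        kmers_a = {a[i:i + k] for i in range(len(a) - k + 1)}
        counts[k] = sum(1 for i in range(len(b) - k + 1) if b[i:i + k] in kmers_a)
    return counts
```

    More-diverged sequences lose long shared substrings faster, so the slope of the decay carries the distance signal.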

  4. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    PubMed

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
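    The first stage, local contrast thresholding, can be sketched in a few lines; the neighborhood-range operator and threshold below are illustrative stand-ins for PHANTAST's actual operator and halo correction:

```python
def local_contrast_mask(img, radius=1, threshold=0.2):
    """Segment by local contrast: mark a pixel as cellular when the spread
    of intensities in its neighbourhood exceeds a threshold. A minimal
    sketch of the idea; PHANTAST's operator and post hoc halo correction
    are more elaborate."""
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [img[j][i]
                     for j in range(max(0, y - radius), min(h, y + radius + 1))
                     for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mask[y][x] = (max(patch) - min(patch)) > threshold
    return mask
```

    Flat background has low local contrast regardless of its absolute brightness, which is why this kind of operator copes with the uneven illumination typical of PCM images.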

  5. Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer

    PubMed Central

    Bromberg, Raquel; Grishin, Nick V.; Otwinowski, Zbyszek

    2016-01-01

    Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz. PMID:27336403

  6. Automated Method for the Rapid and Precise Estimation of Adherent Cell Culture Characteristics from Phase Contrast Microscopy Images

    PubMed Central

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-01-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521

  7. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415

  8. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a networks of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses XIIR5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. 
Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
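The skew compensation described in the abstract above amounts to finding a constant offset for a child's clock that makes every parent-child message causally consistent (no message received before it was sent). The following is a minimal sketch of that idea, not the actual AIMS code; the function name and message format are hypothetical:

```python
def compensate_skew(messages):
    """Estimate a constant clock offset for a child process so that no
    message appears to arrive before it was sent.

    `messages` is a list of (direction, t_send, t_recv) tuples, where
    direction is 'to_child' or 'from_child', t_send is on the sender's
    clock and t_recv on the receiver's clock.  Returns the offset to ADD
    to every child timestamp.
    """
    # parent -> child: t_send <= t_recv + offset  =>  offset >= t_send - t_recv
    # child -> parent: t_send + offset <= t_recv  =>  offset <= t_recv - t_send
    lo = float('-inf')  # lower bound on feasible offsets
    hi = float('inf')   # upper bound on feasible offsets
    for direction, t_send, t_recv in messages:
        if direction == 'to_child':
            lo = max(lo, t_send - t_recv)
        else:
            hi = min(hi, t_recv - t_send)
    if lo > hi:
        raise ValueError("no constant offset explains these messages "
                         "(drift, not just constant skew)")
    # Any value in [lo, hi] is consistent; pick the midpoint when bounded.
    if lo == float('-inf'):
        return min(hi, 0.0)
    if hi == float('inf'):
        return max(lo, 0.0)
    return (lo + hi) / 2.0
```

For example, if the child's clock runs about 5 s behind the parent's, a parent send at t = 10.0 received at child time 5.2 and a child send at 6.0 received at parent time 11.3 bound the offset to [4.8, 5.3]; applying any offset in that interval removes all backwards-in-time messages.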

  9. Tools for Implementing the Recent IAU Resolutions: USNO Circular 179 and the NOVAS Software Package

    NASA Astrophysics Data System (ADS)

    Kaplan, G. H.; Bangert, J. A.

    2006-08-01

    The resolutions on positional astronomy adopted at the 1997 and 2000 IAU General Assemblies are far-reaching in scope, affecting both the details of various computations and the basic concepts upon which they are built. For many scientists and engineers, applying these recommendations to practical problems is thus doubly challenging. Because the U.S. Naval Observatory (USNO) serves a broad base of users, we have provided two different tools to aid in implementing the resolutions, both of which are intended for the person who is knowledgeable but not necessarily expert in positional astronomy. These tools complement the new material that has been added to The Astronomical Almanac (see paper by Hohenkerk). USNO Circular 179 is a 118-page book that introduces the resolutions to non-specialists. It includes extensive narratives describing the basic concepts as well as compilations of the equations necessary to apply the recommendations. The resolutions have been logically grouped into six main chapters. The Circular is available as a hard-cover book or as a PDF file that can be downloaded from either the USNO/AA web site (http://aa.usno.navy.mil/) or arXiv.org. NOVAS (Naval Observatory Vector Astrometry Subroutines) is a source-code library available in both Fortran and C. It is a long-established package with a wide user base that has recently been extensively revised (in version 3.0) to implement the recent IAU resolutions. However, use of NOVAS does not require detailed knowledge of the resolutions, since commonly requested high-level data (for example, topocentric positions of stars or planets) are provided in a single call. NOVAS can be downloaded from the USNO/AA web site. Both Circular 179 and NOVAS version 3.0 anticipate IAU adoption of the recommendations of the 2003-2006 working groups on precession and nomenclature.

  10. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel Benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs.
Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.

  11. An MPI + X implementation of contact global search using Kokkos

    DOE PAGES

    Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...

    2015-10-05

    This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank.
Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M-face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
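The bounding-box step of the global search described above can be illustrated with a small sketch. This is not the ACME/Kokkos implementation: it is a hypothetical brute-force broad phase in Python, in which the search tree mentioned in the abstract would replace the quadratic pair loop in practice.

```python
from itertools import combinations

def face_aabb(face, tol=0.0):
    """Axis-aligned bounding box of a face (a list of 3-D vertex tuples),
    inflated by a search tolerance to catch near-contact."""
    xs, ys, zs = zip(*face)
    lo = (min(xs) - tol, min(ys) - tol, min(zs) - tol)
    hi = (max(xs) + tol, max(ys) + tol, max(zs) + tol)
    return lo, hi

def aabbs_overlap(a, b):
    """True if two AABBs intersect along all three axes."""
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[k] <= bhi[k] and blo[k] <= ahi[k] for k in range(3))

def candidate_pairs(faces, tol=0.0):
    """Brute-force broad phase: every pair of faces whose inflated boxes
    overlap.  These candidates would then be passed to the (much more
    expensive) imprinting / nearest-neighbor resolution step."""
    boxes = [face_aabb(f, tol) for f in faces]
    return [(i, j) for i, j in combinations(range(len(faces)), 2)
            if aabbs_overlap(boxes[i], boxes[j])]
```

Two nearly touching triangles produce a candidate pair, while a distant one does not; a tree over the boxes reduces the pair loop from O(n^2) toward O(n log n).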

  12. Planar millimeter wave radar frontend for automotive applications

    NASA Astrophysics Data System (ADS)

    Grubert, J.; Heyen, J.; Metz, C.; Stange, L. C.; Jacob, A. F.

    2003-05-01

    A fully integrated planar sensor for 77 GHz automotive applications is presented. The frontend consists of a transceiver multichip module and an electronically steerable microstrip patch array. The antenna feed network is based on a modified Rotman lens and connected to the array in a multilayer approach offering higher integration. Furthermore, the frontend comprises a phase-locked loop to allow proper frequency-modulated continuous wave (FMCW) radar operation. The latest experimental results verify the functionality of this advanced frontend design featuring automatic cruise control, precrash sensing and cut-in detection. These promising radar measurements motivate a detailed theoretical investigation of system performance. Employing commercially available MMICs, various circuit topologies are compared based on signal-to-noise considerations. Different scenarios for both sequential and parallel lobing point to more advanced sensor designs and better performance. These improvements strongly depend on the availability of suitable MMICs and reliable packaging technologies. Our present approach already accounts for possible future MMIC developments, which the flexible frontend design can easily accommodate. 
[German abstract] An integrated planar sensor for 77 GHz radar applications is presented. The frontend consists of a transmit/receive multi-chip module and an electronically steerable antenna. The antenna feed network is based on a modified Rotman lens; for a compact construction, antenna and feed network are integrated in multiple layers. The frontend further comprises a phase-locked loop for precise control of the frequency-modulated continuous-wave radar. The latest measurement results confirm the functionality of this novel frontend design, which enables automatic cruise control, collision warning and short-range surveillance. The quality of the measurement results has motivated further theoretical investigation of the potential system performance. Considering commercially available MMICs, various circuit topologies are compared on the basis of the signal-to-noise ratio. A significant performance gain is found for both sequential and parallel steering of the antenna beams. These improvements depend crucially on the availability of suitable MMICs and of reliable packaging and interconnect technology. Owing to its flexibility, the present frontend concept can easily be adapted to such future developments.
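The FMCW operation mentioned in the abstract rests on a simple relation: the echo of a linear frequency sweep returns after a delay of 2R/c, during which the transmit frequency has moved on, producing a beat frequency proportional to range. A numerical sketch of that relation follows; the sweep parameters are invented for illustration and are not taken from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def beat_frequency(target_range_m, sweep_bandwidth_hz, chirp_time_s):
    """Beat frequency of a stationary target under a linear FMCW sweep:
    the echo is delayed by tau = 2R/c, and the transmit frequency has
    advanced by f_b = (B / T) * tau in the meantime (Doppler neglected)."""
    tau = 2.0 * target_range_m / C
    return sweep_bandwidth_hz / chirp_time_s * tau

def range_from_beat(f_beat_hz, sweep_bandwidth_hz, chirp_time_s):
    """Invert the relation above to recover range from a measured beat."""
    return f_beat_hz * chirp_time_s * C / (2.0 * sweep_bandwidth_hz)
```

With a hypothetical 200 MHz sweep in 1 ms, a target at 100 m yields a beat near 133 kHz, which is why FMCW radars can resolve range with modest-bandwidth baseband electronics despite the 77 GHz carrier.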

  13. AAS 227: Day 1

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-01-01

    Editor's Note: This week we're at the 227th AAS Meeting in Kissimmee, FL. Along with several fellow authors from astrobites.com, I will be writing updates on selected events at the meeting and posting at the end of each day. Follow along here or at astrobites.com, or catch our live-tweeted updates from the @astrobites Twitter account. The usual posting schedule for AAS Nova will resume next week.
Things kicked off last night at our undergraduate reception booth. Thanks to all of you who stopped by; we were delighted to have so many people tell us that they already know about and use astrobites, and we were excited to introduce a new cohort of students at AAS to astrobites for the first time. Tuesday morning was the official start of the meeting. Here are just a few of the talks and workshops astrobiters attended today.
Opening Address (by Becky Smethurst)
The President of the AAS, aka our fearless leader Meg Urry, kicked off the meeting this morning at the purely coffee-powered hour of 8am. She spoke about the importance of young astronomers at the meeting (here's looking at you, reader!) and also the importance of the new Working Group for Accessibility and Disabilities (aka WGAD, pronounced like "wicked") at the AAS. The Society has made an extra effort this year to make the conference accessible to all, a message which was very well received by everyone in attendance.
Kavli Lecture: New Horizons, Alan Stern (by Becky Smethurst)
We were definitely spoilt with the first plenary lecture at this year's conference: Alan Stern gave us a review of the New Horizons Pluto flyby (astrobites covered the mission back in July with this post). We were treated to beautiful images, wonderful results, and a foray into geology.
Before (Hubble) and after #NewHorizons. #thatisall #science #astro alanstern #aas227 pic.twitter.com/kkMt6RsSIR Science News (@topsciencething) January 5, 2016
Some awesome facts from the lecture that blew my mind: New Horizons is now 2 AU (!) beyond Pluto. The mission was featured on the front pages of 450 newspapers worldwide, on every single continent (including Antarctica!). New Horizons reached the Moon in 9 HOURS after launch (compared to the ~3 days it took the Apollo missions). The mission controllers were aiming for a 100-km window of space all the way from Earth. There was a window of ~400 seconds within which the probe had to arrive; the probe arrived 90 seconds early! Putting tardy astronomers everywhere to shame. Charon was the only satellite of Pluto known at the time of the mission proposal. The canyon found on Charon is not only bigger than the Grand Canyon but bigger than Mariner Valley on Mars, which is already 4000 km (2500 mi) long and reaches depths of up to 7 km (4 mi)!
Charon's surface. Tectonic feature runs about 1500 km, around 10 km deep. Eat it, Mars. #aas227 pic.twitter.com/blewwJaXEn Danny Barringer (@HeavyFe_H) January 5, 2016
The mountains ringing Sputnik Planum (aka the heart of Pluto) are over 4 km high and are snow-capped with methane ice.
Pluto's mountain ranges. Means surface nitrogen layer is thin, probably water ice according to @AlanStern. #aas227 pic.twitter.com/0yyHZvpBOE Danny Barringer (@HeavyFe_H) January 5, 2016
Pluto's atmosphere has a dozen distinct haze layers, but how they are created is a mystery.
#aas227 hazes on Pluto wow pic.twitter.com/VPx99ZhPj1 Lisa Storrie-Lombardi (@lisajsl) January 5, 2016
Alan also spoke about the future of New Horizons: there is a new mission proposal for a flyby of a Kuiper Belt object, 2014 MU69, in Jan 2019, which should give us a better understanding of this icy frontier at the edge of the Solar System. As a parting gift, Alan played the most gorgeously detailed flyover video of Pluto's surface that had all in the room melting into their flip flops.
It's safe to say that the whole room is now Pluto-curious and wondering whether a change of discipline is in order!
Press Conference: Black Holes and Exoplanets (by Susanna Kohler)
This morning marked the first press conference of the meeting, covering some hot topics in black holes and exoplanets.
Hubble (background) and Chandra (purple) image of SDSS J1126+2944. The arrow marks the second black hole. (From http://casa.colorado.edu/~comerford/press)
The first speaker was Julie Comerford (University of Colorado Boulder), who told us about SDSS J1126+2944, a galaxy that was shown by Chandra X-ray detections to contain not just one, but two supermassive black holes. This is a sign of a recent merger between two galaxies, which can result in one new, larger galaxy with two nuclei for a while. The second black hole is surrounded by only a small sphere of stars. This may be because the rest have been stripped away in the process of the merger, but it's also possible that the second black hole is an elusive intermediate-mass black hole of only 100-1,000,000 solar masses! Here's the press release.
The second speaker was Eric Schlegel (University of Texas, San Antonio), who spoke about the galaxy NGC 5195. Eric discussed an interesting problem: we know that star formation ends in galaxies after a time, but the gas must be cleared out of the galaxy for the star formation to halt. What process does this? Schlegel's collaboration found evidence in NGC 5195 for a "burping" supermassive black hole: the shock from the black hole's outflow sweeps up the hydrogen gas and blows it out of the galactic center. Here's the press release.
NuSTAR image of Andromeda, inset on a UV image by NASA's Galaxy Evolution Explorer. Click for a better look! [NASA/JPL-Caltech/GSFC]
Next up was Daniel Wik (NASA/Goddard SFC), who discussed recent high-energy X-ray observations of the Andromeda galaxy with NASA's NuSTAR.
As Wik described it, NuSTAR is like a CSI detective, working to identify what fraction of the compact remnants in X-ray binaries of Andromeda are neutron stars and what fraction are black holes. Since X-ray binaries play a crucial role in heating gas in protogalaxies, shaping galaxy formation, it's important that we learn more about this population and how it evolves over time. Here's the press release.
The final speaker was grad student Samuel Grunblatt (University of Hawaii Institute for Astronomy), who spoke about measuring the mass of exoplanets around active stars. In radial velocity studies of exoplanets, a planet orbiting its star causes the star to wobble. This signal for an Earth-like planet is as tiny as 9 cm/s! Unfortunately, activity of the star can cause radial velocity noise of 1-10 m/s, so to detect Earth-like planets we need to find a way of subtracting off the noise. Grunblatt talked about an intriguing new method for determining planet masses that controls for the signature of their host's activity. Here's his paper.
Annie Jump Cannon Award Lecture: On the Dynamics of Planets, Stars and Black Holes (by Erika Nesvold)
This year, the Annie Jump Cannon Award was given to Smadar Naoz, an assistant professor at UCLA. The Cannon Award is given every year to a young (less than 5 years since PhD) female astronomer for outstanding work in her field. Traditionally, the Cannon Award recipient delivers a lecture on her research, so this year we were lucky to see a dynamic and engaging talk by Smadar Naoz about her research in dynamical theory.
You may have heard the common career advice that you should focus on becoming the expert on one particular facet of astronomy: a particular type of object, an observational technique, a type of instrument, etc. Naoz has managed to follow that advice while still studying a huge range of astronomical topics, from exoplanets to cosmology.
She studies hierarchical triples: systems of three gravitational bodies in which two of the bodies orbit one another very closely, while the third orbits the other two from a much greater distance. For example, a planet in a tight orbit around a star, with a brown dwarf orbiting hundreds of AU away, makes up a hierarchical triple system. So does a system in which two black holes orbit each other closely, with a third black hole orbiting farther away. The physics of these systems is the same, so by studying the equations that govern a hierarchical triple system, Naoz can study a huge variety of astronomical objects.
In particular, Naoz studies a mechanism called the Kozai-Lidov mechanism, named after the two researchers who discovered it independently. If the outer body in a hierarchical triple orbits at a high enough inclination to the inner body (greater than about 40 degrees), the Kozai-Lidov mechanism will excite the inclination and eccentricity of the inner body. In fact, the inclination and eccentricity will oscillate opposite one another: as the inclination increases, the eccentricity will decrease, and vice versa. In the course of her research, Naoz discovered a flaw in Kozai's original derivation of this mechanism and derived a more accurate, general set of equations describing the Kozai-Lidov mechanism. These new equations indicate that the eccentricity of the inner object can become extremely high, and that the inclination can become so high that the object's orbit can flip from prograde to retrograde! In other words, the object can start orbiting in the opposite direction around the central body.
Wondering how Naoz found the error in Kozai? I happen to know she rederives all the equations in every paper she reads. Wow. #aas227 Erika Nesvold (@erikanesvold) January 5, 2016
This work has applications in many different types of systems.
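The inclination-eccentricity exchange described here can be sketched at the classical (test-particle, quadrupole) level, where the quantity sqrt(1 - e^2) cos(i) is conserved; an initially near-circular orbit inclined by i0 then reaches a peak eccentricity of sqrt(1 - (5/3) cos^2(i0)). The prograde-to-retrograde flips Naoz found come from higher-order (octupole) terms that this classical sketch omits. A minimal illustration, not code from the talk:

```python
import math

def kozai_constant(e, inc_deg):
    """Quantity conserved in the test-particle quadrupole limit: as the
    eccentricity e rises, the inclination must fall, and vice versa."""
    return math.sqrt(1.0 - e**2) * math.cos(math.radians(inc_deg))

def max_eccentricity(inc0_deg):
    """Peak eccentricity reached from an initially near-circular orbit
    at inclination inc0 (classical quadrupole result; nonzero only above
    the critical inclination of roughly 39 degrees)."""
    c2 = math.cos(math.radians(inc0_deg)) ** 2
    return math.sqrt(max(0.0, 1.0 - 5.0 * c2 / 3.0))
```

For instance, a test particle starting nearly circular at 65 degrees inclination is driven to an eccentricity near 0.84, which is how the mechanism delivers bodies onto plunging, close-approach orbits.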
For example, over the past decade, observers have discovered a large number of retrograde hot Jupiters: gas giant planets orbiting very close to their star, in the opposite direction from the star's spin. Naoz showed that the new, correct Kozai-Lidov mechanism can explain the orbits of these exoplanets, because it increases the planet's eccentricity until its orbit approaches very close to the star, and it flips the inclination into a retrograde orbit.
Naoz: A puzzle: how to explain retrograde planets? Kozai mechanism can do that! #aas227 Peter Edmonds (@PeterDEdmonds) January 5, 2016
Naoz also showed applications of the Kozai-Lidov mechanism to dark matter halos around black holes, triple black hole systems, and so-called blue stragglers: main-sequence stars in clusters that are brighter and bluer than they should be. Her body of work is an excellent example of how theorists can adapt general physics theories to a wonderful variety of astronomical problems.
holy styrofoam planets batman naoz just explained everything. #aas227 August Muench (@augustmuench) January 5, 2016
Harassment in the Astronomical Sciences Town Hall (by Caroline Morley)
The Town Hall on Harassment in the Astronomical Sciences involved a sobering panel discussion on the current state of workplace climate in astronomy and the steps that the AAS and federal agencies are taking to improve it. Christina Richey kicked it off by presenting preliminary results from the CSWA survey on workplace climate. This survey involved 426 participants and reveals that many people, especially junior members of the field, experience harassment, both verbal and physical. These results will be published this year. Next up, Dara Norman, a Councilor of the AAS and a member of the AAS Ethics Task Force, spoke about the proposed changes to the current AAS Ethics Statement.
These changes will focus on corrective policies to improve the state of the field; the Task Force will solicit community feedback this spring and vote on the changes at the summer AAS meeting. Last, Jim Ulvestad, representing federal agencies including the NSF, NASA, and the DOE, spoke about the current policies for reporting to federal funding agencies. He reminded us that institutions accepting money from the federal government are required to follow laws such as Title VI (covering racial harassment) and Title IX (covering sexual harassment), and that breaches can be reported to the funding agency.
Tools and Tips for Better Software (aka Pain Reduction for Code Authors) (by Caroline Morley)
This afternoon's breakout session included a drinking-from-the-firehose set of short talks that covered everything from source-code management and software testing to building communities that create sustainable code. First, Kenza Arraki discussed using software such as Git for version control, to keep track of code changes. (Version control is my (science) New Year's resolution, so I was happy to learn that there is a CodeAcademy tutorial for Git!) Next up, Adrian Price-Whelan described the merits of software testing and suggested that we actually do test-driven development, where we write tests for the code first, then write code, run the tests, and debug until all the tests pass. Erik Tollerud spoke on why to document code and how you might convince yourself to do so (documenting code is another good science New Year's resolution!). The most important rule is to always document as you code, because you won't ever go back! Bruce Berriman described best practices for code release, including, importantly, licensing it and describing it well (with tutorials and examples). Matthew Turk reminded us of the importance of building community around code development. Robert Nemiroff ended the talks with a discussion of what to do with dead codes. The lowest bar?
Put it in your Dropbox and share it with your collaborators and students! For more info on all of these topics and more, consider attending a Software Carpentry workshop.
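The test-driven workflow Price-Whelan advocated fits in a few lines: write the test first, watch it fail, then implement just enough code to make it pass. A toy sketch with a hypothetical `slope` helper (a test runner such as pytest would collect `test_slope` automatically):

```python
# Step 1: write the test first, for a function that does not exist yet.
# Running it at this point fails with a NameError, which is the point:
# the failing test defines the behavior we are about to implement.
def test_slope():
    # hypothetical helper: least-squares slope of y against x
    assert slope([0, 1, 2], [1, 3, 5]) == 2.0

# Step 2: implement just enough code to make the test pass.
def slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Step 3: rerun the test; once it passes, refactor with confidence.
test_slope()
```

The cycle (failing test, minimal implementation, passing test, refactor) is what keeps the test suite an honest specification of the code rather than an afterthought.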

Top