An automated program for reinforcement requirements for openings in cylindrical pressure vessels
NASA Technical Reports Server (NTRS)
Wilson, J. F.; Taylor, J. T.
1975-01-01
An automated interactive program for calculating the reinforcement requirements for openings in cylindrical pressure vessels subjected to internal pressure is described. The program is written for an electronic desk top calculator. The program calculates the required area of reinforcement for a given opening and compares this value with the area of reinforcement provided by a proposed design. All program steps, operating instructions, and example problems with input and sample output are documented.
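The check this abstract describes can be pictured with a minimal Python sketch: compute the required reinforcement area for the opening and compare it with the area a proposed design provides. The thin-shell thickness formula below is the standard ASME-style circumferential rule; the one-term available-area treatment and all numbers are illustrative assumptions, not the NTRS program's actual steps.

```python
# Hedged sketch of an area-replacement check for an opening in a cylindrical
# shell under internal pressure. Simplified for illustration only.

def required_shell_thickness(P, R, S, E=1.0):
    """Minimum shell thickness for pressure P, inside radius R,
    allowable stress S, and joint efficiency E (thin-shell rule)."""
    return P * R / (S * E - 0.6 * P)

def reinforcement_check(P, R, S, t_actual, d_opening):
    """Compare required vs. provided reinforcement area around an opening."""
    t_r = required_shell_thickness(P, R, S)
    area_required = d_opening * t_r                 # area removed by the opening
    area_available = d_opening * (t_actual - t_r)   # excess wall near the opening
    return area_required, area_available, area_available >= area_required

A_req, A_avail, ok = reinforcement_check(P=300.0, R=24.0, S=17500.0,
                                         t_actual=0.75, d_opening=4.0)
print(f"required {A_req:.3f} in^2, provided {A_avail:.3f} in^2, adequate: {ok}")
```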
Integral flange design program [procedure for computing stresses]
NASA Technical Reports Server (NTRS)
Wilson, J. F.
1974-01-01
An automated interactive flange design program utilizing an electronic desk top calculator is presented. The program calculates the operating and seating stresses for circular flanges of the integral or optional type subjected to internal pressure. The required input information is documented. The program provides an automated procedure for computing stresses in selected flange geometries for comparison to the allowable code values.
Automated peak picking and peak integration in macromolecular NMR spectra using AUTOPSY.
Koradi, R; Billeter, M; Engeli, M; Güntert, P; Wüthrich, K
1998-12-01
A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shift and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with those from corresponding data obtained with manual peak picking. Copyright 1998 Academic Press.
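One ingredient named in this abstract, a local noise level, can be sketched generically in Python with a robust sliding-window estimate; the actual AUTOPSY noise function is more elaborate, so this shows only the general idea, and all thresholds are assumptions.

```python
import numpy as np

# Generic local-noise sketch: a median-absolute-deviation (MAD) estimate in a
# sliding window, used to threshold candidate peaks. Not the AUTOPSY function.

def local_noise(spectrum, window=64):
    """Robust noise estimate around every point of a 1-D spectrum."""
    pad = window // 2
    padded = np.pad(spectrum, pad, mode="reflect")
    noise = np.empty(len(spectrum))
    for i in range(len(spectrum)):
        seg = padded[i:i + window]
        noise[i] = 1.4826 * np.median(np.abs(seg - np.median(seg)))  # MAD -> sigma
    return noise

rng = np.random.default_rng(3)
spec = rng.normal(0.0, 1.0, 2048)
spec[1000:1010] += 50.0                               # a synthetic "peak"
print(np.where(spec > 5.0 * local_noise(spec))[0])    # indices above 5 sigma
```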
NASA Technical Reports Server (NTRS)
Bjorklund, J. R.
1978-01-01
The cloud-rise preprocessor and multilayer diffusion computer programs were used by NASA in predicting concentrations and dosages downwind from normal and abnormal launches of rocket vehicles. These programs incorporated: (1) the latest data for the heat content and chemistry of rocket exhaust clouds; (2) provision for the automated calculation of surface water pH due to deposition of HCl from precipitation scavenging; (3) provision for automated calculation of concentration and dosage parameters at any level within the vertical bounds for which meteorological inputs have been specified; and (4) provision for execution of multiple cases of meteorological data. Procedures used to automatically calculate wind direction shear in a layer were updated.
Master control data handling program uses automatic data input
NASA Technical Reports Server (NTRS)
Alliston, W.; Daniel, J.
1967-01-01
General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.
Interfacing of high temperature Z-meter setup using python
NASA Astrophysics Data System (ADS)
Patel, Ashutosh; Sisodia, Shashank; Pandey, Sudhir K.
2017-05-01
In this work, we interface the high temperature Z-meter setup to automate the whole measurement process. A program is built in the open source programming language Python which converts the manual measurement process into a fully automated process without any added cost. Using this program, simultaneous measurements of the Seebeck coefficient (α), thermal conductivity (κ), and electrical resistivity (ρ) are performed, and from all three the figure-of-merit (ZT) is calculated. The developed program is verified by performing measurements on a p-type Bi0.36Sb1.45Te3 sample, and the data obtained are found to be in good agreement with reported data.
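The combination referred to here is the conventional thermoelectric figure of merit, ZT = α²T/(ρκ). A small Python sketch with illustrative values (not the measured Bi-Sb-Te data):

```python
def figure_of_merit(alpha, rho, kappa, T):
    """Thermoelectric figure of merit ZT = alpha^2 * T / (rho * kappa).

    alpha : Seebeck coefficient (V/K)
    rho   : electrical resistivity (ohm*m)
    kappa : thermal conductivity (W/(m*K))
    T     : absolute temperature (K)
    """
    return alpha**2 * T / (rho * kappa)

# Order-of-magnitude numbers for a Bi-Sb-Te-like alloy near room temperature:
print(figure_of_merit(alpha=200e-6, rho=1e-5, kappa=1.5, T=300.0))  # ~0.8
```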
Application of automated measurement and verification to utility energy efficiency program data
Granderson, Jessica; Touzani, Samir; Fernandes, Samuel; ...
2017-02-17
Trustworthy savings calculations are critical to convincing regulators of both the cost-effectiveness of energy efficiency program investments and their ability to defer supply-side capital investments. Today’s methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of energy efficiency programs. They also require time-consuming data acquisition. A spectrum of savings calculation approaches is used, with some relying more heavily on measured data and others relying more heavily on estimated, modeled, or stipulated data. The increasing availability of “smart” meters and devices that report near-real time data, combined with new analytical approaches to quantify savings, offers the potential to conduct M&V more quickly and at lower cost, with comparable or improved accuracy. Commercial energy management and information systems (EMIS) technologies are beginning to offer these ‘M&V 2.0’ capabilities, and program administrators want to understand how they might assist programs in quickly and accurately measuring energy savings. This paper presents the results of recent testing of the ability to use automation to streamline the M&V process. In this paper, we apply an automated whole-building M&V tool to historic data sets from energy efficiency programs to begin to explore the accuracy, cost, and time trade-offs between more traditional M&V, and these emerging streamlined methods that use high-resolution energy data and automated computational intelligence. For the data sets studied we evaluate the fraction of buildings that are well suited to automated baseline characterization, the uncertainty in gross savings that is due to M&V 2.0 tools’ model error, and indications of labor time savings, and how the automated savings results compare to prior, traditionally determined savings results. The results show that 70% of the buildings were well suited to the automated approach. In a majority of the cases (80%) savings and uncertainties for each individual building were quantified to levels above the criteria in ASHRAE Guideline 14. In addition the findings suggest that M&V 2.0 methods may also offer time-savings relative to traditional approaches. Lastly, we discuss the implications of these findings relative to the potential evolution of M&V, and pilots currently being launched to test how M&V automation can be integrated into ratepayer-funded programs and professional implementation and evaluation practice.
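The core of whole-building M&V can be sketched as: fit a baseline model on pre-retrofit data, project it onto the post-retrofit period, and take the avoided energy use as savings. The plain temperature regression below is a stand-in assumption, not the automated baseline models the paper's tool actually uses.

```python
import numpy as np

# Minimal whole-building M&V sketch with a linear load-vs-temperature baseline.

def fit_baseline(temps, loads):
    """Least-squares fit of load ~ a + b*T over the baseline period."""
    A = np.vstack([np.ones_like(temps), temps]).T
    coef, *_ = np.linalg.lstsq(A, loads, rcond=None)
    return coef  # [intercept, slope]

def estimated_savings(coef, post_temps, post_loads):
    predicted = coef[0] + coef[1] * post_temps   # counterfactual baseline
    return np.sum(predicted - post_loads)        # avoided energy use

rng = np.random.default_rng(0)
pre_T = rng.uniform(5, 30, 365);  pre_E = 120 + 4.0 * pre_T + rng.normal(0, 5, 365)
post_T = rng.uniform(5, 30, 365); post_E = 100 + 3.5 * post_T + rng.normal(0, 5, 365)

coef = fit_baseline(pre_T, pre_E)
print(f"estimated annual savings: {estimated_savings(coef, post_T, post_E):.0f} kWh")
```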
NASA Technical Reports Server (NTRS)
Milner, E. J.; Krosel, S. M.
1977-01-01
Techniques are presented for determining the elements of the A, B, C, and D state variable matrices for systems simulated on an EAI Pacer 100 hybrid computer. An automated procedure systematically generates the disturbance data necessary to linearize the simulation model and stores these data on a floppy disk. A separate digital program verifies these data, calculates the elements of the system matrices, and prints these matrices appropriately labeled. The partial derivatives forming the elements of the state variable matrices are approximated by finite difference calculations.
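The finite-difference step described here generalizes directly: perturb one state or input at a time and difference the state derivatives. A hedged Python sketch follows; the model f is a stand-in, not the Pacer 100 simulation.

```python
import numpy as np

# Finite-difference linearization: approximate A = df/dx and B = df/du of
# dx/dt = f(x, u) by perturbing one state or input at a time.

def linearize(f, x0, u0, eps=1e-5):
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):                     # perturb each state
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):                     # perturb each input
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

f = lambda x, u: np.array([-2.0 * x[0] + x[1], -0.5 * x[1] + u[0]])
A, B = linearize(f, x0=np.array([1.0, 0.5]), u0=np.array([0.2]))
print(A); print(B)
```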
NASA Technical Reports Server (NTRS)
Klunker, E. B.; South, J. C., Jr.; Davis, R. M.
1972-01-01
A user's manual for a computer program which calculates the supersonic flow about circular, elliptic, and bielliptic cones at incidence and elliptic cones at yaw by the method of lines is presented. The program is automated to compute a case from a known or easily calculated solution by changing the parameters through a sequence of steps. It provides information including the shock shape, flow field, isentropic surface properties, entropy layer, and force coefficients. A description of the program operation, sample computations, and a FORTRAN 4 listing are presented.
Automated data acquisition and processing for a Hohlraum reflectometer
NASA Technical Reports Server (NTRS)
Difilippo, Frank; Mirtich, Michael J.
1988-01-01
A computer and data acquisition board were used to automate a Perkin-Elmer Model 13 spectrophotometer with a Hohlraum reflectivity attachment. Additional electronic circuitry was necessary for amplification, filtering, and debouncing. The computer was programmed to calculate spectral emittance from 1.7 to 14.7 micrometers and also total emittance versus temperature. Automation of the Hohlraum reflectometer reduced the time required to determine total emittance versus temperature from about three hours to about 40 minutes.
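The reduction from spectral to total emittance mentioned here is a blackbody-weighted average over the measured band (1.7 to 14.7 micrometers). A hedged Python sketch, using the standard radiation constants and a made-up spectral emittance curve:

```python
import numpy as np

# Total emittance as the blackbody-weighted band average of spectral emittance.

C1 = 3.7418e-16   # W m^2, first radiation constant
C2 = 1.4388e-2    # m K, second radiation constant

def planck(lam, T):
    """Blackbody spectral exitance at wavelength lam (m), temperature T (K)."""
    return C1 / lam**5 / (np.exp(C2 / (lam * T)) - 1.0)

def total_emittance(lam, eps_spectral, T):
    w = planck(lam, T)
    return np.trapz(eps_spectral * w, lam) / np.trapz(w, lam)

lam = np.linspace(1.7e-6, 14.7e-6, 200)    # the 1.7-14.7 um band
eps = 0.85 - 10000.0 * lam                 # illustrative spectral emittance
print(f"total emittance at 600 K: {total_emittance(lam, eps, 600.0):.3f}")
```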
[The use of programmed microcalculators for automation of leukocyte count].
Plykin, D L
1989-01-01
Soviet programmed microcalculators are recommended for calculating the leukocyte formula in serial blood analyses at clinical laboratories. The suggested program completely automates the tallying of the leukocyte types detectable on microscopic examination of blood smears; the results may be obtained as a percentage ratio of the cells (the form most prevalent nowadays) and as their number per microliter of blood. The service elements in the program considerably simplify the work, making it convenient for an untrained user of the microcalculator. Since commercial Soviet programmed microcalculators differ somewhat in their systems of program steps, two variants of the program are suggested, adapted to the two most prevalent designs.
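The two outputs described, a percentage differential and absolute counts per microliter, amount to simple proportions; a Python sketch with illustrative numbers:

```python
# Differential count as percentages and as absolute counts per microliter,
# given a total WBC concentration. Cell tallies are illustrative.

counts = {"neutrophils": 58, "lymphocytes": 30, "monocytes": 7,
          "eosinophils": 4, "basophils": 1}          # cells tallied per type
total_wbc_per_ul = 6800                               # total WBC count per uL

n = sum(counts.values())
for cell, c in counts.items():
    pct = 100.0 * c / n
    abs_per_ul = total_wbc_per_ul * c / n
    print(f"{cell:12s} {pct:5.1f}%  {abs_per_ul:7.0f} /uL")
```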
A semi-automated tool for treatment plan-quality evaluation and clinical trial quality assurance
NASA Astrophysics Data System (ADS)
Wang, Jiazhou; Chen, Wenzhou; Studenski, Matthew; Cui, Yunfeng; Lee, Andrew J.; Xiao, Ying
2013-07-01
The goal of this work is to develop a plan-quality evaluation program for clinical routine and multi-institutional clinical trials so that the overall evaluation efficiency is improved. In multi-institutional clinical trials, evaluating plan quality is a time-consuming and labor-intensive process. In this note, we present a semi-automated plan-quality evaluation program which combines MIMVista, Java/MATLAB, and extensible markup language (XML). More specifically, MIMVista is used for data visualization; Java and its powerful function library are used for calculating dosimetry parameters; and XML is applied to improve the clarity of the index definitions. The accuracy and the efficiency of the program were evaluated by comparing the results of the program with manually recorded results in two RTOG trials. A slight difference of about 0.2% in volume or 0.6 Gy in dose between the semi-automated program and manual recording was observed. According to the criteria of the indices, there are minimal differences between the two methods. The evaluation time is reduced from 10-20 min to 2 min by applying the semi-automated plan-quality evaluation program.
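Dose-volume indices of the kind such a program computes can be sketched in a few lines; the V20/D95 definitions below are common conventions, while the actual RTOG index definitions live in the program's XML files, and the toy voxel doses are assumptions.

```python
import numpy as np

# Two common dose-volume-histogram indices over one structure's voxel doses.

def v_at_dose(dose, threshold_gy):
    """Fraction of the structure volume receiving at least threshold_gy."""
    return np.mean(dose >= threshold_gy)

def d_at_volume(dose, volume_fraction):
    """Minimum dose received by the hottest `volume_fraction` of the volume."""
    return np.percentile(dose, 100.0 * (1.0 - volume_fraction))

dose = np.clip(np.random.default_rng(1).normal(25.0, 10.0, 10000), 0, None)
print(f"V20 = {100 * v_at_dose(dose, 20.0):.1f}% of volume")
print(f"D95 = {d_at_volume(dose, 0.95):.1f} Gy")
```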
NASA Astrophysics Data System (ADS)
Makarycheva, A. I.; Faerman, V. A.
2017-02-01
An analysis of automation patterns is performed, and a programming solution for automating the processing of chromatographic data and their subsequent storage is developed with the help of a software package, Mathcad, and MS Excel spreadsheets. The offered approach allows the data processing algorithm to be modified and does not require the participation of programming experts. The approach provides measurement of retention times and retention volumes, specific retention volumes, differential molar free energies of adsorption, partial molar solution enthalpies, and isosteric heats of adsorption. The developed solution is aimed at use in a small research group and was tested on a series of new gas chromatography sorbents. More than 20 analytes were processed to calculate retention parameters and thermodynamic sorption quantities. The resulting data are provided in a form accessible to comparative analysis and make it possible to find sorbents with the most favorable properties for solving specific analytical problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Touzani, Samir; Taylor, Cody
Trustworthy savings calculations are critical to convincing regulators of both the cost-effectiveness of energy efficiency program investments and their ability to defer supply-side capital investments. Today’s methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of energy efficiency programs. They also require time-consuming data acquisition. A spectrum of savings calculation approaches is used, with some relying more heavily on measured data and others relying more heavily on estimated, modeled, or stipulated data. The rising availability of “smart” meters and devices that report near-real time data, combined with new analytical approaches to quantifying savings, offers potential to conduct M&V more quickly and at lower cost, with comparable or improved accuracy. Commercial energy management and information systems (EMIS) technologies are beginning to offer M&V capabilities, and program administrators want to understand how they might assist programs in quickly and accurately measuring energy savings. This paper presents the results of recent testing of the ability to use automation to streamline some parts of M&V. Here in this paper, we detail metrics to assess the performance of these new M&V approaches, and a framework to compute the metrics. We also discuss the accuracy, cost, and time trade-offs between more traditional M&V, and these emerging streamlined methods that use high-resolution energy data and automated computational intelligence. Finally we discuss the potential evolution of M&V and early results of pilots currently underway to incorporate M&V automation into ratepayer-funded programs and professional implementation and evaluation practice.
Prospective validation of a near real-time EHR-integrated automated SOFA score calculator.
Aakre, Christopher; Franco, Pablo Moreno; Ferreyra, Micaela; Kitson, Jaben; Li, Man; Herasevich, Vitaly
2017-07-01
We created an algorithm for automated Sequential Organ Failure Assessment (SOFA) score calculation within the Electronic Health Record (EHR) to facilitate detection of sepsis based on the Third International Consensus Definitions for Sepsis and Septic Shock (SEPSIS-3) clinical definition. We evaluated the accuracy of near real-time and daily automated SOFA score calculation compared with manual score calculation. Automated SOFA scoring computer programs were developed using available EHR data sources and integrated into a critical care focused patient care dashboard at Mayo Clinic in Rochester, Minnesota. We prospectively compared the accuracy of automated versus manual calculation for a sample of patients admitted to the medical intensive care unit at Mayo Clinic Hospitals in Rochester, Minnesota and Jacksonville, Florida. Agreement was calculated with Cohen's kappa statistic. Reason for discrepancy was tabulated during manual review. Random spot check comparisons were performed 134 times on 27 unique patients, and daily SOFA score comparisons were performed for 215 patients over a total of 1206 patient days. Agreement between automatically scored and manually scored SOFA components for both random spot checks (696 pairs, κ=0.89) and daily calculation (5972 pairs, κ=0.89) was high. The most common discrepancies were in the respiratory component (inaccurate fraction of inspired oxygen retrieval; 200/1206) and creatinine (normal creatinine in patients with no urine output on dialysis; 128/1094). 147 patients were at risk of developing sepsis after intensive care unit admission; 10 later developed sepsis confirmed by chart review. All were identified before onset of sepsis with the ΔSOFA≥2 point criterion, and 46 patients were false positives. Near real-time automated SOFA scoring was found to have strong agreement with manual score calculation and may be useful for the detection of sepsis utilizing the new SEPSIS-3 definition. Copyright © 2017 Elsevier B.V. All rights reserved.
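The SEPSIS-3 trigger used here, a rise of at least 2 SOFA points from baseline, can be sketched with two of the six SOFA components; the cutoffs below follow the published SOFA tables but are simplified (e.g., respiratory scores of 3-4 formally require ventilatory support), and the patient values are invented.

```python
# Simplified two-component SOFA sketch with the delta-SOFA >= 2 criterion.

def sofa_respiratory(pao2_fio2):
    if pao2_fio2 < 100: return 4   # (with ventilatory support, per SOFA)
    if pao2_fio2 < 200: return 3   # (with ventilatory support, per SOFA)
    if pao2_fio2 < 300: return 2
    if pao2_fio2 < 400: return 1
    return 0

def sofa_renal(creatinine_mg_dl):
    if creatinine_mg_dl >= 5.0: return 4
    if creatinine_mg_dl >= 3.5: return 3
    if creatinine_mg_dl >= 2.0: return 2
    if creatinine_mg_dl >= 1.2: return 1
    return 0

baseline = sofa_respiratory(380) + sofa_renal(1.0)   # 1 + 0 = 1
current  = sofa_respiratory(250) + sofa_renal(2.3)   # 2 + 2 = 4
print("possible sepsis" if current - baseline >= 2 else "below threshold")
```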
NASA Astrophysics Data System (ADS)
Yussup, N.; Rahman, N. A. A.; Ibrahim, M. M.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.
2017-01-01
The Neutron Activation Analysis (NAA) process has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s. Most of the established procedures, especially from sample registration to sample analysis, are performed manually. These manual procedures carried out by the NAA laboratory personnel are time consuming and inefficient. Hence, software to support system automation was developed to provide an effective method to replace redundant manual data entries and produce a faster sample analysis and calculation process. This paper describes the design and development of automation software for the NAA process, which consists of three sub-programs: sample registration; hardware control and data acquisition; and sample analysis. The data flow and connection between the sub-programs are explained. The software is developed using the National Instruments LabVIEW development package.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe a procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
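The core idea, posing formula calculation as a mixed integer linear program, can be sketched for a single ion: choose integer element counts minimizing the absolute mass error. The PuLP package is an assumed dependency here, the valence-style constraint is a crude stand-in, and the real RAMSI additionally couples many related ions (adducts, fragments, multimers) through linear constraints.

```python
import pulp

# MILP sketch: fit integer element counts to one observed neutral mass,
# minimizing |calculated - observed| (linearized with an auxiliary variable).

masses = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}  # monoisotopic (u)
bounds = {"C": 40, "H": 80, "N": 10, "O": 20}
observed = 180.06339                      # e.g., a hexose-like neutral mass

prob = pulp.LpProblem("formula", pulp.LpMinimize)
n = {e: pulp.LpVariable(e, 0, bounds[e], cat="Integer") for e in masses}
err = pulp.LpVariable("abs_error", lowBound=0)

calc = pulp.lpSum(n[e] * masses[e] for e in masses)
prob += err                               # objective: absolute mass error
prob += calc - observed <= err            # |calc - observed| linearization
prob += observed - calc <= err
prob += n["H"] <= 2 * n["C"] + n["N"] + 2 # crude valence-style constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({e: int(n[e].value()) for e in masses}, "error:", err.value())
```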
Automated Analysis, Classification, and Display of Waveforms
NASA Technical Reports Server (NTRS)
Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff
2004-01-01
A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms -- for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
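The feature-extraction pipeline described (covariance of segmented waveforms, top three eigenvectors, projection to 3-D features) maps directly to a few lines of NumPy; the random "waveforms" below stand in for the 100 Hz APU data.

```python
import numpy as np

# PCA sketch: covariance of centered waveforms, top three eigenvectors,
# projection of each waveform onto them to get a 3-D feature point.

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(50, 600))            # 50 segments, 600 samples

centered = waveforms - waveforms.mean(axis=0)
cov = np.cov(centered, rowvar=False)              # 600 x 600 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
top3 = eigvecs[:, -3:][:, ::-1]                   # three principal components

features = centered @ top3                        # one 3-D point per waveform
print(features.shape)                             # (50, 3) -> 3-D scatter plot
```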
Singh, J; Thornton, J M
1990-02-05
Automated methods have been developed to determine the preferred packing arrangement between interacting protein groups. A suite of FORTRAN programs, SIRIUS, is described for calculating and analysing the geometries of interacting protein groups using crystallographically derived atomic co-ordinates. The programs involved in calculating the geometries search for interacting pairs of protein groups using a distance criterion, and then calculate the spatial disposition and orientation of the pair. The second set of programs is devoted to analysis. This involves calculating the observed and expected distributions of the angles and assessing the statistical significance of the difference between the two. A database of the geometries of the 400 combinations of side-chain to side-chain interaction has been created. The approach used in analysing the geometrical information is illustrated here with specific examples of interactions between side-chains, peptide groups and particular types of atom. At the side-chain level, an analysis of aromatic-amino interactions, and the interactions of peptide carbonyl groups with arginine residues is presented. At the atomic level the analyses include the spatial disposition of oxygen atoms around tyrosine residues, and the frequency and type of contact between carbon, nitrogen and oxygen atoms. This information is currently being applied to the modelling of protein interactions.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
Microcomputer software for calculating the western Oregon elk habitat effectiveness index.
Alan Ager; Mark Hitchcock
1992-01-01
This paper describes the operation of the microcomputer program HEIWEST, which was developed to automate calculation of the western Oregon elk habitat effectiveness index (HEI). HEIWEST requires little or no training to operate and vastly simplifies the task of measuring HEI for either site-specific project analysis or long-term monitoring of elk habitat. It is...
NASA Technical Reports Server (NTRS)
Schlosser, H.
1981-01-01
The self-consistent calculation of the electronic energy levels of noble gas pseudomolecules formed when a metal surface is bombarded by noble gas ions is discussed, along with the construction of energy level correlation diagrams as a function of interatomic spacing. The self-consistent field Xα scattered wave (SCF-Xα-SW) method is utilized. Preliminary results on the Ne-Mg system are given. An interactive Xα programming system, implemented on the LeRC IBM 370 computer, is described in detail. This automated system makes use of special PROCDEFS (procedure definitions) to minimize the data to be entered manually at a remote terminal. Listings of the special PROCDEFS and of typical input data are given.
Barminova, H Y; Saratovskyh, M S
2016-02-01
The experiment automation system is supposed to be developed for the experimental facility for material science at ITEP, based on a Bernas ion source. The program CAMFT is assumed to be involved in the program of the experiment automation. CAMFT is developed to simulate intense charged particle bunch motion in external magnetic fields with arbitrary geometry by means of accurate solution of the particle motion equation. The program allows consideration of bunch intensities up to 10^10 ppb. Preliminary calculations are performed on the ITEP supercomputer. The results of the simulation of the beam pre-acceleration and following turn in the magnetic field are presented for different initial conditions.
aMCfast: automation of fast NLO computations for PDF fits
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Frederix, Rikkert; Frixione, Stefano; Rojo, Juan; Sutton, Mark
2014-08-01
We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is its being fully automated as well, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix-element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have a good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.
Performance Test Data Analysis of Scintillation Cameras
NASA Astrophysics Data System (ADS)
Demirkaya, Omer; Mazrou, Refaat Al
2007-10-01
In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association (NEMA) NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy. They have been extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study have been partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed vibrational harmonic frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a certain method and basis set. Four other Python programs, input.py, input6.py, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
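Scale-factor optimizations of this kind reduce to a one-parameter least-squares fit: choose lambda minimizing the sum of (lambda * calc - ref)^2, which has the closed form lambda = sum(ref*calc)/sum(calc^2). Whether FREQ uses exactly this closed form is not stated in the abstract, and the ZPE values below are illustrative, not the 15- or 6-molecule reference sets.

```python
# Least-squares scale factor: lambda = sum(ref*calc) / sum(calc^2).

def optimal_scale_factor(calc, ref):
    num = sum(c * r for c, r in zip(calc, ref))
    den = sum(c * c for c in calc)
    return num / den

calc_zpe = [63.2, 45.1, 21.8, 13.9]   # computed ZPEs (kcal/mol), assumed
ref_zpe  = [61.9, 44.3, 21.4, 13.6]   # benchmark ZPEs (kcal/mol), assumed
print(f"ZPE scale factor: {optimal_scale_factor(calc_zpe, ref_zpe):.4f}")
```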
A geometry package for generation of input data for a three-dimensional potential-flow program
NASA Technical Reports Server (NTRS)
Halsey, N. D.; Hess, J. L.
1978-01-01
The preparation of geometric data for input to three-dimensional potential flow programs was automated and simplified by a geometry package incorporated into the NASA Langley version of the 3-D lifting potential flow program. Input to the computer program for the geometry package consists of a very sparse set of coordinate data, often with an order of magnitude of fewer points than required for the actual potential flow calculations. Isolated components, such as wings, fuselages, etc. are paneled automatically, using one of several possible element distribution algorithms. Curves of intersection between components are calculated, using a hybrid curve-fit/surface-fit approach. Intersecting components are repaneled so that adjacent elements on either side of the intersection curves line up in a satisfactory manner for the potential-flow calculations. Many cases may be run completely (from input, through the geometry package, and through the flow calculations) without interruption. Use of the package significantly reduces the time and expense involved in making three-dimensional potential flow calculations.
Calculating Henry’s Constants of Charged Molecules Using SPARC
SPARC (SPARC Performs Automated Reasoning in Chemistry) is a computer program designed to model physical and chemical properties of molecules solely based on their chemical structure. SPARC uses a toolbox of mechanistic perturbation models to model intermolecular interactions. SPARC has ...
Development of computer software for pavement life cycle cost analysis.
DOT National Transportation Integrated Search
1988-01-01
The life cycle cost analysis program (LCCA) is designed to automate and standardize life cycle costing in Virginia. It allows the user to input information necessary for the analysis, and it then completes the calculations and produces a printed copy...
Automated Routines for Calculating Whole-Stream Metabolism: Theoretical Background and User's Guide
Bales, Jerad D.; Nardi, Mark R.
2007-01-01
In order to standardize methods and facilitate rapid calculation and archival of stream-metabolism variables, the Stream Metabolism Program was developed to calculate gross primary production, net ecosystem production, respiration, and selected other variables from continuous measurements of dissolved-oxygen concentration, water temperature, and other user-supplied information. Methods for calculating metabolism from continuous measurements of dissolved-oxygen concentration and water temperature are fairly well known, but a standard set of procedures and computation software for all aspects of the calculations were not available previously. The Stream Metabolism Program addresses this deficiency with a stand-alone executable computer program written in Visual Basic .NET, which runs in the Microsoft Windows environment. All equations and assumptions used in the development of the software are documented in this report. Detailed guidance on application of the software is presented, along with a summary of the data required to use the software. Data from either a single station or paired (upstream, downstream) stations can be used with the software to calculate metabolism variables.
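A common single-station diel-oxygen formulation conveys the bookkeeping involved: dC/dt = GPP - ER + k(Csat - C), with GPP = 0 at night. The sketch below is that generic formulation under stated assumptions (fixed time step, supplied reaeration coefficient k, toy DO curve), not the Stream Metabolism Program's documented equations.

```python
import numpy as np

# Single-station diel-oxygen sketch: estimate ER from the night-time balance,
# then GPP from the daytime balance. Units: mg O2 per liter, hours.

def metabolism(do, do_sat, is_day, k, dt_hours):
    dcdt = np.diff(do) / dt_hours
    reaer = k * (do_sat - do)[:-1]
    net = dcdt - reaer                      # GPP - ER in each interval
    er_rate = -net[~is_day[:-1]].mean()     # night: net = -ER
    gpp = (net[is_day[:-1]] + er_rate).sum() * dt_hours   # day: net = GPP - ER
    er = er_rate * 24.0
    return gpp, er, gpp - er                # GPP, ER, NEP per day

t = np.arange(0, 24, 0.5)
do = 8.0 + 1.2 * np.sin((t - 8) / 24 * 2 * np.pi)         # toy diel DO curve
gpp, er, nep = metabolism(do, np.full_like(do, 9.0), (t > 6) & (t < 18), 0.1, 0.5)
print(f"GPP={gpp:.2f} ER={er:.2f} NEP={nep:.2f} mg O2/L/day")
```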
Automated calculation of surface energy fluxes with high-frequency lake buoy data
Woolway, R. Iestyn; Jones, Ian D; Hamilton, David P.; Maberly, Stephen C; Muroaka, Kohji; Read, Jordan S.; Smyth, Robyn L; Winslow, Luke A.
2015-01-01
Lake Heat Flux Analyzer is a program used for calculating the surface energy fluxes in lakes according to established literature methodologies. The program was developed in MATLAB for the rapid analysis of high-frequency data from instrumented lake buoys in support of the emerging field of aquatic sensor network science. To calculate the surface energy fluxes, the program requires a number of input variables, such as air and water temperature, relative humidity, wind speed, and short-wave radiation. Available outputs for Lake Heat Flux Analyzer include the surface fluxes of momentum, sensible heat, and latent heat, their corresponding transfer coefficients, and incoming and outgoing long-wave radiation. Lake Heat Flux Analyzer is open source and can be used to process data from multiple lakes rapidly. It provides a means of calculating the surface fluxes using a consistent method, thereby facilitating global comparisons of high-frequency data from lake buoys.
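The bulk-aerodynamic forms behind such flux calculations can be sketched briefly; the fixed transfer coefficients and Magnus saturation-vapor-pressure formula below are textbook assumptions, whereas the real tool computes coefficients iteratively with atmospheric stability corrections.

```python
import math

# Bulk sensible (H) and latent (LE) heat flux sketch, W/m^2.

RHO_AIR = 1.2       # kg/m^3
CP_AIR  = 1005.0    # J/(kg K)
LV      = 2.453e6   # J/kg, latent heat of vaporization
CH = CE = 1.3e-3    # bulk transfer coefficients (assumed constant)

def sat_vapor_pressure(t_c):
    """Saturation vapor pressure (Pa), Magnus form."""
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

def fluxes(u10, t_water, t_air, rh, pressure=101325.0):
    q_s = 0.622 * sat_vapor_pressure(t_water) / pressure        # at the surface
    q_a = 0.622 * rh / 100.0 * sat_vapor_pressure(t_air) / pressure
    H  = RHO_AIR * CP_AIR * CH * u10 * (t_water - t_air)        # sensible
    LE = RHO_AIR * LV * CE * u10 * (q_s - q_a)                  # latent
    return H, LE

print(fluxes(u10=5.0, t_water=22.0, t_air=18.0, rh=60.0))
```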
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
MO-D-213-07: RadShield: Semi-Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing NCRP Report 147 formalism into a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions, and x-ray tube locations. We have implemented sub-GUIs that allow occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances, and workload distributions to be specified for regions and equipment. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location, and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information, and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs, and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiographic and fluoroscopic rooms.
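The NCRP Report 147 workflow being automated reduces to two steps: required transmission from the design goal, then thickness by inverting the Archer transmission model B(x) = [(1 + b/a) e^(a g x) - b/a]^(-1/g). In the sketch below, the a, b, g values are representative of published lead fits near 100 kVp and all inputs are illustrative; real designs should take coefficients from the NCRP 147 tables.

```python
import math

# NCRP 147-style barrier sizing sketch.

def required_transmission(P, d, N, T, K1):
    """P: design goal (mGy/wk); d: distance (m); N: patients/wk;
    T: occupancy factor; K1: unshielded air kerma per patient at 1 m (mGy)."""
    return P * d**2 / (T * N * K1)

def archer_thickness(B, a, b, g):
    """Invert the Archer equation for the barrier thickness x (mm)."""
    return (1.0 / (a * g)) * math.log((B**(-g) + b / a) / (1.0 + b / a))

B = required_transmission(P=0.02, d=3.0, N=120, T=1.0, K1=5.2)
x = archer_thickness(B, a=2.500, b=15.28, g=0.7557)   # ~100 kVp lead fit
print(f"transmission {B:.2e} -> ~{x:.2f} mm lead")
```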
NASA Technical Reports Server (NTRS)
Wynn, L. K.
1985-01-01
The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equaled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
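The comparison statistics reported here can be sketched over two co-registered class maps; the random maps below merely stand in for the DMA and IBIS CCM images, and the exact regional definitions are assumptions.

```python
import numpy as np

# Per-class percent agreement, area-weighted mean/std, and Pearson's r
# between regional pixel counts of two class maps.

rng = np.random.default_rng(2)
dma = rng.integers(0, 5, size=(200, 200))                    # 5 CCM classes
ibis = np.where(rng.random((200, 200)) < 0.88, dma,
                rng.integers(0, 5, size=(200, 200)))         # mostly matching map

classes = np.unique(dma)
agree = np.array([100.0 * np.mean(ibis[dma == c] == c) for c in classes])
area = np.array([np.sum(dma == c) for c in classes])

mean_agree = np.average(agree, weights=area)
std_agree = np.sqrt(np.average((agree - mean_agree) ** 2, weights=area))

counts_ibis = np.array([np.sum(ibis == c) for c in classes])
r = np.corrcoef(area, counts_ibis)[0, 1]
print(f"mean agreement {mean_agree:.2f}%  weighted std {std_agree:.2f}  r = {r:+.3f}")
```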
1986 Petroleum Software Directory. [800 mini, micro and mainframe computer software packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-01-01
Pennwell's 1986 Petroleum Software Directory is a complete listing of software created specifically for the petroleum industry. Details are provided on over 800 mini, micro and mainframe computer software packages from more than 250 different companies. An accountant can locate programs to automate bookkeeping functions in large oil and gas production firms. A pipeline engineer will find programs designed to calculate line flow and wellbore pressure drop.
Ren, Guo-Ping; Yan, Jia-Qing; Yu, Zhi-Xin; Wang, Dan; Li, Xiao-Nan; Mei, Shan-Shan; Dai, Jin-Dong; Li, Xiao-Li; Li, Yun-Lin; Wang, Xiao-Fei; Yang, Xiao-Feng
2018-02-01
High frequency oscillations (HFOs) are considered a biomarker for epileptogenicity. Reliable automation of HFO detection is necessary for rapid and objective analysis, and is determined by accurate computation of the baseline. Although most existing automated detectors measure the baseline accurately in channels with rare HFOs, they lose accuracy in channels with frequent HFOs. Here, we propose a novel algorithm using the maximum distributed peak points method to improve baseline determination accuracy in channels with wide HFO activity ranges and to calculate a dynamic baseline. Interictal ripples (80-200 Hz), fast ripples (FRs, 200-500 Hz) and baselines in intracerebral EEGs from seven patients with intractable epilepsy were identified by experienced reviewers and by our computer-automated program, and the results were compared. We also compared the performance of our detector to four well-known detectors integrated in RIPPLELAB. The sensitivity and specificity of our detector were, respectively, 71% and 75% for ripples and 66% and 84% for FRs. Spearman's rank correlation coefficient comparing automated and manual detection was [Formula: see text] for ripples and [Formula: see text] for FRs ([Formula: see text]). In comparison to other detectors, our detector had relatively higher sensitivity and specificity. In conclusion, our automated detector is able to accurately calculate a dynamic iEEG baseline in different HFO activity channels using the maximum distributed peak points method, resulting in higher sensitivity and specificity than other available HFO detectors.
Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F
2018-03-01
When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could accurately audit radiologist compliance with the use of standardized reports as compared with manually performed audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rates calculated by automated auditing were then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2% with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that by use of natural language processing and machine learning algorithms, an automated analysis can accurately define whether reports are compliant with use of standardized report templates and language, compared with manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-08-01
We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection (where station components with various instrumental disturbances are rejected), and full-waveform inversion in a space-time grid around a provided hypocentre. A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied on a data set from the Swiss seismic network and the results are compared with the existing high-quality MT catalogue. The software package programmed in Python is designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large pre-existing earthquake catalogues and data sets.
1986-09-01
implement a computer program as a function of the Function Point Total. As shown in Table 9, the software product (referred to as SPQR) establishes the ... language being used. Source code statements are defined in SPQR as consisting of executable statements and data definitions. The factors used to calculate ... TABLE 9: Number of computer program source statements per Function Point Total. (SPQR is a trademark of Software Productivity Research, Inc.)
Fully automated calculation of cardiothoracic ratio in digital chest radiographs
NASA Astrophysics Data System (ADS)
Cong, Lin; Jiang, Luan; Chen, Gang; Li, Qiang
2017-03-01
The calculation of the Cardiothoracic Ratio (CTR) in digital chest radiographs would be useful for cardiac anomaly assessment and for indicating heart-enlargement-related disease. The purpose of this study was to develop and evaluate a fully automated scheme for calculation of the CTR in digital chest radiographs. Our automated method consisted of three steps, i.e., lung region localization, lung segmentation, and CTR calculation. We manually annotated the lung boundary with 84 points in 100 digital chest radiographs and calculated an average lung model for the subsequent work. Firstly, in order to localize the lung region, a generalized Hough transform was employed to identify the upper, lower, and outer boundaries of the lungs by use of Sobel gradient information. The average lung model was aligned to the localized lung region to obtain the initial lung outline. Secondly, we separately applied a dynamic programming method to detect the upper, lower, outer, and inner boundaries of the lungs, and then linked the four boundaries to segment the lungs. Based on the identified outer boundaries of the left and right lungs, we corrected the center and the declination of the original radiograph. Finally, the CTR was calculated as the ratio of the transverse diameter of the heart to the internal diameter of the chest, based on the segmented lungs. The preliminary results on 106 digital chest radiographs showed that the proposed method could obtain accurate segmentation of the lungs based on subjective observation, and achieved a sensitivity of 88.9% (40 of 45 abnormalities) and a specificity of 100% (61 of 61 normal cases) for the identification of heart enlargement.
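The final ratio step is simple once masks exist: widest transverse heart diameter over internal chest diameter. A sketch under stated assumptions (toy rectangular masks rather than the segmented boundaries described above):

```python
import numpy as np

# CTR from binary masks: transverse heart width / internal chest width.

def transverse_width(mask):
    cols = np.where(mask.any(axis=0))[0]       # columns touched by the mask
    return cols.max() - cols.min() + 1

img_h, img_w = 512, 512
lungs = np.zeros((img_h, img_w), bool); lungs[100:400, 80:440] = True
heart = np.zeros((img_h, img_w), bool); heart[220:380, 190:350] = True

ctr = transverse_width(heart) / transverse_width(lungs)
print(f"CTR = {ctr:.2f}  (> 0.5 is a common enlargement criterion)")
```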
Lithography-based automation in the design of program defect masks
NASA Astrophysics Data System (ADS)
Vakanas, George P.; Munir, Saghir; Tejnil, Edita; Bald, Daniel J.; Nagpal, Rajesh
2004-05-01
In this work, we are reporting on a lithography-based methodology and automation in the design of Program Defect Masks (PDMs). Leading edge technology masks have ever-shrinking primary features and more pronounced model-based secondary features such as optical proximity corrections (OPC), sub-resolution assist features (SRAFs) and phase-shifted mask (PSM) structures. In order to define defect disposition specifications for critical layers of a technology node, experience alone in deciding worst-case scenarios for the placement of program defects is necessary but may not be sufficient. MEEF calculations initiated from layout pattern data and their integration in a PDM layout flow provide a natural approach for improvements, relevance and accuracy in the placement of programmed defects. This methodology provides closed-loop feedback between layout and hard defect disposition specifications, thereby minimizing engineering test restarts, improving quality and reducing cost of high-end masks. Apart from SEMI and industry standards, best-known methods (BKMs) in integrated lithographically-based layout methodologies and automation specific to PDMs are scarce. The contribution of this paper lies in the implementation of Design-For-Test (DFT) principles to a synergistic interaction of CAD Layout and Aerial Image Simulator to drive layout improvements, highlight layout-to-fracture interactions and output accurate program defect placement coordinates to be used by tools in the mask shop.
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Ganesh, Vijay; Lakhnech, Yassine; Munoz, Cesar; Owre, Sam; Ruess, Harald; Rushby, John; Rusu, Vlad; Saiedi, Hassen; Shankar, N.
2000-01-01
To become practical for assurance, automated formal methods must be made more scalable, automatic, and cost-effective. Such an increase in scope, scale, automation, and utility can be derived from an emphasis on a systematic separation of concerns during verification. SAL (Symbolic Analysis Laboratory) attempts to address these issues. It is a framework for combining different tools to calculate properties of concurrent systems. The heart of SAL is a language, developed in collaboration with Stanford, Berkeley, and Verimag, for specifying concurrent systems in a compositional way. Our instantiation of the SAL framework augments PVS with tools for abstraction, invariant generation, program analysis (such as slicing), theorem proving, and model checking to separate concerns as well as calculate properties (i.e., perform symbolic analysis) of concurrent systems. We describe the motivation, the language, the tools, their integration in SAL/PVS, and some preliminary experience of their use.
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
An automated procedure for developing hybrid computer simulations of turbofan engines
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.
1980-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design point information. A test case is described and comparisons between hybrid simulation and specified engine performance data are presented.
Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs
NASA Astrophysics Data System (ADS)
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study the advantages offered by the application of traditional techniques of technical drawing in processes for automation of the design, with non-parametric CAD programs provided with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize the solution later, incorporating references. In today's modern CAD applications, there are striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process of several complex drawing examples based on CAD script files aided with parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently incorporated in current CAD applications and to subsequently create other new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human errors.
Automation of Educational Tasks for Academic Radiology.
Lamar, David L; Richardson, Michael L; Carlson, Blake
2016-07-01
The process of education involves a variety of repetitious tasks. We believe that appropriate computer tools can automate many of these chores, and allow both educators and their students to devote a lot more of their time to actual teaching and learning. This paper details tools that we have used to automate a broad range of academic radiology-specific tasks on Mac OS X, iOS, and Windows platforms. Some of the tools we describe here require little expertise or time to use; others require some basic knowledge of computer programming. We used TextExpander (Mac, iOS) and AutoHotKey (Win) for automated generation of text files, such as resident performance reviews and radiology interpretations. Custom statistical calculations were performed using TextExpander and the Python programming language. A workflow for automated note-taking was developed using Evernote (Mac, iOS, Win) and Hazel (Mac). Automated resident procedure logging was accomplished using Editorial (iOS) and Python. We created three variants of a teaching session logger using Drafts (iOS) and Pythonista (iOS). Editorial and Drafts were used to create flashcards for knowledge review. We developed a mobile reference management system for iOS using Editorial. We used the Workflow app (iOS) to automatically generate a text message reminder for daily conferences. Finally, we developed two separate automated workflows-one with Evernote (Mac, iOS, Win) and one with Python (Mac, Win)-that generate simple automated teaching file collections. We have beta-tested these workflows, techniques, and scripts on several of our fellow radiologists. All of them expressed enthusiasm for these tools and were able to use one or more of them to automate their own educational activities. Appropriate computer tools can automate many educational tasks, and thereby allow both educators and their students to devote a lot more of their time to actual teaching and learning. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Git, K-A; Fioravante, L A B; Fernandes, J L
2015-09-01
To assess whether an online open-source tool would provide accurate calculations of T2(*) values for iron concentrations in the liver and heart compared with standard reference software. An online open-source tool, written in pure HTML5/Javascript, was tested in 50 patients (age 26.0 ± 18.9 years, 46% males) who underwent T2(*) MRI of the liver and heart for iron overload assessment as part of their routine workup. Automated truncation correction was the default, with optional manual adjustment provided if needed. The results were compared against a standard reference measurement using commercial software with manual truncation (CVI(42)(®) v. 5.1; Circle Cardiovascular Imaging; Calgary, AB). The mean liver T2(*) value calculated with the automated tool was 4.3 ms [95% confidence interval (CI) 3.1 to 5.5 ms] vs 4.26 ms using the reference software (95% CI 3.1 to 5.4 ms), without any significant difference (p = 0.71). In the liver, the mean difference was 0.036 ms (95% CI -0.1609 to 0.2329 ms) with a regression correlation coefficient of 0.97. For the heart, the automated T2(*) value was 26.0 ms (95% CI 22.9 to 29.0 ms) vs 25.3 ms (95% CI 22.3 to 28.3 ms), p = 0.28. The mean difference was 0.72 ms (95% CI 0.08191 to 1.3621 ms) with a correlation coefficient of 0.96. The automated online tool provides liver and myocardial T2(*) values similar to those of the standard reference software. The online program is an open-source tool for the calculation of T2(*) values, incorporating an automated correction algorithm in a simple and easy-to-use interface.
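For readers interested in the underlying computation, the following Python sketch fits the mono-exponential signal model S(TE) = S0 * exp(-TE/T2*) and applies a simple automated truncation rule (retain the echo subset with the best fit). The truncation criterion and starting values are assumptions for the sketch, not the published tool's exact algorithm.

```python
# Minimal sketch of a T2* fit with automated truncation, assuming the model
# S(TE) = S0 * exp(-TE / T2star). Keeping the echo subset with the highest
# R^2 is an illustrative stand-in for the tool's truncation correction.
import numpy as np
from scipy.optimize import curve_fit

def monoexp(te, s0, t2star):
    return s0 * np.exp(-te / t2star)

def fit_t2star(te, signal):
    best = None
    for n in range(len(te), 2, -1):        # full curve, then drop late echoes
        popt, _ = curve_fit(monoexp, te[:n], signal[:n],
                            p0=(signal[0], 10.0), maxfev=5000)
        resid = signal[:n] - monoexp(te[:n], *popt)
        r2 = 1 - np.sum(resid**2) / np.sum((signal[:n] - signal[:n].mean())**2)
        if best is None or r2 > best[0]:
            best = (r2, popt[1])
    return best[1]                          # T2* in the units of te

te = np.array([1.2, 3.2, 5.2, 7.2, 9.2, 11.2, 13.2, 15.2])   # echo times, ms
signal = 1000.0 * np.exp(-te / 4.3) + 20.0  # synthetic decay with noise floor
print(f"T2* = {fit_t2star(te, signal):.1f} ms")
```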
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
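The diffraction-theory step can be sketched as follows: for a regular cell mosaic, the power spectrum of the image shows a ring at a spatial frequency f of roughly 1/(cell spacing), so density scales as f squared. The radial-averaging approach and the hexagonal-packing constant below are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: estimate endothelial cell density from the dominant ring in
# the 2-D power spectrum, assuming hexagonal packing (density = 2/sqrt(3)*f^2
# for spacing 1/f). Input is a square grayscale image as a NumPy array.
import numpy as np

def cell_density_fft(img, pixel_size_mm):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    n = img.shape[0]
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), spec.ravel()) / counts
    peak = int(np.argmax(radial[3:n // 2])) + 3    # skip the DC region
    freq = peak / (n * pixel_size_mm)              # cycles per mm
    return (2 / np.sqrt(3)) * freq ** 2            # cells/mm^2, hex assumption

rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))   # stand-in for a specular microscope image
print(cell_density_fft(img, pixel_size_mm=0.001))
```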
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user intervention as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Applications (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generating unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom code for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages, including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
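The flavor of the parsing step can be shown outside a spreadsheet as well. The short Python sketch below scans Gaussian output files for the last "SCF Done:" energy line and compiles the results into a CSV; the folder layout and column names are assumptions, and the sketch is an analogue of, not a substitute for, the VBA tool.

```python
# Illustrative analogue of ExcelAutomat's output parsing: collect the last
# SCF energy from each Gaussian log file in the working directory.
import csv
import glob
import re

PATTERN = re.compile(r"SCF Done:\s+E\([^)]+\)\s*=\s*(-?\d+\.\d+)")

def last_scf_energy(path):
    energy = None
    with open(path, errors="ignore") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                energy = float(m.group(1))   # keep the last occurrence
    return energy

with open("energies.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "scf_energy_hartree"])
    for path in sorted(glob.glob("*.log")):
        writer.writerow([path, last_scf_energy(path)])
```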
Chattanooga Electric Power Board Case Study Distribution Automation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glass, Jim; Melin, Alexander M.; Starke, Michael R.
In 2009, the U.S. Department of Energy under the American Recovery and Reinvestment Act (ARRA) awarded a grant to the Chattanooga, Tennessee, Electric Power Board (EPB) as part of the Smart Grid Investment Grant Program. The grant had the objective “to accelerate the transformation of the nation’s electric grid by deploying smart grid technologies.” This funding award enabled EPB to expedite the original smart grid implementation schedule from an estimated 10-12 years to 2.5 years. With this funding, EPB invested heavily in distribution automation technologies, including installing over 1,200 automated circuit switches and sensors on 171 circuits. For utilities considering a commitment to distribution automation, there are underlying questions such as the following: “What is the value?” and “What are the costs?” This case study attempts to answer these questions. The primary benefit of distribution automation is increased reliability, that is, reduced power outage duration and frequency. Power outages directly impact customer economics by interfering with business functions. In the past, this economic driver has been difficult to evaluate effectively. However, as this case study demonstrates, tools and analysis techniques are now available. In this case study, the customer costs associated with power outages before and after the implementation of distribution automation are compared. Two example evaluations are performed to demonstrate the benefits: 1) a savings baseline for customers under normal operations and 2) customer savings for a single severe weather event. Cost calculations for customer power outages are performed using the US Department of Energy (DOE) Interruption Cost Estimate (ICE) calculator. This tool uses standard metrics associated with outages and the customers to calculate cost impact. The analysis shows that EPB customers have seen significant reliability improvements from the implementation of distribution automation. Under normal operations, the investment in distribution automation has enabled a 43.5% reduction in annual outage minutes since 2012. This has led to an estimated total savings of $26.8 million per year. Examining a single severe weather event, the distribution automation was able to restore power to 40,579 (nearly 56%) customers within 1-2 seconds and reduce outage minutes by 29.0%. This saved customers an estimated $23.2 million over the course of the storm.
Heredia-López, Francisco J; Álvarez-Cervera, Fernando J; Collí-Alfaro, José G; Bata-García, José L; Arankowsky-Sandoval, Gloria; Góngora-Alfaro, José L
2016-12-01
Continuous spontaneous alternation behavior (SAB) in a Y-maze is used for evaluating working memory in rodents. Here, the design of an automated Y-maze equipped with three infrared optocouplers per arm, and commanded by a reduced instruction set computer (RISC) microcontroller is described. The software was devised for recording only true entries and exits to the arms. Experimental settings are programmed via a keyboard with three buttons and a display. The sequence of arm entries and the time spent in each arm and the neutral zone (NZ) are saved as a text file in a non-volatile memory for later transfer to a USB flash memory. Data files are analyzed with a program developed under LabVIEW® environment, and the results are exported to an Excel® spreadsheet file. Variables measured are: latency to exit the starting arm, sequence and number of arm entries, number of alternations, alternation percentage, and cumulative times spent in each arm and NZ. The automated Y-maze accurately detected the SAB decrease produced in rats by the muscarinic antagonist trihexyphenidyl, and its reversal by caffeine, having 100 % concordance with the alternation percentages calculated by two trained observers who independently watched videos of the same experiments. Although the values of time spent in the arms and NZ measured by the automated system had small discrepancies with those calculated by the observers, Bland-Altman analysis showed 95 % concordance in three pairs of comparisons, while in one it was 90 %, indicating that this system is a reliable and inexpensive alternative for the study of continuous SAB in rodents.
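The alternation statistic itself is simple to state: a triplet of successive arm entries counts as an alternation when all three arms differ, and the percentage is alternations divided by (entries - 2). A minimal Python rendering of that standard definition follows (the arm labels are arbitrary):

```python
# Standard continuous-SAB calculation: percentage of successive entry
# triplets that visit three distinct arms.
def alternation_percentage(entries):
    if len(entries) < 3:
        return 0.0
    triplets = [entries[i:i + 3] for i in range(len(entries) - 2)]
    alternations = sum(1 for t in triplets if len(set(t)) == 3)
    return 100.0 * alternations / len(triplets)

print(alternation_percentage(list("ABCACBABCA")))   # arms labelled A, B, C
```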
Automating Partial Period Bond Valuation with Excel's Day Counting Functions
ERIC Educational Resources Information Center
Vicknair, David; Spruell, James
2009-01-01
An Excel model for calculating the actual price of bonds under a 30 day/month, 360 day/year day counting assumption by nesting the DAYS360 function within the PV function is developed. When programmed into an Excel spreadsheet, the model can accommodate annual and semiannual payment bonds sold on or between interest dates using six fundamental…
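Outside Excel, the 30/360 day-count convention at the heart of the model can be written directly; the sketch below is a hedged Python rendering of the US (NASD) 30/360 rule that DAYS360 implements by default, not the article's spreadsheet.

```python
# US (NASD) 30/360 day count: months are treated as 30 days, years as 360.
from datetime import date

def days360(d1, d2):
    dd1 = min(d1.day, 30)
    dd2 = 30 if (dd1 == 30 and d2.day == 31) else d2.day
    return ((d2.year - d1.year) * 360
            + (d2.month - d1.month) * 30
            + (dd2 - dd1))

print(days360(date(2009, 1, 15), date(2009, 7, 1)))   # 166 days
```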
NASA Astrophysics Data System (ADS)
Huber, Matthew S.; Ferriãre, Ludovic; Losiak, Anna; Koeberl, Christian
2011-09-01
Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm for indexing PDFs, which we call the Automated Numerical Index Executor (ANIE) program, was produced and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.
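The core matching step that such an indexing program automates can be sketched briefly: compare the measured angle between the quartz c-axis and a PDF pole against the known angles of low-index crystallographic orientations within a tolerance. The abridged angle table and the 5-degree tolerance below are illustrative assumptions, not ANIE's actual tables or logic.

```python
# Hedged sketch of PDF indexing: match a measured c-axis/pole angle to
# known orientations (abridged table; angles in degrees).
KNOWN_ANGLES = {
    "c (0001)": 0.0,
    "omega {10-13}": 17.62,
    "pi {10-12}": 32.42,
    "r,z {10-11}": 51.79,
    "m {10-10}": 90.0,
}

def index_pdf(measured_angle, tolerance=5.0):
    matches = [(name, abs(measured_angle - angle))
               for name, angle in KNOWN_ANGLES.items()
               if abs(measured_angle - angle) <= tolerance]
    return min(matches, key=lambda m: m[1]) if matches else None

print(index_pdf(18.4))   # closest match: the omega {10-13} orientation
```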
Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam
2017-11-01
Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry, so they are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569,533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
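To give a concrete sense of the extraction step, the sketch below pulls a lab result out of a C-CDA document by LOINC code with Python's standard XML tooling. Real CCDs nest observations inside section/entry acts, so the traversal here is deliberately simplified; the file name is hypothetical, and only the LOINC code for serum creatinine (2160-0) is taken as given.

```python
# Simplified C-CDA extraction: find an observation by LOINC code and return
# its numeric value and unit. Real documents require more careful traversal.
import xml.etree.ElementTree as ET

NS = {"h": "urn:hl7-org:v3"}
LOINC_CREATININE = "2160-0"

def find_lab_value(ccd_path, loinc):
    tree = ET.parse(ccd_path)
    for obs in tree.iter("{urn:hl7-org:v3}observation"):
        code = obs.find("h:code", NS)
        value = obs.find("h:value", NS)
        if code is not None and code.get("code") == loinc and value is not None:
            return float(value.get("value")), value.get("unit")
    return None

# find_lab_value("patient_ccd.xml", LOINC_CREATININE)  # hypothetical file
```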
Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M
2008-01-01
To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of the software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of the design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
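As a schematic of the optimizer/analysis coupling described, the sketch below pairs a placeholder "blade" objective with an inequality constraint using SciPy's SLSQP, which stands in for the feasible-directions method; the functions are invented surrogates, not the geometry, inviscid-flow, or boundary-layer analyses of the paper.

```python
# Placeholder coupling of an analysis function to a constrained optimizer.
import numpy as np
from scipy.optimize import minimize

def objective(x):         # surrogate for, e.g., peak suction-surface velocity
    return (x[0] - 0.6) ** 2 + (x[1] - 1.2) ** 2

def diffusion_margin(x):  # surrogate constraint, must remain >= 0
    return 0.5 - (x[0] * x[1] - 0.5)

result = minimize(objective, x0=np.array([0.4, 1.0]), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": diffusion_margin}])
print(result.x, result.fun)
```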
NASA Astrophysics Data System (ADS)
Brandelik, Andreas
2009-07-01
CALCMIN, an open source Visual Basic program, was implemented in EXCEL™. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals on the basis of chemical analysis mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included in the form of sub-routines. These routines are arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to easily develop new calculation routines or modify existing routines with little knowledge of programming techniques. By means of a simple mouse-click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework simple commands and functions, which are provided by the program, can be used, for example, to perform various normalization procedures or to output the results of the computations. For the clarity of the code, element symbols are used as variables initialized by the program automatically. CALCMIN does not set any boundaries in complexity of the code used, resulting in a wide range of possible applications. Thus, matrix and optimization methods can be included, for instance, to determine end member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as the automated read-in of output files created by the EMP. Furthermore, a subsequent filter routine enables the user to extract specific analyses in order to use them for a corresponding calculation routine. An event-driven, interactive operating mode was selected for easy application of the program. CALCMIN leads the user from the beginning to the end of the calculation process.
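The kind of normalization such routines perform can be illustrated compactly: convert oxide weight percent to molar cation and oxygen proportions, then rescale to a fixed number of oxygens. The Python sketch below does this for an olivine analysis on 4 oxygens; the molar masses are standard values, but the routine is a generic illustration rather than CALCMIN's code.

```python
# Structural-formula normalization from microprobe oxide wt% (olivine, 4 O).
OXIDES = {  # oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
    "SiO2": (60.08, 1, 2),
    "MgO":  (40.30, 1, 1),
    "FeO":  (71.84, 1, 1),
}

def structural_formula(wt_pct, oxygens=4.0):
    cations, oxy = {}, 0.0
    for oxide, wt in wt_pct.items():
        molar_mass, n_cat, n_ox = OXIDES[oxide]
        mol = wt / molar_mass
        cations[oxide] = mol * n_cat
        oxy += mol * n_ox
    scale = oxygens / oxy
    return {ox: round(n * scale, 3) for ox, n in cations.items()}

# Near-forsterite analysis: gives about Si 1.00, Mg 1.81, Fe 0.19 per 4 O.
print(structural_formula({"SiO2": 40.8, "MgO": 49.4, "FeO": 9.3}))
```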
CrasyDSE: A framework for solving Dyson–Schwinger equations☆
Huber, Markus Q.; Mitter, Mario
2012-01-01
Dyson–Schwinger equations are important tools for non-perturbative analyses of quantum field theories. For example, they are very useful for investigations in quantum chromodynamics and related theories. However, sometimes progress is impeded by the complexity of the equations. Thus automating parts of the calculations will certainly be helpful in future investigations. In this article we present a framework for such an automation based on a C++ code that can deal with a large number of Green functions. Since also the creation of the expressions for the integrals of the Dyson–Schwinger equations needs to be automated, we defer this task to a Mathematica notebook. We illustrate the complete workflow with an example from Yang–Mills theory coupled to a fundamental scalar field that has been investigated recently. As a second example we calculate the propagators of pure Yang–Mills theory. Our code can serve as a basis for many further investigations where the equations are too complicated to tackle by hand. It also can easily be combined with DoFun, a program for the derivation of Dyson–Schwinger equations.
Program summary
Program title: CrasyDSE
Catalogue identifier: AEMY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMY_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49030
No. of bytes in distributed program, including test data, etc.: 303958
Distribution format: tar.gz
Programming language: Mathematica 8 and higher, C++.
Computer: All on which Mathematica and C++ are available.
Operating system: All on which Mathematica and C++ are available (Windows, Unix, Mac OS).
Classification: 11.1, 11.4, 11.5, 11.6.
Nature of problem: Solve (large) systems of Dyson–Schwinger equations numerically.
Solution method: Create C++ functions in Mathematica to be used for the numeric code in C++. This code uses structures to handle large numbers of Green functions.
Unusual features: Provides a tool to convert Mathematica expressions into C++ expressions, including conversion of function names.
Running time: Depending on the complexity of the investigated system, solving the equations numerically can take seconds on a desktop PC to hours on a cluster.
PMID:25540463
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly... Simplified Entry functionality in the Automated Commercial Environment (ACE). Originally, the test was known...) test concerning Automated Commercial Environment (ACE) Simplified Entry (SE test) functionality is...
Devine, Emily Beth; Capurro, Daniel; van Eaton, Erik; Alfonso-Cristancho, Rafael; Devlin, Allison; Yanez, N. David; Yetisgen-Yildiz, Meliha; Flum, David R.; Tarczy-Hornoch, Peter
2013-01-01
Background: The field of clinical research informatics includes creation of clinical data repositories (CDRs) used to conduct quality improvement (QI) activities and comparative effectiveness research (CER). Ideally, CDR data are accurately and directly abstracted from disparate electronic health records (EHRs), across diverse health-systems. Objective: Investigators from Washington State's Surgical Care Outcomes and Assessment Program (SCOAP) Comparative Effectiveness Research Translation Network (CERTAIN) are creating such a CDR. This manuscript describes the automation and validation methods used to create this digital infrastructure. Methods: SCOAP is a QI benchmarking initiative. Data are manually abstracted from EHRs and entered into a data management system. CERTAIN investigators are now deploying Caradigm's Amalga™ tool to facilitate automated abstraction of data from multiple, disparate EHRs. Concordance is calculated to compare automatically abstracted data with manually abstracted data. Performance measures are calculated between Amalga and each parent EHR. Validation takes place in repeated loops, with improvements made over time. When automated abstraction reaches the current benchmark for abstraction accuracy (95%), it will 'go-live' at each site. Progress to Date: A technical analysis was completed at 14 sites. Five sites are contributing; the remaining sites prioritized meeting Meaningful Use criteria. Participating sites are contributing 15-18 unique data feeds, totaling 13 surgical registry use cases. Common feeds are registration, laboratory, transcription/dictation, radiology, and medications. Approximately 50% of 1,320 designated data elements are being automatically abstracted: 25% from structured data and 25% from text mining. Conclusion: In semi-automating data abstraction and conducting a rigorous validation, CERTAIN investigators will semi-automate data collection to conduct QI and CER, while advancing the Learning Healthcare System. PMID:25848565
Systematic review automation technologies
2014-01-01
Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends showing the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128
Applications of Automation Methods for Nonlinear Fracture Test Analysis
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, and 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
NASA Astrophysics Data System (ADS)
Pukhov, A.
2003-04-01
CalcHEP is a clone of the CompHEP project developed by the author outside of the CompHEP group. CompHEP/CalcHEP are packages for automatic calculation of elementary particle decay and collision properties in the lowest order of perturbation theory. The main idea behind the packages is to make the passage from the Lagrangian to the final distributions effective, with a high level of automation. Accordingly, the packages were created as menu-driven, user-friendly programs for calculations in interactive mode. On the other hand, long calculations should be done in a non-interactive regime. Thus, from the beginning, CompHEP has faced the problem of batch calculations. In CompHEP 33.23 the batch session was realized by means of an interactive menu that allowed the user to formulate the task for the batch; the non-interactive session was then launched. This approach is too restrictive and inflexible, and it leads to duplication in programming. In this article I discuss another approach to making an interactive program work in non-interactive mode. This approach was realized in CalcHEP 2.1, available at http://theory.sinp.msu.ru/~pukhov/calchep.html.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparisons against high-level theoretical calculations show that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
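For orientation, the canonical TST expression such a calculator evaluates is k(T) = kappa * (kB*T/h) * (Q_TS/Q_reactants) * exp(-E0/(kB*T)). The sketch below computes it with placeholder partition-function values; the numbers are invented for the example and are not AutoTST output.

```python
# Canonical transition state theory rate with placeholder inputs.
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def tst_rate(temp, q_ts, q_reactants, e0, kappa=1.0):
    """k(T) = kappa * (kB*T/h) * (Q_TS/Q_reactants) * exp(-E0/(kB*T))."""
    return (kappa * (KB * temp / H) * (q_ts / q_reactants)
            * math.exp(-e0 / (KB * temp)))

print(tst_rate(temp=1000.0, q_ts=1.0e30, q_reactants=5.0e31, e0=1.5e-19))
```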
Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset.
Shirts, Michael R; Klein, Christoph; Swails, Jason M; Yin, Jian; Gilson, Michael K; Mobley, David L; Case, David A; Zhong, Ellen D
2017-01-01
We describe our efforts to prepare common starting structures and models for the SAMPL5 blind prediction challenge. We generated the starting input files and single-configuration potential energies for the host-guest systems in the SAMPL5 blind prediction challenge for the GROMACS, AMBER, LAMMPS, DESMOND and CHARMM molecular simulation programs. All conversions were fully automated from the originally prepared AMBER input files using a combination of the ParmEd and InterMol conversion programs. We find that the energy calculations for all molecular dynamics engines for this molecular set agree to better than 0.1% relative absolute energy for all energy components, and in most cases an order of magnitude better, when reasonable choices are made for different cutoff parameters. However, there are some surprising sources of statistically significant differences. Most importantly, different choices of Coulomb's constant between programs are one of the largest sources of discrepancies in energies. We discuss the measures required to get good agreement in the energies for equivalent starting configurations between the simulation programs, and the energy differences that occur when simulations are run with program-specific default simulation parameter values. Finally, we discuss what was required to automate this conversion and comparison.
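The spirit of the comparison can be captured in a few lines: check that matching energy components from two engines agree to within 0.1% relative difference. The component names and numbers below are invented for the sketch; they are not SAMPL5 values.

```python
# Compare per-term energies from two MD engines against a relative tolerance.
def relative_differences(e1, e2):
    diffs = {}
    for term in e1.keys() & e2.keys():
        denom = max(abs(e1[term]), abs(e2[term]), 1e-12)
        diffs[term] = abs(e1[term] - e2[term]) / denom
    return diffs

gromacs = {"bond": 1052.3, "angle": 2310.9, "coulomb": -35210.7}  # invented
amber = {"bond": 1052.4, "angle": 2310.8, "coulomb": -35213.2}    # invented
for term, rel in sorted(relative_differences(gromacs, amber).items()):
    print(f"{term:8s} rel. diff = {rel:.2e}  {'OK' if rel < 1e-3 else 'FLAG'}")
```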
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... CBP with authority to conduct limited test programs or procedures designed to evaluate planned... aspects of this test, including the design, conduct and implementation of the test, in order to determine... Environment (ACE); Announcement of National Customs Automation Program Test of Automated Procedures for In...
Probst, Yasmine; Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh
2016-07-28
Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed. Automated modeling tools can streamline the modeling process for dietary intervention trials, ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve desired results. The DMT was found to be a valid automated tool producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used.
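A reduced version of the constrained-optimization core reads as follows: choose food-group servings minimizing squared deviation from target servings subject to an energy constraint. The food groups, energy densities, and targets are invented for the sketch, and SciPy's SLSQP stands in for whatever solver the DMT uses.

```python
# Toy dietary model: match target servings subject to a daily energy total.
import numpy as np
from scipy.optimize import minimize

energy_per_serving = np.array([320.0, 150.0, 80.0, 600.0])   # kJ, invented
target_servings = np.array([6.0, 2.5, 5.0, 2.0])             # invented
energy_target = 8700.0                                        # kJ/day

def deviation(x):
    return float(np.sum((x - target_servings) ** 2))

constraints = [{"type": "eq",
                "fun": lambda x: x @ energy_per_serving - energy_target}]
result = minimize(deviation, x0=target_servings, method="SLSQP",
                  bounds=[(0.0, None)] * 4, constraints=constraints)
print(np.round(result.x, 2))
```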
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
... Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of... Automated Commercial Environment (ACE). The test's participant selection criteria are modified to reflect... (NCAP) test concerning Automated Commercial Environment (ACE) Simplified Entry functionality (Simplified...
NASA Astrophysics Data System (ADS)
Chęciński, Jakub; Frankowski, Marek
2016-10-01
We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows fast, error-proof, and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized application of excitations. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI for automated creation of Matlab scripts suitable for analyzing the data with Fourier and wavelet transforms as well as user-defined operations.
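The template idea behind such Mif generation can be sketched in a few lines of Python: substitute sweep parameters into a file template and write one configuration per combination. The Mif fragment below is schematic, not a complete OOMMF problem description, and the parameter names are invented.

```python
# Schematic template-driven generation of parameter-sweep input files.
from itertools import product
from pathlib import Path

TEMPLATE = """# MIF 2.1
# Illustrative fragment only; a real Mif defines atlas, mesh, energies, ...
Parameter cell_size {cell}e-9
Parameter applied_field {field}
"""

for cell, field in product([2, 4], [0.05, 0.10]):
    Path(f"sim_cell{cell}nm_H{field}.mif").write_text(
        TEMPLATE.format(cell=cell, field=field))
```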
Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S
2012-03-01
Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, an automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and the Standardized Occupation and Industry Coding software program. We calculated agreement between coding methods of classification into major Census occupational groups. Automated coding software assigned codes to 71% of occupations and 76% of industries. Of this subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding. However, some manual coding is required to code incomplete information. We found substantial variability between coders in the assignment of occupations although not as large for major groups.
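For reference, the chance-corrected agreement statistic commonly reported for coder comparisons, Cohen's kappa, is (p_o - p_e)/(1 - p_e), with p_o the observed agreement and p_e the agreement expected from the marginal code frequencies. A minimal Python version follows; the example codes are invented.

```python
# Cohen's kappa for two coders' category assignments.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum(count_a[k] * count_b[k]
                for k in count_a.keys() & count_b.keys()) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

automated = ["mgr", "svc", "prod", "svc", "mgr", "farm"]   # invented codes
manual = ["mgr", "svc", "prod", "prod", "mgr", "farm"]
print(round(cohens_kappa(automated, manual), 3))           # 0.778
```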
Can automation in radiotherapy reduce costs?
Massaccesi, Mariangela; Corti, Michele; Azario, Luigi; Balducci, Mario; Ferro, Milena; Mantini, Giovanna; Mattiucci, Gian Carlo; Valentini, Vincenzo
2015-01-01
Computerized automation is likely to play an increasingly important role in radiotherapy. The objective of this study was to report the results of the first part of a program to implement a model for economic evaluation based on the micro-costing method. To test the efficacy of the model, the financial impact of the introduction of an automation tool was estimated. A single- and multi-center validation of the model by a prospective collection of data is planned as the second step of the program. The model was implemented by using an interactive spreadsheet (Microsoft Excel, 2010). The variables to be included were identified across three components: productivity, staff, and equipment. To calculate staff requirements, the workflow of the Gemelli ART center was mapped out and relevant workload measures were defined. Profit and loss, productivity, and staffing were identified as significant outcomes. Results were presented in terms of earnings before interest and taxes (EBIT). Three different scenarios were hypothesized: the baseline situation at Gemelli ART (scenario 1); reduction by 2 minutes of the average duration of treatment fractions (scenario 2); and increased incidence of advanced treatment modalities (scenario 3). By using the model, predicted EBIT values for each scenario were calculated across a period of eight years (from 2015 to 2022). For both scenarios 2 and 3, costs are expected to increase slightly compared with the baseline situation, mainly owing to a small increase in clinical personnel costs. However, in both cases EBIT values are more favorable than in the baseline situation (EBIT values: scenario 1, 27%; scenario 2, 30%; scenario 3, 28% of revenues). A model based on the micro-costing method was able to estimate the financial consequences of the introduction of an automation tool in our radiotherapy department. A prospective collection of data at Gemelli ART and in a consortium of centers is currently under way to prospectively validate the model.
SU-G-BRB-05: Automation of the Photon Dosimetric Quality Assurance Program of a Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebron, S; Lu, B; Yan, G
Purpose: To develop an automated method to calculate linear accelerator (LINAC) photon radiation field size, flatness, symmetry, output, and beam quality in a single delivery for flattened (FF) and flattening-filter-free (FFF) beams using an ionization chamber array. Methods: The proposed method consists of three control points that deliver 30×30, 10×10 and 5×5 cm² fields (FF or FFF) in a step-and-shoot sequence where the number of monitor units is weighted for each field size. The IC Profiler (Sun Nuclear Inc.) with 5 mm detector spacing was used for this study. The corrected counts (CCs) were calculated, and the locations of the maxima and minima of the first-order gradient determined the data belonging to each subfield. Then, all CCs for each field size are summed in order to obtain the final profiles. For each profile, the radiation field size, symmetry, flatness, output factor and beam quality were calculated. For the field size calculation, a parameterized gradient method was used. For method validation, profiles were collected in the detector array, both individually and as part of the step-and-shoot plan, with 9.9 cm buildup for FF and FFF beams at 90 cm source-to-surface distance. The same data were collected with the device (plus buildup) placed on a movable platform to achieve a 1 mm resolution. Results: The differences between the dosimetric quantities calculated from both deliveries, individually and step-and-shoot, were within 0.31±0.20% and 0.04±0.02 mm. The differences between the calculated field sizes with 5 mm and 1 mm resolution were ±0.1 mm. Conclusion: The proposed single-delivery method proved to be simple and efficient in automating the photon dosimetric monthly and annual quality assurance.
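The gradient-based edge detection at the heart of the field-size calculation can be sketched simply: take the beam edges at the extrema of the profile's first-order gradient, and the field size as the distance between them. The synthetic profile below is illustrative, and the parameterized fit around each extremum mentioned in the abstract is omitted.

```python
# Field size from the extrema of a profile's first-order gradient.
import numpy as np

x = np.linspace(-10.0, 10.0, 401)                 # detector positions, cm
profile = (1.0 / (1.0 + np.exp(-8.0 * (x + 5.0)))
           - 1.0 / (1.0 + np.exp(-8.0 * (x - 5.0))))  # synthetic 10 cm field

gradient = np.gradient(profile, x)
left_edge = x[np.argmax(gradient)]                # steepest rising edge
right_edge = x[np.argmin(gradient)]               # steepest falling edge
print(f"field size = {right_edge - left_edge:.2f} cm")
```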
Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking
NASA Technical Reports Server (NTRS)
Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward
2011-01-01
To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk calculation procedure.
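The interfacing pattern, writing a parameter file, launching an external dose-calculation executable, and parsing its output, can be sketched generically in Python. Everything below (executable name, file formats, exam parameters) is hypothetical and is not the real PCXMC or ImPACT interface.

```python
# Generic wrapper pattern for an external calculation program (hypothetical).
from pathlib import Path

def run_exam(exam_id, kvp, mas, projection):
    params = Path(f"{exam_id}.in")
    params.write_text(f"kvp={kvp}\nmas={mas}\nprojection={projection}\n")
    # subprocess.run(["dose_calc", str(params)], check=True)  # hypothetical
    output = Path(f"{exam_id}.out")
    if not output.exists():                       # nothing to parse yet
        return None
    return {key: float(val) for key, val in
            (line.split("=") for line in output.read_text().splitlines())}

print(run_exam("exam001", kvp=120, mas=5.0, projection="AP"))
```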
"Hyperstat": an educational and working tool in epidemiology.
Nicolosi, A
1995-01-01
The work of a researcher in epidemiology is based on studying literature, planning studies, gathering data, analyzing data and writing results. The researcher therefore needs to perform more or less simple calculations, to consult or quote literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of the researcher's work in an integrated fashion. A hypertextual system was developed which supports different stages of the epidemiologist's work. It combines database management, statistical analysis or planning, and literature searches. The software was developed on Apple Macintosh by using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated Table of Contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". In order to perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can go from a procedure to other cards following the preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack he is consulting or operating. From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with automated table of contents, to statistical tables with automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons and mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, weigh alternatives, and run simulations. This program shares many features in common with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; the text is read by following links and opening new windows. The program is especially useful as an educational tool, directed to medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool. The program is also helpful in the work done at the desk, where the researcher examines results, consults literature, explores different analytic approaches, plans new studies, or writes grant proposals or scientific articles.
ERIC Educational Resources Information Center
Coleman, Thomas; Langberg, George
An experimental public school speech therapy program is described, which offers automated, programed instruction in sound production and auditory training. The experiment includes self-teaching methods, as well as utilization of paraprofessional personnel under the supervision of a qualified speech therapist. Although the automated program was…
Automated Design of Restraint Layer of an Inflatable Vessel
NASA Technical Reports Server (NTRS)
Spexarth, Gary
2007-01-01
A Mathcad computer program largely automates the design and analysis of the restraint layer (the primary load-bearing layer) of an inflatable vessel that consists of one or more sections having cylindrical, toroidal, and/or spherical shape(s). A restraint layer typically comprises webbing in the form of multiple straps. The design task includes choosing indexing locations along the straps, computing the load at every location in each strap, computing the resulting stretch at each location, and computing the amount of undersizing required of each strap so that, once the vessel is inflated and the straps thus stretched, the vessel can be expected to assume the desired shape. Prior to the development of this program, the design task was performed by use of a difficult-to-use spreadsheet program that required manual addition of rows and columns depending on the numbers of strap rows and columns of a given design. In contrast, this program is completely parametric and includes logic that automatically adds or deletes rows and columns as needed. With minimal input from the user, this program automatically computes indexing locations, strap lengths, undersizing requirements, and all design data required to produce detailed drawings and assembly procedures. It also generates textual comments that help the user understand the calculations.
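A minimal sketch of the per-location undersizing calculation that such a program automates, under a linear-elastic webbing assumption; the stiffness model and all values are hypothetical, not the program's actual formulation:

```python
def undersized_length(deployed_length_m, strap_load_kn, strap_stiffness_kn):
    """Fabrication length such that, once the vessel is inflated and the
    strap stretched, the strap reaches its deployed length.
    strap_stiffness is load per unit strain (EA), so strain = load / EA
    for linear-elastic webbing."""
    strain = strap_load_kn / strap_stiffness_kn
    return deployed_length_m / (1.0 + strain)

# Hypothetical strap: 2.50 m deployed, 8 kN load, EA = 400 kN
print(undersized_length(2.50, 8.0, 400.0))   # ~2.451 m, i.e. ~2% undersize
```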
NASA Astrophysics Data System (ADS)
Steposhina, S. V.; Fedonin, O. N.
2018-03-01
Relationships are developed that make it possible to automate the force calculation for surface plastic deformation (SPD) processing and thus to shorten the time required for the technological preparation of production.
An automated approach to design of solid rockets utilizing a special internal ballistics model
NASA Technical Reports Server (NTRS)
Sforzini, R. H.
1980-01-01
A pattern search technique is presented, which is utilized in a computer program that minimizes the sum of the squares of the differences, at various times, between a desired thrust-time trace and that calculated with a special mathematical internal ballistics model of a solid propellant rocket motor. The program is demonstrated by matching the thrust-time trace obtained from static tests of the first Space Shuttle SRM starting with input values of 10 variables which are, in general, 10% different from the as-built SRM. It is concluded that an excellent match is obtained.
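A minimal sketch of the matching idea, assuming a stand-in thrust model `model(params, t)` (the paper's special internal ballistics model is far more detailed) and a simple axis-wise pattern search:

```python
import numpy as np

def sum_sq_error(params, t, desired, model):
    """Objective: sum of squared differences between the desired
    thrust-time trace and the model-computed trace."""
    return float(np.sum((model(params, t) - desired) ** 2))

def pattern_search(x0, t, desired, model, step=0.1, shrink=0.5, tol=1e-6):
    """Axis-wise exploratory search with mesh refinement, in the spirit of
    Hooke-Jeeves; assumes nonzero parameters (relative steps)."""
    x = np.asarray(x0, float)
    fx = sum_sq_error(x, t, desired, model)
    while step > tol:
        improved = False
        for i in range(x.size):                 # one design variable at a time
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta * abs(x[i])   # relative exploratory move
                ft = sum_sq_error(trial, t, desired, model)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink                      # refine mesh when stuck
    return x, fx
```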
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may reduce the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a clinical specular microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, which has a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis, based on Fourier analysis, of a very large range of images of the corneal endothelium captured by CSM. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, based on well-known diffraction theory. Estimated cell densities of the corneal endothelium were obtained by running the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively high correlation was found.
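A minimal sketch of the Fourier route to cell density, assuming a square image of a roughly regular endothelial mosaic whose dominant spatial frequency shows up as a ring in the power spectrum; the hexagonal-packing conversion at the end is a textbook relation, not the paper's exact diffraction-theory calibration:

```python
import numpy as np

def cell_density_fft(img, pixel_size_mm):
    """Estimate endothelial cell density (cells/mm^2) from the radius of
    the dominant ring in the image power spectrum. Assumes a square image."""
    img = (img - img.mean()) / (img.std() + 1e-12)      # normalize lights/contrast
    F = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2  # power spectrum
    n = img.shape[0]
    y, x = np.indices(F.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)    # integer ring radius
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), F.ravel()) / np.maximum(counts, 1)
    k = radial[2:n // 2].argmax() + 2                   # ring radius, skipping DC
    spacing_mm = n * pixel_size_mm / k                  # dominant cell spacing
    return 2.0 / (np.sqrt(3.0) * spacing_mm ** 2)       # hexagonal-packing density
```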
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
DEPARTMENT OF HOMELAND SECURITY, U.S. Customs and Border Protection. Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image System (DIS) and Simplified Entry (SE); Correction. AGENCY: U.S. Customs and Border Protection, Department of Homeland Security.
Automated generation of lattice QCD Feynman rules
NASA Astrophysics Data System (ADS)
Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.
2009-12-01
The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to also calculate the derivatives of Feynman diagrams.
Program summary
Program title: HiPPy, HPsrc
Catalogue identifier: AEDX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2 (see Additional comments below)
No. of lines in distributed program, including test data, etc.: 513 426
No. of bytes in distributed program, including test data, etc.: 4 893 707
Distribution format: tar.gz
Programming language: Python, Fortran95
Computer: HiPPy: single-processor workstations. HPsrc: single-processor workstations and MPI-enabled multi-processor systems
Operating system: HiPPy: any for which Python v2.5.x is available. HPsrc: any for which a standards-compliant Fortran95 compiler is available
Has the code been vectorised or parallelised?: Yes
RAM: Problem specific, typically less than 1 GB for either code
Classification: 4.4, 11.5
Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions.
Solution method: An automated expansion method implemented in Python (HiPPy) and code to use the expansions to generate Feynman rules in Fortran95 (HPsrc).
Restrictions: No general restrictions. Specific restrictions are discussed in the text.
Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code, or of modifications of it, cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us.
Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run.
References:
[1] A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026.
[2] M. Lüscher, P. Weisz, Efficient numerical techniques for perturbative lattice gauge theory computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.
NASA Astrophysics Data System (ADS)
Zlatanovic, Nikola; Milovanovic, Irina; Cotric, Jelena
2014-05-01
Drainage basins are for the most part ungauged or poorly gauged, not only in Serbia but in most parts of the world, usually due to insufficient funds, but also due to the decommissioning of river gauges in upland catchments in order to focus on downstream areas, which are more populated. Very often, design discharges are needed for these streams or rivers where no streamflow data are available, for various applications. Examples include river training works for flood protection or erosion control, and the design of culverts, water supply facilities, small hydropower plants, etc. The estimation of discharges in ungauged basins is most often performed using rainfall-runoff models, whose parameters rely heavily on geomorphometric attributes of the basin (e.g. catchment area, elevation, slopes of channels and hillslopes). The calculation of these, as well as other parameters, is most often done in GIS (Geographic Information System) software environments. This study deals with the application of freely available and open-source software and datasets for automating rainfall-runoff analysis of ungauged basins using methodologies currently in use in hydrological practice. The R programming language was used for scripting and automating the hydrological calculations, coupled with SAGA GIS (System for Automated Geoscientific Analyses) for geocomputing functions and terrain analysis. Datasets used in the analyses include the freely available SRTM (Shuttle Radar Topography Mission) terrain data, CORINE (Coordination of Information on the Environment) Land Cover data, as well as soil maps and rainfall data. The choice of free and open-source software and datasets makes the project ideal for academic and research purposes and cross-platform projects. The geomorphometric module was tested on more than 100 catchments throughout Serbia and compared to manually calculated values (using topographic maps). The discharge estimation module was tested on 21 catchments where data were available and compared to results obtained by frequency analysis of annual maximum discharges. The geomorphometric module of the calculation system showed excellent results, saving a great deal of time that would otherwise have been spent on manual processing of geospatial data. The automated analysis presented in this study will enable much quicker hydrologic analysis on multiple watersheds, providing a platform for further research into the spatial variability of runoff.
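As a minimal illustration of the kind of hydrological calculation such a script chain automates (the study's actual R/SAGA workflow is far richer), a rational-method peak-discharge estimate from basic catchment attributes, with hypothetical parameter values:

```python
def rational_peak_discharge(runoff_coeff, intensity_mm_per_h, area_km2):
    """Rational method: Q = C * i * A, returned in m^3/s.
    With i in mm/h and A in km^2, Q = 0.278 * C * i * A."""
    return 0.278 * runoff_coeff * intensity_mm_per_h * area_km2

# Hypothetical ungauged catchment: C = 0.35, i = 40 mm/h, A = 12.5 km^2
q_peak = rational_peak_discharge(0.35, 40.0, 12.5)   # ~48.6 m^3/s
```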
Minor, K S; Willits, J A; Marggraf, M P; Jones, M N; Lysaker, P H
2018-04-25
Conveying information cohesively is an essential element of communication that is disrupted in schizophrenia. These disruptions are typically expressed through disorganized symptoms, which have been linked to neurocognitive, social cognitive, and metacognitive deficits. Automated analysis can objectively assess disorganization within sentences, between sentences, and across paragraphs by comparing explicit communication to a large text corpus. Little work in schizophrenia has tested: (1) links between disorganized symptoms measured via automated analysis and neurocognition, social cognition, or metacognition; and (2) whether automated analysis explains incremental variance in cognitive processes beyond clinician-rated scales. Disorganization was measured in schizophrenia (n = 81) with Coh-Metrix 3.0, an automated program that calculates basic and complex language indices. Trained staff also assessed neurocognition, social cognition, metacognition, and clinician-rated disorganization. Findings showed that all three cognitive processes were significantly associated with at least one automated index of disorganization. When automated analysis was compared with a clinician-rated scale, it accounted for significant variance in neurocognition and metacognition beyond the clinician-rated measure. When combined, these two methods explained 28-31% of the variance in neurocognition, social cognition, and metacognition. This study illustrated how automated analysis can highlight the specific role of disorganization in neurocognition, social cognition, and metacognition. Generally, those with poor cognition also displayed more disorganization in their speech, making it difficult for listeners to process the essential information needed to tie the speaker's ideas together. Our findings showcase how implementing a mixed-methods approach in schizophrenia can explain substantial variance in cognitive processes.
Automated Program Analysis for Cybersecurity (APAC)
2016-07-14
Final technical report from Five Directions, Inc. for the Automated Program Analysis for Cybersecurity (APAC) program, July 2016 (contract FA8750-14-C-0050, program element 61101E; author: William Arbaugh). Glossary excerpts from the report: AC Team, Adversarial Challenge Team, responsible for creating malicious applications; APAC, Automated Program Analysis for Cybersecurity; BAE, BAE Systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
Modification of the National Customs Automation Program (NCAP) Test Regarding Reconciliation for Filing Certain Post-Importation Claims. U.S. Customs and Border Protection is modifying the NCAP Reconciliation prototype test to include the filing of post-importation claims. DATES: The test is modified to allow Reconciliation of post-importation preferential tariff...
47 CFR 73.4097 - EBS (now EAS) attention signals on automated programing systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
47 CFR 73.4097 (Title 47, Telecommunication; Federal Communications Commission): EBS (now EAS) attention signals on automated programing systems. See Public Notice dated March 1, 1979.
Automated structure determination of proteins with the SAIL-FLYA NMR method.
Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune
2007-01-01
The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to rapidly collect and fully automatically evaluate the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract, and for their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.
Automated nystagmus analysis. [on-line computer technique for eye data processing
NASA Technical Reports Server (NTRS)
Oman, C. M.; Allum, J. H. J.; Tole, J. R.; Young, L. R.
1973-01-01
Several methods have recently been used for on-line analysis of nystagmus. A digital computer program has been developed to accept sampled records of eye position, detect fast-phase components, and output cumulative slow-phase position, continuous slow-phase velocity, instantaneous fast-phase frequency, and other parameters. The slow-phase velocity is obtained by differentiation of the calculated cumulative position rather than of the original eye-movement record. In addition, a prototype analog device has been devised which calculates the velocity of the slow-phase component during caloric testing. Examples of clinical and research eye-movement records analyzed with these devices are shown.
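A minimal sketch of the cumulative slow-phase computation, assuming eye position sampled at a fixed rate; the simple velocity threshold for flagging fast phases is a hypothetical stand-in for the program's actual detection logic:

```python
import numpy as np

def cumulative_slow_phase(position_deg, fs_hz, fast_thresh=60.0):
    """Blank out fast-phase (saccadic) segments of an eye-position record
    and integrate the remaining slow-phase velocity.
    position_deg in degrees, fs_hz in Hz, fast_thresh in deg/s."""
    vel = np.gradient(position_deg) * fs_hz                    # instantaneous velocity
    slow = np.where(np.abs(vel) < fast_thresh, vel, 0.0)       # remove fast phases
    cum_pos = np.cumsum(slow) / fs_hz                          # cumulative slow-phase position
    spv = np.gradient(cum_pos) * fs_hz                         # slow-phase velocity
    return cum_pos, spv
```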
Aakre, Christopher Ansel; Kitson, Jaben E; Li, Man; Herasevich, Vitaly
2017-05-18
The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation, and the clerical burden of information retrieval makes this score ideal for automated calculation. The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with a subsequent usability evaluation study. First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. The overall mean (standard deviation, SD) time to complete a manual SOFA score calculation was 61.6 (33) s. Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians' needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as "apps." A user-centered design process and usability evaluation should be considered during creation of these tools. ©Christopher Ansel Aakre, Jaben E Kitson, Man Li, Vitaly Herasevich. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 18.05.2017.
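For illustration, one automatable SOFA component, the coagulation sub-score from platelet count, using the standard SOFA thresholds; the calculator described above covers all six organ systems plus EMR data retrieval:

```python
def sofa_coagulation(platelets_k_per_ul: float) -> int:
    """Coagulation sub-score: platelets x10^3/uL mapped to 0..4
    per the standard SOFA definition."""
    if platelets_k_per_ul < 20:
        return 4
    if platelets_k_per_ul < 50:
        return 3
    if platelets_k_per_ul < 100:
        return 2
    if platelets_k_per_ul < 150:
        return 1
    return 0

assert sofa_coagulation(135) == 1   # e.g. 135 x10^3/uL scores 1 point
```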
Beshears, David L.; Batsell, Stephen G.; Abercrombie, Robert K.; Scudiere, Matthew B.; White, Clifford P.
2007-12-04
An asset identification and information infrastructure management (AI3M) device having an automated identification technology system (AIT), a Transportation Coordinators' Automated Information for Movements System II (TC-AIMS II), a weigh-in-motion system (WIM-II), and an Automated Air Load Planning system (AALPS) all in electronic communication for measuring and calculating actual asset characteristics, either statically or in-motion, and further calculating an actual load plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Paramedir: A Tool for Programmable Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.; Bruton, W. M.
1982-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.
Laboratory automation in a functional programming language.
Runciman, Colin; Clare, Amanda; Harkness, Rob
2014-12-01
After some years of use in academic and research settings, functional languages are starting to enter the mainstream as an alternative to more conventional programming languages. This article explores one way to use Haskell, a functional programming language, in the development of control programs for laboratory automation systems. We give code for an example system, discuss some programming concepts that we need for this example, and demonstrate how the use of functional programming allows us to express and verify properties of the resulting code. © 2014 Society for Laboratory Automation and Screening.
Ikeya, Teppei; Terauchi, Tsutomu; Güntert, Peter; Kainosho, Masatsune
2006-07-01
Recently we have developed the stereo-array isotope labeling (SAIL) technique to overcome the conventional molecular size limitation in NMR protein structure determination by employing complete stereo- and regiospecific patterns of stable isotopes. SAIL sharpens signals and simplifies spectra without the loss of requisite structural information, thus making large classes of proteins newly accessible to detailed solution structure determination. The automated structure calculation program CYANA can efficiently analyze SAIL-NOESY spectra and calculate structures without manual analysis. Nevertheless, the original SAIL method might not be capable of determining the structures of proteins larger than 50 kDa or membrane proteins, for which the spectra are characterized by many broadened and overlapped peaks. Here we have carried out simulations of new SAIL patterns optimized for minimal relaxation and overlap, to evaluate the combined use of SAIL and CYANA for solving the structures of larger proteins and membrane proteins. The modified approach reduces the number of peaks to nearly half of that observed with uniform labeling, while still yielding well-defined structures and is expected to enable NMR structure determinations of these challenging systems.
Oligo Design: a computer program for development of probes for oligonucleotide microarrays.
Herold, Keith E; Rasooly, Avraham
2003-12-01
Oligonucleotide microarrays have demonstrated potential for the analysis of gene expression, genotyping, and mutational analysis. Our work focuses primarily on the detection and identification of bacteria based on known short sequences of DNA. Oligo Design, the software described here, automates several design aspects that enable the improved selection of oligonucleotides for use with microarrays for these applications. Two major features of the program are: (i) a tiling algorithm for the design of short overlapping temperature-matched oligonucleotides of variable length, which are useful for the analysis of single nucleotide polymorphisms and (ii) a set of tools for the analysis of multiple alignments of gene families and related short DNA sequences, which allow for the identification of conserved DNA sequences for PCR primer selection and variable DNA sequences for the selection of unique probes for identification. Note that the program does not address the full genome perspective but, instead, is focused on the genetic analysis of short segments of DNA. The program is Internet-enabled and includes a built-in browser and the automated ability to download sequences from GenBank by specifying the GI number. The program also includes several utilities, including audio recital of a DNA sequence (useful for verifying sequences against a written document), a random sequence generator that provides insight into the relationship between melting temperature and GC content, and a PCR calculator.
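A minimal sketch of two of the ideas above, under common textbook approximations: the Wallace rule for melting temperature and a greedy tiler that emits overlapping, temperature-matched oligonucleotides of variable length (window and length parameters are hypothetical):

```python
def wallace_tm(oligo: str) -> float:
    """Wallace rule: Tm = 2(A+T) + 4(G+C), a standard approximation
    for short oligonucleotides."""
    s = oligo.upper()
    return 2.0 * (s.count("A") + s.count("T")) + 4.0 * (s.count("G") + s.count("C"))

def tile_oligos(seq, tm_lo=54.0, tm_hi=60.0, overlap=5, min_len=15, max_len=30):
    """Greedily emit overlapping oligos of variable length whose Tm falls
    in [tm_lo, tm_hi]; positions with no in-window oligo are skipped."""
    tiles, start = [], 0
    while start + min_len <= len(seq):
        chosen = None
        for length in range(min_len, min(max_len, len(seq) - start) + 1):
            oligo = seq[start:start + length]
            if tm_lo <= wallace_tm(oligo) <= tm_hi:
                chosen = oligo
                break
        if chosen:
            tiles.append((start, chosen))
            start += len(chosen) - overlap   # step forward, keeping the overlap
        else:
            start += 1                       # no temperature-matched oligo here
    return tiles
```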
Crew-integration and Automation Testbed (CAT)Program Overview and RUX06 Introduction
2006-09-20
Crew-integration and Automation Testbed (CAT) Program Overview and RUX06 Introduction, 26-27 July 2006, presented by Patrick Nunez, Terry Tierney, and Brian Novak; distribution unlimited. Recoverable content from the briefing: the capstone CAT experiment evaluates the effectiveness of the CAT program in improving the performance and/or reducing the workload for a mounted...
Automated registration of tail bleeding in rats.
Johansen, Peter B; Henriksen, Lars; Andresen, Per R; Lauritzen, Brian; Jensen, Kåre L; Juhl, Trine N; Tranholm, Mikael
2008-05-01
An automated system for registration of tail bleeding in rats, using a camera and a user-designed PC-based software program, has been developed. The live and processed images are displayed on the screen and are exported together with a text file for later statistical processing of the data, allowing calculation of, e.g., the number of bleeding episodes, bleeding times, and bleeding areas. Proof-of-principle was achieved when the camera captured the blood stream after infusion of rat whole blood into saline. Suitability was assessed by recording bleeding profiles in heparin-treated rats, demonstrating that the system was able to capture on/off bleedings and that the data transfer and analysis were conducted successfully. Bleeding profiles were then recorded visually by two independent observers simultaneously with the automated recordings after tail transection in untreated rats. Linear relationships were found in the number of bleedings, although with a statistically significant difference between observers in the recording of bleeding episodes. Also, the bleeding time was longer for visual than for automated recording. No correlation was found between blood loss and bleeding time in untreated rats, but a correlation was suggested in heparinized rats. Finally, the blood loss correlated with the automated recording of bleeding area. In conclusion, the automated system has proven suitable for replacing visual recording of tail bleedings in rats. Inter-observer differences can be eliminated, monotonous repetitive work avoided, and a higher throughput of animals achieved in less time. The automated system will lead to an increased understanding of the nature of bleeding following tail transection in different rodent models.
DOT National Transportation Integrated Search
1982-07-01
In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...
DOT National Transportation Integrated Search
1981-07-01
In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...
Spencer, Kirk T; Weinert, Lynn; Avi, Victor Mor; Decara, Jeanne; Lang, Roberto M
2002-12-01
The Tei index is a combined measurement of systolic and diastolic left ventricular (LV) performance and may be more useful for the diagnosis of global cardiac dysfunction than either systolic or diastolic measures alone. We sought to determine whether the Tei index could be accurately calculated from LV area waveforms generated with automated border detection. Twenty-four patients were studied in 3 groups: systolic dysfunction, diastolic dysfunction, and normal. The Tei index was calculated both from Doppler tracings and from analysis of LV area waveforms. Excellent agreement was found between Doppler-derived timing intervals and the Tei index with those obtained from averaged LV area waveforms. A significant difference was seen in the Tei index, computed with both Doppler and automated border detection techniques, between the normal group and those with LV systolic dysfunction and subjects with isolated diastolic dysfunction. This study validates the use of LV area waveforms for the automated calculation of the Tei index.
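For reference, the Tei index combines isovolumic contraction time (ICT), isovolumic relaxation time (IRT), and ejection time (ET), and in practice is computed from two measurable intervals; a minimal sketch (variable names are illustrative):

```python
def tei_index(mv_close_to_open_s: float, ejection_time_s: float) -> float:
    """Tei index = (ICT + IRT) / ET = (a - b) / b, where a is the interval
    from mitral valve closure to reopening and b is the ejection time."""
    return (mv_close_to_open_s - ejection_time_s) / ejection_time_s

# Example: a = 0.42 s, b = 0.30 s -> Tei = 0.40
# (values near 0.4 are commonly cited as normal for the left ventricle)
print(tei_index(0.42, 0.30))
```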
Automation of NMR structure determination of proteins.
Altieri, Amanda S; Byrd, R Andrew
2004-10-01
The automation of protein structure determination using NMR is coming of age. The tedious processes of resonance assignment, followed by assignment of NOE (nuclear Overhauser enhancement) interactions (now intertwined with structure calculation), assembly of input files for structure calculation, intermediate analyses of incorrect assignments and bad input data, and finally structure validation are all being automated with sophisticated software tools. The robustness of the different approaches continues to deal with problems of completeness and uniqueness; nevertheless, the future is very bright for automation of NMR structure generation to approach the levels found in X-ray crystallography. Currently, near completely automated structure determination is possible for small proteins, and the prospect for medium-sized and large proteins is good. Copyright 2004 Elsevier Ltd.
Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.
2012-01-01
An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
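The core trick described above, operator overloading so that a single generic calculation also propagates derivatives, can be illustrated with a dual-number sketch; the actual Trilinos packages use C++ templates, and everything below (names included) is an illustrative Python stand-in:

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: overloaded arithmetic makes a
    generic calculation also compute its derivative (forward-mode AD)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def residual(x):
    """A 'simulation' kernel written generically: works for floats or Duals."""
    return 3.0 * x * x + 2.0 * x + 1.0

x = Dual(2.0, 1.0)      # seed dx/dx = 1
r = residual(x)
print(r.val, r.der)     # 17.0 and d/dx = 6x + 2 = 14.0
```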
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
Aviation Safety/Automation Program Conference
NASA Technical Reports Server (NTRS)
Morello, Samuel A. (Compiler)
1990-01-01
The Aviation Safety/Automation Program Conference - 1989 was sponsored by the NASA Langley Research Center on 11 to 12 October 1989. The conference, held at the Sheraton Beach Inn and Conference Center, Virginia Beach, Virginia, was chaired by Samuel A. Morello. The primary objective of the conference was to ensure effective communication and technology transfer by providing a forum for technical interchange of current operational problems and program results to date. The Aviation Safety/Automation Program has as its primary goal to improve the safety of the national airspace system through the development and integration of human-centered automation technologies for aircraft crews and air traffic controllers.
An automated dose tracking system for adaptive radiation therapy.
Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J
2018-02-01
The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Data from patient images were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, the daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
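A minimal sketch of the dose-mapping and accumulation step, assuming DIR has already produced per-fraction displacement fields that map reference voxels into each daily image; the array layout and field convention are hypothetical:

```python
import numpy as np

def accumulate_dose(daily_doses, displacement_fields, ref_shape):
    """Map each daily dose grid onto the reference anatomy through its
    displacement field (nearest-neighbor gather) and sum point-wise."""
    total = np.zeros(ref_shape)
    zi, yi, xi = np.indices(ref_shape)
    for dose, dvf in zip(daily_doses, displacement_fields):
        # dvf[..., :] holds (dz, dy, dx) pointing from each reference voxel
        # into the daily image; round and clip for a nearest-neighbor lookup.
        z = np.clip(np.round(zi + dvf[..., 0]).astype(int), 0, dose.shape[0] - 1)
        y = np.clip(np.round(yi + dvf[..., 1]).astype(int), 0, dose.shape[1] - 1)
        x = np.clip(np.round(xi + dvf[..., 2]).astype(int), 0, dose.shape[2] - 1)
        total += dose[z, y, x]
    return total
```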
Pressure Ratio to Thermal Environments
NASA Technical Reports Server (NTRS)
Lopez, Pedro; Wang, Winston
2012-01-01
The pressure ratio to thermal environments (PRatTlE.pl) program is a Perl-language code that estimates heating at requested body-point locations by scaling the heating at a reference location by a pressure-ratio factor. The pressure-ratio factor is the ratio of the local pressures at the requested point and at the reference point, obtained from CFD (computational fluid dynamics) solutions. This innovation provides pressure-ratio-based thermal environments in an automated and traceable manner. Previously, the pressure-ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input-file format verification, which provides clear visibility into the input data structure and intermediate calculations.
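A minimal sketch of the scaling rule described above, with all numerical inputs hypothetical:

```python
def pressure_ratio_heating(q_ref, p_ref, p_local):
    """Scale reference-point heating by the local-to-reference pressure
    ratio taken from CFD solutions: q_local = q_ref * (p_local / p_ref)."""
    return q_ref * (p_local / p_ref)

# Hypothetical body point: reference heating 12.0 W/cm^2 at p_ref = 8.0 kPa,
# local CFD pressure 6.0 kPa -> estimated heating 9.0 W/cm^2
print(pressure_ratio_heating(12.0, 8.0, 6.0))
```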
NASA Astrophysics Data System (ADS)
Hayrapetyan, David B.; Hovhannisyan, Levon; Mantashyan, Paytsar A.
2013-04-01
The analysis of complex spectra is a topical problem in modern science. This work is devoted to the creation of a software package that reads spectra in different formats, possesses a dynamic knowledge database with a self-study mechanism, and performs automated analysis of spectral composition based on the knowledge database by applying certain algorithms. The software package uses hyper-spherical random search algorithms, gradient algorithms, and genetic search algorithms as its searching systems. Raman and IR spectra of diamond-like carbon (DLC) samples were analyzed with the elaborated program. After processing the data, the program immediately displays all the calculated parameters of the DLC.
Automated Reasoning CICT Program/Intelligent Systems Project ATAC-PRT Review
NASA Technical Reports Server (NTRS)
Morris, Robert; Smith, Ben
2003-01-01
An overview is presented of the Automated Reasoning CICT Program/Intelligent Systems project. Automated reasoning technology will help NASA missions by increasing the amount of science achieved, ensuring safety of spacecraft and surface explorers, and by enabling more robust mission operations.
NASA Systems Autonomy Demonstration Program - A step toward Space Station automation
NASA Technical Reports Server (NTRS)
Starks, S. A.; Rundus, D.; Erickson, W. K.; Healey, K. J.
1987-01-01
This paper addresses a multiyear NASA program, the Systems Autonomy Demonstration Program (SADP), whose main objectives include the development, integration, and demonstration of automation technology in Space Station flight and ground support systems. The role of automation in the Space Station is reviewed, and the main players in SADP and their roles are described. The core research and technology being promoted by SADP are discussed, and a planned 1988 milestone demonstration of the automated monitoring, operation, and control of a complete mission operations subsystem is addressed.
Development of an automated film-reading system for ballistic ranges
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution and is equal to or better than that of the manual readings. The effects of film-reading and facility-calibration errors on the calculated aerodynamic coefficients are discussed.
Automation and robotics for the National Space Program
NASA Technical Reports Server (NTRS)
1985-01-01
The emphasis on automation and robotics for augmenting human-centered systems on the space station is discussed. How automation and robotics can amplify the capabilities of humans is detailed, and a detailed development program for the space station is outlined.
Automated Speech Rate Measurement in Dysarthria
ERIC Educational Resources Information Center
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-01-01
Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…
Model-centric distribution automation: Capacity, reliability, and efficiency
Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...
2016-02-26
A series of analyses, along with field validations, that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
NASA Astrophysics Data System (ADS)
Jiang, Luan; Ling, Shan; Li, Qiang
2016-03-01
Cardiovascular diseases are becoming a leading cause of death all over the world. The cardiac function could be evaluated by global and regional parameters of left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of LV in short axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of LV of each slice at ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, with the advantages of the continuity of the boundaries of LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of LV based on subjective evaluation.
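A minimal sketch of the dynamic-programming machinery underlying such boundary delineation, for a single boundary in polar coordinates (rows are angles, columns are candidate radii; the cost image would encode the gradient-based external costs). The paper's dual version delineates the endocardial and epicardial boundaries synchronously:

```python
import numpy as np

def dp_boundary(cost, max_step=1):
    """Minimum-cost radial path through an (n_angles x n_radii) cost image,
    allowing the radius to change by at most max_step per angular row."""
    n_ang, n_rad = cost.shape
    acc = cost.copy()
    back = np.zeros((n_ang, n_rad), dtype=int)
    for i in range(1, n_ang):
        for r in range(n_rad):
            lo, hi = max(0, r - max_step), min(n_rad, r + max_step + 1)
            prev = acc[i - 1, lo:hi]
            back[i, r] = lo + int(prev.argmin())   # best predecessor radius
            acc[i, r] += prev.min()
    # Backtrack from the cheapest endpoint to recover the boundary radii.
    path = [int(acc[-1].argmin())]
    for i in range(n_ang - 1, 0, -1):
        path.append(back[i, path[-1]])
    return np.array(path[::-1])
```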
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Hambright, R. N.; Nedungadi, A.; Mcfayden, G. M.; Tsuchida, M. S.
1989-01-01
A significant emphasis upon automation within the Space Biology Initiative hardware appears justified in order to conserve crew labor and crew training effort. Two generic forms of automation were identified: automation of data and information handling and decision making, and the automation of material handling, transfer, and processing. The use of automatic data acquisition, expert systems, robots, and machine vision will increase the volume of experiments and quality of results. The automation described may also influence efforts to miniaturize and modularize the large array of SBI hardware identified to date. The cost and benefit model developed appears to be a useful guideline for SBI equipment specifiers and designers. Additional refinements would enhance the validity of the model. Two NASA automation pilot programs, 'The Principal Investigator in a Box' and 'Rack Mounted Robots' were investigated and found to be quite appropriate for adaptation to the SBI program. There are other in-house NASA efforts that provide technology that may be appropriate for the SBI program. Important data is believed to exist in advanced medical labs throughout the U.S., Japan, and Europe. The information and data processing in medical analysis equipment is highly automated and future trends reveal continued progress in this area. However, automation of material handling and processing has progressed in a limited manner because the medical labs are not affected by the power and space constraints that Space Station medical equipment is faced with. Therefore, NASA's major emphasis in automation will require a lead effort in the automation of material handling to achieve optimal crew utilization.
Lee, Woonghee; Kim, Jin Hae; Westler, William M; Markley, John L
2011-06-15
PONDEROSA (Peak-picking Of Noe Data Enabled by Restriction of Shift Assignments) accepts input information consisting of a protein sequence, backbone and sidechain NMR resonance assignments, and 3D-NOESY ((13)C-edited and/or (15)N-edited) spectra, and returns assignments of NOESY crosspeaks, distance and angle constraints, and a reliable NMR structure represented by a family of conformers. PONDEROSA incorporates and integrates external software packages (TALOS+, STRIDE and CYANA) to carry out different steps in the structure determination. PONDEROSA implements internal functions that identify and validate NOESY peak assignments and assess the quality of the calculated three-dimensional structure of the protein. The robustness of the analysis results from PONDEROSA's hierarchical processing steps that involve iterative interaction among the internal and external modules. PONDEROSA supports a variety of input formats: SPARKY assignment table (.shifts) and spectrum file formats (.ucsf), XEASY proton file format (.prot), and NMR-STAR format (.star). To demonstrate the utility of PONDEROSA, we used the package to determine 3D structures of two proteins: human ubiquitin and Escherichia coli iron-sulfur scaffold protein variant IscU(D39A). The automatically generated structural constraints and ensembles of conformers were as good as or better than those determined previously by much less automated means. The program, in the form of binary code along with tutorials and reference manuals, is available at http://ponderosa.nmrfam.wisc.edu/.
Program for improved electrical harness documentation and fabrication
NASA Technical Reports Server (NTRS)
1971-01-01
Computer program provides automated print-out of harness interconnection table and automated cross-check of reciprocal pin/connector assignments, and improves accuracy and reliability of final documented data. Programs and corresponding library tapes are successfully and continuously employed on Nimbus spacecraft programs.
1981-06-30
[Excerpt from a military manpower-management report: manpower needs are assessed as to quantity, quality, and timing, and all internal functions of the personnel service are tapped to help meet these ends. The remainder is a glossary of program acronyms, including ACOS (Automated Computation of Service), ACQ (Acquisition), ARTEP (Army Training and Evaluation Program), ASI (Additional Skill Identifier), and ASVAB.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This technical note describes the current capabilities and availability of the Automated Dredging and Disposal Alternatives Management System (ADDAMS). The technical note replaces the earlier Technical Note EEDP-06-12, which should be discarded. Planning, design, and management of dredging and dredged material disposal projects often require complex or tedious calculations or involve complex decision-making criteria. In addition, the evaluations often must be done for several disposal alternatives or disposal sites. ADDAMS is a personal computer (PC)-based system developed to assist in making such evaluations in a timely manner. ADDAMS contains a collection of computer programs (applications) designed to assist in managing dredging projects. This technical note describes the system, currently available applications, mechanisms for acquiring and running the system, and provisions for revision and expansion.
Streamlining resummed QCD calculations using Monte Carlo integration
Farhi, David; Feige, Ilya; Freytsis, Marat; ...
2016-08-18
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MadGraph [1], Alpgen [2] or Sherpa [3]. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions, producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e+e- two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Furthermore, the attached code can be used to modify MadGraph to export the relevant LO hard functions and color structures for arbitrary processes.
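To make the weighting idea concrete, here is a minimal sketch, assuming invented event data and a deliberately schematic Sudakov-style kernel; the real calculation convolves jet, soft, and beam functions rather than the caricature used here.

```python
import numpy as np

# Toy sketch: combine unweighted partonic "hard events" (as a Monte Carlo
# generator such as MadGraph could export) with a schematic NLL-style kernel.
# The kernel is a caricature of a resummed event-shape distribution, not the
# paper's actual jet/soft/beam-function convolution.

rng = np.random.default_rng(0)

# Hypothetical event list: each event carries a hard weight and a hard scale Q.
events = [{"weight": 1.0, "Q": rng.uniform(80.0, 120.0)} for _ in range(1000)]

def nll_kernel(tau, Q, a=0.12):
    """d(sigma)/d(tau) from a toy Sudakov factor exp(-a_eff * log^2 tau)."""
    a_eff = a * np.log(Q) / np.log(100.0)  # crude stand-in for scale dependence
    L = np.log(tau)
    return np.exp(-a_eff * L**2) * (-2.0 * a_eff * L / tau)

taus = np.linspace(1e-3, 0.3, 200)
dist = np.zeros_like(taus)
for ev in events:
    dist += ev["weight"] * nll_kernel(taus, ev["Q"])
dist /= len(events)  # average over the unweighted event sample
```

The point mirrors the abstract: the Monte Carlo side only supplies event kinematics and weights, so even a modest sample of unweighted events yields a smooth resummed distribution.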
41 CFR 102-194.25 - What is an automated Standard or Optional format?
Code of Federal Regulations, 2010 CFR
2010-07-01
Process development for automated solar cell and module production. Task 4: Automated array assembly
NASA Technical Reports Server (NTRS)
Hagerty, J. J.
1981-01-01
Progress in the development of automated solar cell and module production is reported. The Unimate robot is programmed for the final 35-cell pattern to be used in the fabrication of the deliverable modules. The mechanical construction of the automated lamination station and final assembly station is complete, and the first operational testing is underway. The final controlling program is written and optimized. The glass reinforced concrete (GRC) panels to be used for testing and deliverables are in production. Test routines are grouped together and defined to produce the final control program.
An ultraviolet-visible spectrophotometer automation system. Part 3: Program documentation
NASA Astrophysics Data System (ADS)
Roth, G. S.; Teuschler, J. M.; Budde, W. L.
1982-07-01
The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes on-line spectrophotometric quality assurance determinations, report generation, plot generation, and data reduction for chlorophyll or color analysis. The system can also process manually entered data for chlorophyll or color analysis. For each program of the UVVIS system, this document contains a program description, flowchart, variable dictionary, code listing, and symbol cross-reference table. Also included are descriptions of file structures and of routines common to all automated analyses. The programs are written in Data General extended BASIC, Revision 4.3, under the RDOS operating system, Revision 6.2. The BASIC code has been enhanced for real-time data acquisition, which is accomplished by CALLs to assembly language subroutines. Two other related publications are 'An Ultraviolet-Visible Spectrophotometer Automation System - Part I: Functional Specifications' and 'An Ultraviolet-Visible Spectrophotometer Automation System - Part II: User's Guide.'
NASA Systems Autonomy Demonstration Project - Development of Space Station automation technology
NASA Technical Reports Server (NTRS)
Bull, John S.; Brown, Richard; Friedland, Peter; Wong, Carla M.; Bates, William
1987-01-01
A 1984 Congressional expansion of the 1958 National Aeronautics and Space Act mandated that NASA conduct programs, as part of the Space Station program, which will yield the U.S. material benefits, particularly in the areas of advanced automation and robotics systems. Demonstration programs are scheduled for automated systems such as the thermal control, expert system coordination of Station subsystems, and automation of multiple subsystems. The programs focus the R&D efforts and provide a gateway for transfer of technology to industry. The NASA Office of Aeronautics and Space Technology is responsible for directing, funding and evaluating the Systems Autonomy Demonstration Project, which will include simulated interactions between novice personnel and astronauts and several automated, expert subsystems to explore the effectiveness of the man-machine interface being developed. Features and progress on the TEXSYS prototype thermal control system expert system are outlined.
Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao
2017-08-01
Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician, was compared. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by self-learning program correlated to the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.
The NASA automation and robotics technology program
NASA Technical Reports Server (NTRS)
Holcomb, Lee B.; Montemerlo, Melvin D.
1986-01-01
The development and objectives of the NASA automation and robotics technology program are reviewed. The objectives of the program are to utilize AI and robotics to increase the probability of mission success; decrease the cost of ground control; and increase the capability and flexibility of space operations. There is a need for real-time computational capability; an effective man-machine interface; and techniques to validate automated systems. Current programs in the areas of sensing and perception, task planning and reasoning, control execution, operator interface, and system architecture and integration are described. Programs aimed at demonstrating the capabilities of telerobotics and system autonomy are discussed.
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.
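The least-squares step is standard; a minimal sketch follows, with invented pixel coordinates and illustrative function names (the actual system's conventions are not given in the abstract).

```python
import numpy as np

# Minimal sketch of the least-squares step: given pixel coordinates flagged as
# belonging to a fiducial line, fit x = a + b*y (stable for near-vertical
# lines) and report the line's position and orientation.

def fit_fiducial_line(xs, ys):
    A = np.column_stack([np.ones_like(ys), ys])
    (a, b), *_ = np.linalg.lstsq(A, xs, rcond=None)
    angle = np.degrees(np.arctan(b))   # tilt relative to the vertical axis
    return a, angle

xs = np.array([100.2, 100.9, 101.4, 102.1, 102.8])   # invented pixel data
ys = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
intercept, angle = fit_fiducial_line(xs, ys)
print(f"line crosses y=0 at x={intercept:.2f}, tilt {angle:.2f} deg")
```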
Stokes, A F; Banich, M T; Elledge, V C
1991-08-01
The FAA has expressed concern that flight safety could be compromised by undetected cognitive impairment in pilots due to conditions such as substance abuse, mental illness, and neuropsychological problems. Interest has been shown in the possibility of adding a brief "mini-mental exam," or a simple automated test-battery to the standard flight medical to screen for such conditions. The research reported here involved the empirical evaluation of two "mini-mental exams," two paper-and-pencil test batteries, and a prototype version of an automated screening battery. Sensitivity, specificity, and positive predictive value were calculated for each sub-task in a discriminant study of 54 pilots and 62 individuals from a heterogeneous clinical population. Results suggest that the "mini-mental exams" are poor candidates for a screening test. The automated battery showed the best discrimination performance, in part because of the incorporation of dual-task tests of divided attention performance. These tests appear to be particularly sensitive to otherwise difficult-to-detect cognitive impairments of a mild or subtle nature. The use of an automated battery of tests as a screening instrument does appear to be feasible in principle, but the practical success of a screening program is heavily dependent upon the actual prevalence of cognitive impairment in the medical applicant population.
Automated Monitoring of Pipeline Rights-of-Way
NASA Technical Reports Server (NTRS)
Frost, Chard Ritchie
2010-01-01
NASA Ames Research Center and the Pipeline Research Council International, Inc. have partnered in the formation of a research program to identify and develop the key technologies required to enable automated detection of threats to gas and oil transmission and distribution pipelines. This presentation describes the Right-of-way Automated Monitoring (RAM) program and highlights research successes to date, continuing challenges to implementing the RAM objectives, and the program's ongoing work and plans.
An integrated tool for loop calculations: AITALC
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Riemann, Tord
2006-01-01
AITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them, and illustrate its use with three examples.
Program summary
Title of the program: AITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on Solaris
Programming languages used: GNU Make, DIANA, FORM, FORTRAN 77
Additional programs/libraries used: DIANA 2.35 (QGRAF 2.0), FORM 3.1, LOOPTOOLS 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40,926
No. of bytes in distributed program, including test data, etc.: 371,424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for e+e- annihilation in the one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors.
Restrictions on the complexity of the problem: the limit of application is, for the moment, 2-to-2 particle reactions in the electroweak Standard Model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the Fortran compiler.
Correction of spin diffusion during iterative automated NOE assignment
NASA Astrophysics Data System (ADS)
Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael
2004-04-01
Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
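A minimal sketch of the underlying numerics may help; this is not ARIA's code, and the 3-spin rate matrix below is invented. It integrates dA/dt = -RA by matrix squaring and forms the correction factor as the ratio of the full-matrix NOE volume to the isolated two-spin volume.

```python
import numpy as np

# Sketch only: propagate the coupled relaxation equations dA/dt = -R A by
# squaring a short-time step, then compare the full-matrix NOE volume for a
# spin pair with the volume the same pair would have in isolation.

def noe_volumes(R, tau_m, n=20):
    """Approximate A(tau_m) = exp(-R * tau_m) by repeated squaring."""
    P = np.eye(len(R)) - R * (tau_m / 2**n)
    for _ in range(n):
        P = P @ P
    return P

R = np.array([[ 2.0,  -0.6,  -0.05],
              [-0.6,   2.0,  -0.7 ],
              [-0.05, -0.7,   2.0 ]])   # s^-1; off-diagonals: cross-relaxation
tau_m = 0.3                              # mixing time (s)

A_full = noe_volumes(R, tau_m)
R_pair = R[np.ix_([0, 2], [0, 2])]       # spins 0 and 2 treated in isolation
A_pair = noe_volumes(R_pair, tau_m)

corr = A_full[0, 2] / A_pair[0, 1]       # > 1 when transfer via spin 1 matters
print(f"spin-diffusion correction factor for pair (0, 2): {corr:.2f}")
```

With these invented rates, the weakly coupled pair (0, 2) gains most of its observed NOE indirectly through spin 1, so its apparent distance would be underestimated without the correction.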
Assessment of Automated Analyses of Cell Migration on Flat and Nanostructured Surfaces
Grădinaru, Cristian; Łopacińska, Joanna M.; Huth, Johannes; Kestler, Hans A.; Flyvbjerg, Henrik; Mølhave, Kristian
2012-01-01
Motility studies of cells often rely on computer software that analyzes time-lapse recorded movies and establishes cell trajectories fully automatically. This raises the question of reproducibility of results, since different programs can yield significantly different results from such automated analysis. The fact that the segmentation routines of such programs are often challenged by nanostructured surfaces makes the question more pertinent. Here we illustrate how it is possible to track cells on bright field microscopy images with image analysis routines implemented in an open-source cell tracking program, PACT (Program for Automated Cell Tracking). We compare the automated motility analysis of three cell tracking programs, PACT, Autozell, and TLA, using the same movies as input for all three programs. We find that different programs track overlapping, but different, subsets of cells due to different segmentation methods. Unfortunately, population averages based on such different cell populations differ significantly in some cases. Thus, results obtained with one software package are not necessarily reproducible by other software. PMID:24688640
Fuzzy Control/Space Station automation
NASA Technical Reports Server (NTRS)
Gersh, Mark
1990-01-01
Viewgraphs on fuzzy control/space station automation are presented. Topics covered include: Space Station Freedom (SSF); SSF evolution; factors pointing to automation & robotics (A&R); astronaut office inputs concerning A&R; flight system automation and ground operations applications; transition definition program; and advanced automation software tools.
Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2006-09-01
This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
Computer Programs For Automated Welding System
NASA Technical Reports Server (NTRS)
Agapakis, John E.
1993-01-01
Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.
Automated procedures for sizing aerospace vehicle structures /SAVES/
NASA Technical Reports Server (NTRS)
Giles, G. L.; Blackburn, C. L.; Dixon, S. C.
1972-01-01
Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development, called SAVES, is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.
ERIC Educational Resources Information Center
FLINT, LANNING L.; HATCH, RICHARD S.
Student performance variables under automated, conventional, and a combination of automated and conventional conditions of instruction were investigated. Recommendations for the integration of programmed material into the classroom were sought. Three groups of junior college students were used in the experiment. The groups were chosen at random.…
ERIC Educational Resources Information Center
TONDOW, MURRAY
Papers on the present and future use of technology in education are presented. Harry F. Silberman, in "Evaluative Criteria for Automated Teaching Programs," presents comments, criteria, and tables on automated teaching programs. He describes experiments on the effectiveness of branching and fixed sequence programs, on a followup…
NASA Technical Reports Server (NTRS)
Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.
1973-01-01
This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.
ERA 1103 UNIVAC 2 Calculating Machine
1955-09-21
The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer—the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and the first commercially successful computer.
Automated liver elasticity calculation for 3D MRE
NASA Astrophysics Data System (ADS)
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-03-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue- and edge-contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was -0.8% +/- 9.45% and was better than discrepancy with the same reader for 2D MRE (-3.2% +/- 10.43%), and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and to be easily performed on the large 3D MRE datasets.
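The wave-amplitude gating step can be illustrated with a small sketch; this is not the published ALEC algorithm, and all array values and the threshold below are invented.

```python
import numpy as np

# Illustrative sketch: exclude voxels whose wave amplitude falls below a
# threshold, then report mean stiffness over the remaining liver voxels.

def masked_stiffness(elastogram, wave_amplitude, liver_mask, min_amp=1.0):
    ok = liver_mask & (wave_amplitude >= min_amp)   # keep well-insonified voxels
    if not ok.any():
        raise ValueError("no voxels pass the wave-amplitude check")
    return float(elastogram[ok].mean())

# Hypothetical 2D slice data (invented values; stiffness in kPa).
elastogram = np.array([[2.1, 2.4], [6.0, 2.2]])
amplitude  = np.array([[3.0, 0.2], [2.5, 4.0]])
liver      = np.array([[True, True], [True, False]])
print(f"mean stiffness: {masked_stiffness(elastogram, amplitude, liver):.2f} kPa")
```

In this toy example the low-amplitude voxel is dropped before averaging, which is the role the confidence map plays in guiding ROI selection.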
A Suggested Automated Branch Program for Foreign Languages.
ERIC Educational Resources Information Center
Barrutia, Richard
1964-01-01
Completely automated and operated by student feedback, this program teaches and tests foreign language recognition and retention, gives repeated audiolingual practice on model structures, and allows the student to tailor the program to his individual needs. The program is recorded on four tape tracks (track 1 for the most correct answer, etc.).…
Using Software Tools to Automate the Assessment of Student Programs.
ERIC Educational Resources Information Center
Jackson, David
1991-01-01
Argues that advent of computer-aided instruction (CAI) systems for teaching introductory computer programing makes it imperative that software be developed to automate assessment and grading of student programs. Examples of typical student programing problems are given, and application of the Unix tools Lex and Yacc to the automatic assessment of…
Ho, C L; Lui, C T; Tsui, K L; Kam, C W
2014-10-01
To evaluate the availability and accessibility of community automated external defibrillators in a territory in Hong Kong. Cross-sectional study. Two public hospitals in New Territories West Cluster in Hong Kong. Information about the locations of community automated external defibrillators was obtained from automated external defibrillator suppliers and through community search. Data on locations of out-of-hospital cardiac arrests from August 2010 to September 2013 were obtained from the local cardiac arrest registry of the emergency departments of two hospitals. Sites of both automated external defibrillators and out-of-hospital cardiac arrests were geographically coded and mapped. The number of out-of-hospital cardiac arrests within 100 m of automated external defibrillators per year and the proportion of out-of-hospital cardiac arrests with accessible automated external defibrillators (100 m) were calculated. The number of community automated external defibrillators per 10,000 population and public access defibrillation rate were also calculated and compared with those in other countries. There were a total of 207 community automated external defibrillators in the territory. The number of automated external defibrillators per 10,000 population was 1.942. All facilities with automated external defibrillators in this territory had more than 0.2 out-of-hospital cardiac arrests per automated external defibrillator per year within 100 m. Among all out-of-hospital cardiac arrests, 25.2% could have an automated external defibrillator reachable within 100 m. The public access defibrillation rate was 0.168%. The number and accessibility of community automated external defibrillators in this territory are comparable to those in other developed countries. The placement site of community automated external defibrillators is cost-effective. However, the public access defibrillation rate is low.
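As a back-of-the-envelope check of the reported figures (the catchment population is inferred from the published rate, not stated in the abstract):

```python
# Reproducing the abstract's arithmetic; the population size is implied,
# not given directly.

n_aeds = 207
rate_per_10k = 1.942
population = n_aeds / rate_per_10k * 10_000   # ~1.07 million people implied
coverage = 0.252                              # fraction of OHCAs within 100 m

print(f"implied catchment population: {population:,.0f}")
print(f"roughly {coverage:.0%} of arrests had an AED within reach")
```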
Automation of Space Processing Applications Shuttle payloads
NASA Technical Reports Server (NTRS)
Crosmer, W. E.; Neau, O. T.; Poe, J.
1975-01-01
The Space Processing Applications Program is examining the effect of weightlessness on key industrial materials processes, such as crystal growth, fine-grain casting of metals, and production of unique and ultra-pure glasses. For reasons of safety, and to obtain optimum performance, some of these processes lend themselves to automation. Automation can increase the number of potential Space Shuttle flight opportunities and the overall productivity of the program. Five automated facility design concepts and overall payload combinations incorporating these facilities are presented.
Application of advanced technology to space automation
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Polhemus, J. T.; Lowrie, J. W.; Hughes, C. A.; Stephens, J. R.; Chang, C. Y.
1979-01-01
Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for immediate development of specific automation technology as defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.
Timp, Sheila; Karssemeijer, Nico
2004-05-01
Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee resulting contours to be closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant.
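The abstract does not spell out its overlap criterion; a common choice for comparing an automated mask against a manual one is the Jaccard (intersection-over-union) index, sketched below with invented square masks.

```python
import numpy as np

# Sketch of an overlap criterion (Jaccard index) between an automated
# segmentation mask and a manual reference mask; masks are invented.

def overlap(auto_mask, manual_mask):
    inter = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    return inter / union if union else 0.0

auto   = np.zeros((64, 64), bool); auto[20:40, 20:40] = True
manual = np.zeros((64, 64), bool); manual[22:42, 22:42] = True
print(f"overlap = {overlap(auto, manual):.2f}")   # ~0.68 for these masks
```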
Rossetti, Paolo; Vehí, Josep; Revert, Ana; Calm, Remei; Bondia, Jorge
2012-01-01
Since the early 2000s, there has been an exponentially increasing development of new diabetes-applied technology, such as continuous glucose monitoring, bolus calculators, and “smart” pumps, with the expectation of partially overcoming clinical inertia and low patient compliance. However, its long-term efficacy in glucose control has not been unequivocally proven. In this issue of Journal of Diabetes Science and Technology, Sussman and colleagues evaluated a tool for the calculation of the prandial insulin dose. A total of 205 insulin-treated patients were asked to compute a bolus dose in two simulated conditions either manually or with the bolus calculator built into the FreeStyle InsuLinx meter, revealing the high frequency of wrong calculations when performed manually. Although the clinical impact of this study is limited, it highlights the potential implications of low diabetes-related numeracy in poor glycemic control. Educational programs aiming to increase patients’ empowerment and caregivers’ knowledge are needed in order to get full benefit of the technology. PMID:22538145
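The arithmetic such calculators automate is simple but error-prone when done by hand; a sketch of the generic textbook formula follows (this is not the FreeStyle InsuLinx algorithm, and all parameter values are illustrative).

```python
# Generic bolus arithmetic: carbohydrate dose plus glucose correction,
# minus insulin still active from earlier boluses ("insulin on board").

def bolus(carbs_g, icr, bg, target_bg, isf, iob=0.0):
    """carbs_g: meal carbohydrate (g); icr: insulin-to-carb ratio (g/U);
    bg/target_bg: glucose (mg/dL); isf: correction factor (mg/dL per U);
    iob: insulin on board (U)."""
    meal = carbs_g / icr
    correction = max(bg - target_bg, 0) / isf
    return max(meal + correction - iob, 0.0)

print(f"suggested dose: {bolus(60, icr=10, bg=180, target_bg=110, isf=50):.1f} U")
# 60 g / (10 g/U) + (180 - 110)/50 = 6.0 + 1.4 = 7.4 U
```

The division-and-addition chain is exactly where manual calculations go wrong for patients with low numeracy, which is the study's central concern.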
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madrid, Gregory J.; Whitener, Dustin Heath; Folz, Wesley
2017-05-27
The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in Volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, John; Gallagher, Linda; Gonzales, Alejandro
The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in Volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.
Turbo FRMAC 2016 Version 7.1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, John; Gallagher, Linda K.; Madrid, Gregory J.
2016-08-01
The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in Volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madrid, Gregory J.; Whitener, Dustin Heath; Folz, Wesley
2017-02-27
The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in Volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.
Object-oriented software for evaluating measurement uncertainty
NASA Astrophysics Data System (ADS)
Hall, B. D.
2013-05-01
An earlier publication (Hall 2006 Metrologia 43 L56-61) introduced the notion of an uncertain number that can be used in data processing to represent quantity estimates with associated uncertainty. The approach can be automated, allowing data processing algorithms to be decomposed into convenient steps, so that complicated measurement procedures can be handled. This paper illustrates the uncertain-number approach using several simple measurement scenarios and two different software tools. One is an extension library for Microsoft Excel®. The other is a special-purpose calculator using the Python programming language.
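The uncertain-number idea can be sketched with a first-order (GUM-style) propagation; the helper below is an illustration, not the API of either tool mentioned in the abstract, and it assumes independent inputs.

```python
import math

# Sketch: propagate standard uncertainty through y = f(x1, ..., xn) to first
# order, using numerical sensitivity coefficients. Real uncertain-number
# implementations also track correlations between quantities.

def propagate(f, estimates, uncertainties, h=1e-6):
    y = f(*estimates)
    var = 0.0
    for i, (x, u) in enumerate(zip(estimates, uncertainties)):
        shifted = list(estimates)
        shifted[i] = x + h
        dfdx = (f(*shifted) - y) / h       # numerical sensitivity coefficient
        var += (dfdx * u) ** 2
    return y, math.sqrt(var)

# Example: power P = V^2 / R with V = 5.00(2) V and R = 100.0(5) ohm.
P, uP = propagate(lambda V, R: V**2 / R, [5.0, 100.0], [0.02, 0.5])
print(f"P = {P:.4f} W, u(P) = {uP:.4f} W")   # 0.2500 W, ~0.0024 W
```

Because each step carries its uncertainty along automatically, a complicated measurement procedure can be decomposed into convenient stages, as the abstract describes.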
Automated Writing Evaluation Program's Effect on Student Writing Achievement
ERIC Educational Resources Information Center
Holman, Lester Donnie
2011-01-01
In an ex post facto causal-comparative research design, this study investigated the effectiveness of Automated Writing Evaluation (AWE) programs on raising the student writing achievement. Tennessee Comprehensive Assessment Program (TCAP) writing achievement scores from the 2010 administration were utilized for this study. The independent variable…
DOT National Transportation Integrated Search
1997-03-06
This report responds to a request for information on the current goals and future direction of the Department of Transportation's (DOT) Automated Highway System program. This program seeks to increase the capacity of the nation's highways and to ...
CALCULATIONAL TOOL FOR SKIN CONTAMINATION DOSE ESTIMATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
HILL, R.L.
2005-03-31
A spreadsheet calculational tool was developed to automate the calculations performed for estimating dose from skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.
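The kind of calculation such a spreadsheet automates can be sketched as follows; the dose-rate factor is a placeholder, since a real tool would look it up per nuclide (for example, from VARSKIN-type tables).

```python
# Sketch of a shallow-dose estimate from skin contamination. All numbers
# are illustrative placeholders, not values from the reported tool.

activity_uci = 0.05          # measured activity on the skin, microcuries
dose_rate_factor = 8.0e3     # hypothetical mrem/h per uCi/cm^2 for a nuclide
area_cm2 = 1.0               # contaminated area
residence_h = 2.0            # time before decontamination

shallow_dose = activity_uci / area_cm2 * dose_rate_factor * residence_h
print(f"estimated shallow dose: {shallow_dose:.0f} mrem")   # 800 mrem
```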
Fink, Christine; Uhlmann, Lorenz; Klose, Christina; Haenssle, Holger A
2018-05-17
Reliable and accurate assessment of severity in psoriasis is very important in order to meet indication criteria for initiation of systemic treatment or to evaluate treatment efficacy. The most acknowledged tool for measuring the extent of psoriatic skin changes is the Psoriasis Area and Severity Index (PASI). However, the calculation of PASI can be tedious and subjective, and high intraobserver and interobserver variability is an important concern. Therefore, there is a great need for a standardised and objective method that guarantees a reproducible PASI calculation. Within this study we will investigate the precision and reproducibility of automated, computer-guided PASI measurements in comparison to trained physicians to address these limitations. The study comprises non-interventional analyses of PASI calculations, by either physicians in a prospective versus retrospective setting or an automated computer-guided algorithm, in 120 patients with plaque psoriasis. All retrospective PASI calculations by physicians or by the computer algorithm are based on total body digital images. The primary objective of this study is comparison of automated computer-guided PASI measurements by means of digital image analysis versus conventional, prospective or retrospective physicians' PASI assessments. Secondary endpoints include (1) the assessment of physicians' interobserver variance in PASI calculations, (2) the assessment of physicians' intraobserver variance in PASI assessments of the same patients' images after a time interval of at least 4 weeks, (3) the assessment of the deviation between physicians' prospective versus retrospective PASI calculations, and (4) the reproducibility of automated computer-guided PASI measurements by assessment of two sets of total body digital images of the same patients taken at one time point. Ethical approval was provided by the Ethics Committee of the Medical Faculty of the University of Heidelberg (ethics approval number S-379/2016). Trial registration number: DRKS00011818.
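The arithmetic being standardised is the published PASI formula: four body regions, each scored for erythema (E), induration (I), and desquamation (D) on 0-4 and an area score A on 0-6, combined with fixed regional weights. A sketch with invented example scores:

```python
# Standard published PASI formula; the example scores are invented.

REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

def pasi(scores):
    """scores: region -> (E, I, D, A); E, I, D in 0-4, area score A in 0-6."""
    return sum(w * sum(scores[r][:3]) * scores[r][3]
               for r, w in REGION_WEIGHTS.items())

example = {"head": (2, 1, 1, 2), "upper_limbs": (2, 2, 1, 3),
           "trunk": (3, 2, 2, 2), "lower_limbs": (2, 2, 2, 4)}
print(f"PASI = {pasi(example):.1f}")
# 0.1*4*2 + 0.2*5*3 + 0.3*7*2 + 0.4*6*4 = 17.6
```

The many region-wise products and the subjective area estimates are exactly where manual scoring accumulates interobserver variance, which motivates the automated approach.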
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular the need for algorithms and software utilities that can automatically identify data-flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
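The data-flow idea can be made concrete with a toy sketch (not the Lewis utilities themselves): treat each calculation as a node that reads some variables and writes one, then group nodes into levels so that every node depends only on earlier levels and a level's nodes can run in parallel.

```python
from collections import defaultdict

# Toy data-flow partitioning: an invented system of four calculations.
calcs = {            # name: (inputs, output)
    "c1": (("a", "b"), "x"),
    "c2": (("a",), "y"),
    "c3": (("x", "y"), "z"),
    "c4": (("y",), "w"),
}

def schedule(calcs):
    producer = {out: name for name, (_, out) in calcs.items()}
    level = {}
    def depth(name):
        if name not in level:
            ins, _ = calcs[name]
            deps = [depth(producer[v]) for v in ins if v in producer]
            level[name] = 1 + max(deps, default=0)
        return level[name]
    for name in calcs:
        depth(name)
    groups = defaultdict(list)
    for name, lv in level.items():
        groups[lv].append(name)
    return [groups[lv] for lv in sorted(groups)]

print(schedule(calcs))   # [['c1', 'c2'], ['c3', 'c4']]
```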
A Computational Framework for Automation of Point Defect Calculations
NASA Astrophysics Data System (ADS)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan; National Renewable Energy Laboratory, Golden, Colorado 80401 Collaboration
A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band-filling correction for shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials-by-design community to assess the impact of point defects on materials performance.
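One of the named corrections has a compact closed form: the leading-order (Makov-Payne-type) image-charge term for a charged defect in a cubic supercell. The sketch below uses that textbook expression with illustrative values; it is not code from the framework itself.

```python
# Leading-order image-charge correction for a charged defect in a cubic cell:
# E_corr = alpha_M * q^2 / (2 * eps * L), in eV with L in Angstrom.

MADELUNG_SC = 2.8373    # Madelung constant of a simple-cubic point-charge array
EV_ANGSTROM = 14.3996   # e^2 / (4*pi*eps0) in eV*Angstrom

def image_charge_correction(q, L, eps):
    """q: defect charge (e); L: cubic cell edge (Angstrom);
    eps: static dielectric constant of the host."""
    return MADELUNG_SC * q**2 * EV_ANGSTROM / (2.0 * eps * L)

# Example: a q = +2 defect in a 10 Angstrom cell with eps = 10.
print(f"E_corr = {image_charge_correction(2, 10.0, 10.0):.3f} eV")   # ~0.82 eV
```

The 1/L scaling is why such corrections matter: halving the supercell edge doubles the spurious electrostatic energy that must be removed.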
The Automation-by-Expertise-by-Training Interaction.
Strauch, Barry
2017-03-01
I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval, demonstrated identical automation-related operator errors suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human-computer interaction, expertise, and training, and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire the necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.
Study of Intelligent Secure Chemical Inventory Management System
NASA Astrophysics Data System (ADS)
Shukran, Mohd Afizi Mohd; Naim Abdullah, Muhammad; Nazri Ismail, Mohd; Maskat, Kamaruzaman; Isa, Mohd Rizal Mohd; Shahfee Ishak, Muhammad; Adib Khairuddin, Muhamad
2017-08-01
Chemical inventory management has been undergoing a revolution, from traditional manual inventory systems to automated inventory management systems. In this paper, the classic and modern approaches to chemical inventory management are reviewed, and both types of inventory management are described. A comparative analysis of the traditional and automated methods shows that each has distinctive characteristics. The automated method offers higher calculation accuracy, because the calculations are handled by software, eliminating possible errors and saving time. The automated inventory system also allows users and administrators to track the availability, location, and consumption of chemicals. This review can provide useful support for related research on chemical inventory management.
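A minimal sketch of the automated bookkeeping described above follows; the class and field names are illustrative, not from any system in the paper. The point is that the software, not the user, updates quantities, so availability and consumption queries stay consistent.

```python
# Toy automated inventory: quantities are adjusted only through methods,
# so availability, location, and consumption stay consistent.

class ChemicalInventory:
    def __init__(self):
        self.stock = {}   # chemical -> {"location": str, "qty_ml": float}

    def add(self, name, location, qty_ml):
        entry = self.stock.setdefault(name, {"location": location, "qty_ml": 0.0})
        entry["qty_ml"] += qty_ml

    def consume(self, name, qty_ml):
        entry = self.stock[name]
        if qty_ml > entry["qty_ml"]:
            raise ValueError(f"only {entry['qty_ml']} mL of {name} available")
        entry["qty_ml"] -= qty_ml   # the software, not a human, does the math

inv = ChemicalInventory()
inv.add("acetone", "cabinet B3", 500.0)
inv.consume("acetone", 120.0)
print(inv.stock["acetone"])   # {'location': 'cabinet B3', 'qty_ml': 380.0}
```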
A Python tool to set up relative free energy calculations in GROMACS
Klimovich, Pavel V.; Mobley, David L.
2015-01-01
Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper [14], recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge [16]. Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations. PMID:26487189
ERIC Educational Resources Information Center
Howrey, Mary M.
This study was funded by the Library Services and Construction Act (LSCA) to enable the Illinois School Library Media Association (ISLMA) to plan the automation of the state's school libraries. The research was intended to identify current national programs of interest to ISLMA, identify current automation programs within Illinois library systems,…
CFD Extraction Tool for TecPlot From DPLR Solutions
NASA Technical Reports Server (NTRS)
Norman, David
2013-01-01
This invention is a TecPlot macro, a computer program written in the TecPlot programming language, that processes data from DPLR solutions in TecPlot format. DPLR (Data-Parallel Line Relaxation) is a NASA computational fluid dynamics (CFD) code, and TecPlot is a commercial CFD post-processing tool. The TecPlot data is in SI units (the same as DPLR output); the invention converts the SI units into British units. The macro modifies the TecPlot data with unit conversions and adds some extra calculations. After unit conversion, the macro cuts a slice and adds vectors on the current plot for output. The macro can also process surface solutions. Existing solutions use manual conversion and superposition. The conversion is complicated because it must be applied to a range of inter-related scalars and vectors that describe a 2D or 3D flow field. The macro processes the CFD solution to create superpositions and comparisons of scalars and vectors. The existing manual solution is cumbersome, open to errors, slow, and cannot be inserted into an automated process. This invention is quick and easy to use, and can be inserted into an automated data-processing algorithm.
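The unit-conversion step can be sketched as a table applied consistently across the inter-related flow-field scalars; the factors below are standard SI-to-British conversions, but the variable names are illustrative, not DPLR's.

```python
# SI -> British conversion table for common CFD scalars (standard factors;
# variable names are illustrative only).

CONVERSIONS = {
    "pressure":    ("Pa",    "psf",      0.020885),    # pascals -> lb/ft^2
    "temperature": ("K",     "degR",     1.8),         # kelvin  -> Rankine
    "density":     ("kg/m3", "slug/ft3", 0.0019403),
    "velocity":    ("m/s",   "ft/s",     3.28084),
}

def to_british(name, value):
    si, brit, factor = CONVERSIONS[name]
    return value * factor, brit

v, unit = to_british("velocity", 3000.0)
print(f"3000 m/s = {v:.0f} {unit}")   # 9843 ft/s
```

Applying one table everywhere is the design point: converting scalars one at a time by hand is exactly the error-prone manual step the macro eliminates.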
Automated Liver Elasticity Calculation for 3D MRE
Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.
2017-01-01
Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue- and edge-contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was −0.8% ± 9.45% and was better than discrepancy with the same reader for 2D MRE (−3.2% ± 10.43%), and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and to be easily performed on the large 3D MRE datasets. PMID:29033488
How a Fully Automated eHealth Program Simulates Three Therapeutic Processes: A Case Study.
Holter, Marianne T S; Johansen, Ayna; Brendryen, Håvar
2016-06-28
eHealth programs may be better understood by breaking down the components of one particular program and discussing its potential for interactivity and tailoring in regard to concepts from face-to-face counseling. In the search for the efficacious elements within eHealth programs, it is important to understand how a program using lapse management may simultaneously support working alliance, internalization of motivation, and behavior maintenance. These processes have been applied to fully automated eHealth programs individually. However, given their significance in face-to-face counseling, it may be important to simulate the processes simultaneously in interactive, tailored programs. We propose a theoretical model for how fully automated behavior change eHealth programs may be more effective by simulating a therapist's support of a working alliance, internalization of motivation, and managing lapses. We show how the model is derived from theory and its application to Endre, a fully automated smoking cessation program that engages the user in several "counseling sessions" about quitting. A descriptive case study based on tools from the intervention mapping protocol shows how each therapeutic process is simulated. The program supports the user's working alliance through alliance factors, the nonembodied relational agent Endre and computerized motivational interviewing. Computerized motivational interviewing also supports internalized motivation to quit, whereas a lapse management component responds to lapses. The description operationalizes working alliance, internalization of motivation, and managing lapses, in terms of eHealth support of smoking cessation. A program may simulate working alliance, internalization of motivation, and lapse management through interactivity and individual tailoring, potentially making fully automated eHealth behavior change programs more effective.
How a Fully Automated eHealth Program Simulates Three Therapeutic Processes: A Case Study
Johansen, Ayna; Brendryen, Håvar
2016-01-01
Background eHealth programs may be better understood by breaking down the components of one particular program and discussing its potential for interactivity and tailoring in regard to concepts from face-to-face counseling. In the search for the efficacious elements within eHealth programs, it is important to understand how a program using lapse management may simultaneously support working alliance, internalization of motivation, and behavior maintenance. These processes have been applied to fully automated eHealth programs individually. However, given their significance in face-to-face counseling, it may be important to simulate the processes simultaneously in interactive, tailored programs. Objective We propose a theoretical model for how fully automated behavior change eHealth programs may be more effective by simulating a therapist’s support of a working alliance, internalization of motivation, and managing lapses. Methods We show how the model is derived from theory and its application to Endre, a fully automated smoking cessation program that engages the user in several “counseling sessions” about quitting. A descriptive case study based on tools from the intervention mapping protocol shows how each therapeutic process is simulated. Results The program supports the user’s working alliance through alliance factors, the nonembodied relational agent Endre and computerized motivational interviewing. Computerized motivational interviewing also supports internalized motivation to quit, whereas a lapse management component responds to lapses. The description operationalizes working alliance, internalization of motivation, and managing lapses, in terms of eHealth support of smoking cessation. Conclusions A program may simulate working alliance, internalization of motivation, and lapse management through interactivity and individual tailoring, potentially making fully automated eHealth behavior change programs more effective. PMID:27354373
Automated Subsystem Control for Life Support System (ASCLSS)
NASA Technical Reports Server (NTRS)
Block, Roger F.
1987-01-01
The Automated Subsystem Control for Life Support Systems (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of space station subsystems. The automation system features a hierarchical and distributed real-time control architecture which places maximum control authority at the lowest, or process control, level, enhancing system autonomy. The ASCLSS demonstration system pioneered many automation and control concepts currently being considered in the space station data management system (DMS). Heavy emphasis is placed on controls hardware and software commonality implemented in accepted standards. The approach successfully demonstrates real-time process control while leaving accountability with the subsystem or process developer. The ASCLSS system completely automates a space station subsystem (the air revitalization group of the ASCLSS), moving the crew/operator into a role of supervisory control authority. The ASCLSS program developed over 50 lessons learned which will aid future space station developers in the area of automation and controls.
A system-level approach to automation research
NASA Technical Reports Server (NTRS)
Harrison, F. W.; Orlando, N. E.
1984-01-01
Automation is the application of self-regulating mechanical and electronic devices to processes that can be accomplished with the human organs of perception, decision, and actuation. The successful application of automation to a system process should reduce man/system interaction and the perceived complexity of the system, or should increase affordability, productivity, quality control, and safety. The expense, time constraints, and risk factors associated with extravehicular activities have led the Automation Technology Branch (ATB), as part of the NASA Automation Research and Technology Program, to investigate the use of robots and teleoperators as automation aids in the context of space operations. The ATB program addresses three major areas: (1) basic research in autonomous operations, (2) human factors research on man-machine interfaces with remote systems, and (3) the integration and analysis of automated systems. This paper reviews the current ATB research in the area of robotics and teleoperators.
Generic and Automated Data Evaluation in Analytical Measurement.
Adam, Martin; Fleischer, Heidi; Thurow, Kerstin
2017-04-01
In recent years, automation has become more and more important in the field of elemental and structural chemical analysis, to reduce the high degree of manual operation and processing time as well as human errors. A large number of data points are thus generated, which requires fast and automated data evaluation. To handle the preprocessed export data from analytical devices made by various vendors, a standardized solution requiring no programming knowledge is preferred. In modern laboratories, multiple users will use this software on multiple personal computers with different operating systems (e.g., Windows, Macintosh, Linux); mobile devices such as smartphones and tablets have also gained growing importance. The developed software, Project Analytical Data Evaluation (ADE), is implemented as a web application. To transmit the pre-evaluated data from the device software to the Project ADE, the exported XML report files are detected and the included data are imported into the entities database using the Data Upload software. Different calculation types of a sample within one measurement series (e.g., method validation) are identified using information tags inside the sample name. The results are presented in tables and diagrams at different information levels (general, detailed for one analyte or sample).
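The XML import step can be sketched in a few lines; the tag names below are invented for illustration, since real instrument exports differ by vendor.

```python
import xml.etree.ElementTree as ET

# Sketch of an import step: pull analyte results out of a vendor XML report.
# Tag and attribute names are hypothetical.

REPORT = """<report><sample name="S-01">
  <analyte name="Pb" value="12.4" unit="ug/L"/>
  <analyte name="Cd" value="0.31" unit="ug/L"/>
</sample></report>"""

def parse_report(xml_text):
    root = ET.fromstring(xml_text)
    rows = []
    for sample in root.iter("sample"):
        for analyte in sample.iter("analyte"):
            rows.append({"sample": sample.get("name"),
                         "analyte": analyte.get("name"),
                         "value": float(analyte.get("value")),
                         "unit": analyte.get("unit")})
    return rows

for row in parse_report(REPORT):
    print(row)
```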
NASA Technical Reports Server (NTRS)
Hahn, Edward C.; Hansman, R. J., Jr.
1992-01-01
An experiment to study how automation, when used in conjunction with datalink for the delivery of ATC clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming of datalinked clearances and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.
An evaluation of NASA's program in human factors research: Aircrew-vehicle system interaction
NASA Technical Reports Server (NTRS)
1982-01-01
Research in human factors in the aircraft cockpit and a proposed program augmentation were reviewed. The dramatic growth of microprocessor technology makes it entirely feasible to automate increasingly more functions in the aircraft cockpit, and the promise of improved vehicle performance, efficiency, and safety through automation makes highly automated flight inevitable. However, an organized data base and a validated methodology for predicting the effects of automation on human performance, and thus on safety, are lacking; without them, increased automation may introduce new risks. Efforts should be concentrated on developing methods and techniques for analyzing man-machine interactions, including human workload and the prediction of performance.
BioBlocks: Programming Protocols in Biology Made Easier.
Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso
2017-07-21
The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there are probably steric and cooperative influences on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance of limiting the range of reported automated assay results to the portion of the standard curve that delivers optimal sensitivity is stressed. Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated data reduction, as it is with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; and (4) providing ongoing quality-control charts and between-assay variance.
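The curvilinear fitting approach recommended above can be sketched in a few lines of Python; the standard-curve data are invented, and the inversion step is a simple grid search rather than anything from the original program.

    import numpy as np

    # Invented standard-curve data: concentration (dose) vs. bound counts (CPM).
    conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
    counts = np.array([9800.0, 8200.0, 7100.0, 5300.0, 3900.0, 2600.0])

    # Third-order polynomial in the square root of concentration.
    x = np.sqrt(conc)
    coeffs = np.polyfit(x, counts, 3)

    def dose_from_counts(y):
        # Numerically invert the fitted curve to read an unknown off the standard curve.
        grid = np.linspace(0.0, x.max(), 2001)
        curve = np.polyval(coeffs, grid)
        return grid[np.argmin(np.abs(curve - y))] ** 2

    print(dose_from_counts(6000.0))   # interpolated concentration for 6000 CPM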
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farhi, David; Feige, Ilya; Freytsis, Marat
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MadGraph [1], Alpgen [2] or Sherpa [3]. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions, producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e+e− two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Furthermore, the attached code can be used to modify MadGraph to export the relevant LO hard functions and color structures for arbitrary processes.
Automated computer grading of hardwood lumber
P. Klinkhachorn; J.P. Franklin; Charles W. McMillin; R.W. Conners; H.A. Huber
1988-01-01
This paper describes an improved computer program to grade hardwood lumber. The program was created as part of a system to automate various aspects of the hardwood manufacturing industry. It enhances previous efforts by considering both faces of the board and provides easy application of species dependent rules. The program can be readily interfaced with a computer...
Ability-Training-Oriented Automated Assessment in Introductory Programming Course
ERIC Educational Resources Information Center
Wang, Tiantian; Su, Xiaohong; Ma, Peijun; Wang, Yuying; Wang, Kuanquan
2011-01-01
Learning to program is a difficult process for novice programmers. AutoLEP, an automated learning and assessment system, was developed by us to help novice programmers acquire programming skills. AutoLEP is ability-training-oriented. It adopts a novel assessment mechanism, which combines static analysis with dynamic testing to analyze student…
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMO of proteins, we are engaged in research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for solving the SCF, which is more advanced than the file and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals calculation and the modified grid-free method for the pure-XC term evaluation. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can be given by simple linear algebraic procedures in the SCF loop. We can therefore expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of a large molecule, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for the macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated calculation with the QCLO method, we have also developed a Python-based program, QCLObot.
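The "simple linear algebraic procedure" for the Coulomb term can be illustrated with a minimal numpy sketch of a Cholesky-vector contraction; the dimensions and matrices below are random stand-ins, not ProteinDF code.

    import numpy as np

    # Two-electron integrals factored as (pq|rs) ~ sum_K L[K,p,q] * L[K,r,s].
    nbf, ncho = 10, 40                             # invented dimensions
    rng = np.random.default_rng(0)
    L = rng.normal(size=(ncho, nbf, nbf))
    L = 0.5 * (L + L.transpose(0, 2, 1))           # each vector symmetric in (p, q)
    D = rng.normal(size=(nbf, nbf))
    D = D @ D.T                                    # stand-in symmetric density matrix

    # J_pq = sum_K L[K,p,q] * (sum_rs L[K,r,s] * D_rs): two BLAS-friendly contractions.
    w = np.einsum('krs,rs->k', L, D)
    J = np.einsum('kpq,k->pq', L, w)
    print(J.shape)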
Automated Speech Rate Measurement in Dysarthria.
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-06-01
In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition. High correlations between automated and human SR determination were found in the various testing conditions. The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.
ERIC Educational Resources Information Center
Brenner, Eliot; Freundlich, Madelyn
2006-01-01
The Adoption and Safe Families Act of 1997 has made child safety an explicit focus in child welfare. The authors describe an automated critical incident reporting program designed for use in foster care and family-support programs. The program, which is based in Lotus Notes and uses e-mail to route incident reports from direct service staff to…
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
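For reference, the ground truth used in the validation test above is the Lennard-Jones pair potential, which a candidate functional form discovered by the genetic program must reproduce. A minimal Python version with illustrative parameters and an invented configuration:

    import numpy as np

    EPS, SIG = 1.0, 1.0   # illustrative Lennard-Jones parameters

    def lj_energy(coords):
        # Sum the pair potential 4*eps*((sig/r)^12 - (sig/r)^6) over all pairs.
        e = 0.0
        for i in range(len(coords)):
            for j in range(i + 1, len(coords)):
                r = np.linalg.norm(coords[i] - coords[j])
                e += 4.0 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)
        return e

    config = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.0, 1.12, 0.0]])
    print(lj_energy(config))   # energy a candidate force field is scored against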
Socio-Economic Impact Assessment of Automated Transit Information Systems Technology
DOT National Transportation Integrated Search
1984-03-01
This report is the final product of a program to assess the socio-economic impacts of automated transit information system (ATIS) technology deployments on the transit industry's telephone information/marketing function. In the course of this program...
Industrial Automation Mechanic Model Curriculum Project. Final Report.
ERIC Educational Resources Information Center
Toledo Public Schools, OH.
This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…
Costs to Automate Demand Response - Taxonomy and Results from Field Studies and Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piette, Mary A.; Schetrit, Oren; Kiliccote, Sila
During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings with some discussion on residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems,more » and applies a standard naming convention and classification or taxonomy for system elements. Median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large with costs in some cases being an order of magnitude great or less than the median. This wide range is a result of variations in system age, size of load reduction, sophistication, and type of equipment included in cost analysis. The costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total cost to install, operate, and maintain these systems. However, recent research suggests that they could be developed at costs similar to those of existing hot-summer DR automation systems. This report considers installation and configuration costs and does include the costs of owning and operating DR automation systems. Future analysis of the latter costs should include the costs to the building or facility manager costs as well as utility or third party program manager cost.« less
Automated segmentation of foveal avascular zone in fundus fluorescein angiography.
Zheng, Yalin; Gandhi, Jagdeep Singh; Stangos, Alexandros N; Campa, Claudio; Broadbent, Deborah M; Harding, Simon P
2010-07-01
PURPOSE. To describe and evaluate the performance of a computerized automated segmentation technique for use in quantification of the foveal avascular zone (FAZ). METHODS. A computerized technique for automated segmentation of the FAZ using images from fundus fluorescein angiography (FFA) was applied to 26 transit-phase images obtained from patients with various grades of diabetic retinopathy. The area containing the FAZ zone was first extracted from the original image and smoothed by a Gaussian kernel (sigma = 1.5). An initializing contour was manually placed inside the FAZ of the smoothed image and iteratively moved by the segmentation program toward the FAZ boundary. Five tests with different initializing curves were run on each of 26 images to assess reproducibility. The accuracy of the program was also validated by comparing results obtained by the program with the FAZ boundaries manually delineated by medical retina specialists. Interobserver performance was then evaluated by comparing delineations from two of the experts. RESULTS. One-way analysis of variance indicated that the disparities between different tests were not statistically significant, signifying excellent reproducibility for the computer program. There was a statistically significant linear correlation between the results obtained by automation and manual delineations by experts. CONCLUSIONS. This automated segmentation program can produce highly reproducible results that are comparable to those made by clinical experts. It has the potential to assist in the detection and management of foveal ischemia and to be integrated into automated grading systems.
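A hedged sketch of the pipeline described, using scikit-image in place of the authors' unnamed implementation: Gaussian smoothing with sigma = 1.5 followed by an active-contour iteration from a seed placed inside the FAZ. The image, seed location, and radius are placeholders.

    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    image = np.random.rand(256, 256)       # stand-in for a transit-phase FFA frame
    smoothed = gaussian(image, sigma=1.5)  # the Gaussian kernel the abstract specifies

    # Circular initializing contour placed "inside" the FAZ (center/radius invented).
    t = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([128 + 10 * np.sin(t), 128 + 10 * np.cos(t)])

    snake = active_contour(smoothed, init)  # iteratively moves toward the FAZ boundary
    print(snake.shape)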
Design automation techniques for custom LSI arrays
NASA Technical Reports Server (NTRS)
Feller, A.
1975-01-01
The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.
Semi-Automated Identification of Rocks in Images
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin; Castano, Andres; Anderson, Robert
2006-01-01
Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Explorer Rover mission. Included in the program are components for automated finding of rocks, interactive adjustments of outlines of rocks, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128MB of random-access memory.
NASA Technical Reports Server (NTRS)
Tartt, David M.; Hewett, Marle D.; Duke, Eugene L.; Cooper, James A.; Brumbaugh, Randal W.
1989-01-01
The Automated Flight Test Management System (ATMS) is being developed as part of the NASA Aircraft Automation Program. This program focuses on the application of interdisciplinary state-of-the-art technology in artificial intelligence, control theory, and systems methodology to problems of operating and flight testing high-performance aircraft. The development of a Flight Test Engineer's Workstation (FTEWS) is presented, with a detailed description of the system, technical details, and future planned developments. The goal of the FTEWS is to provide flight test engineers and project officers with an automated computer environment for planning, scheduling, and performing flight test programs. The FTEWS system is an outgrowth of the development of ATMS and is an implementation of a component of ATMS on SUN workstations.
NASA Technical Reports Server (NTRS)
Campbell, B. H.
1974-01-01
A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.
Automated Flight Routing Using Stochastic Dynamic Programming
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, enroute convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
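A toy Python sketch of the stochastic-dynamic-programming idea, reduced to a one-dimensional chain of sectors with invented costs and weather probabilities; the paper's model uses real wind, weather, and demand fields, so this shows only the backward-induction structure.

    import numpy as np

    # At each of n sectors choose "direct" (cheap, but blocked by weather with
    # probability p_wx) or "deviate" (fixed extra cost). All numbers invented.
    n = 10
    p_wx = np.array([0.0, 0.1, 0.4, 0.7, 0.2, 0.0, 0.3, 0.5, 0.1, 0.0])
    c_direct, c_deviate, c_blocked = 1.0, 1.8, 5.0

    V = np.zeros(n + 1)                  # V[k]: expected cost-to-go from sector k
    policy = []
    for k in range(n - 1, -1, -1):       # backward induction
        q_direct = (1 - p_wx[k]) * c_direct + p_wx[k] * c_blocked + V[k + 1]
        q_deviate = c_deviate + V[k + 1]
        V[k] = min(q_direct, q_deviate)
        policy.append('direct' if q_direct <= q_deviate else 'deviate')
    policy.reverse()

    print(V[0], policy)                  # expected cost and per-sector decisions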
Padilla, Luz A; Desmond, Renee A; Brooks, C Michael; Waterbor, John W
2018-06-01
A key outcome measure of cancer research training programs is the number of cancer-related peer-reviewed publications after training. Because program graduates do not routinely report their publications, staff must periodically conduct electronic literature searches on each graduate. The purpose of this study is to compare findings of an innovative computer-based automated search program versus repeated manual literature searches to identify post-training peer-reviewed publications. In late 2014, manual searches for publications by former R25 students identified 232 cancer-related articles published by 112 of 543 program graduates. In 2016, a research assistant was instructed in performing Scopus literature searches for comparison with individual PubMed searches on our 543 program graduates. Through 2014, Scopus found 304 cancer publications, 220 of which had been retrieved manually, plus an additional 84 papers. However, Scopus missed 12 publications found manually. Together, both methods found 316 publications. The automated method found 96.2% of the 316 publications, while individual searches found only 73.4%. An automated search method such as using the Scopus database is a key tool for conducting comprehensive literature searches, but it must be supplemented with periodic manual searches to find the initial publications of program graduates. A time-saving feature of Scopus is the periodic automatic alerts of new publications. Although a training period is needed and initial costs can be high, an automated search method is worthwhile due to its high sensitivity and efficiency in the long term.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Accounting and Information Management Div.
A study examined states' development of automated systems for the Job Opportunities and Basic Skills (JOBS) program administered by the states, with the Administration for Children and Families (ACF) responsible for program oversight and direction. Results indicated that ACF had not provided direction and focus in its systems development guidance…
Pavlov, Sergey S; Dmitriev, Andrey Yu; Frontasyeva, Marina V
The present status of development of software packages and equipment designed for automation of NAA at the reactor IBR-2 of FLNP, JINR, Dubna, RF, is described. The NAA database, construction of sample changers and software for automation of spectra measurement and calculation of concentrations are presented. Automation of QC procedures is integrated in the software developed. Details of the design are shown.
Python Scripts for Automation of Current-Voltage Testing of Semiconductor Devices (FY17)
2017-01-01
ARL-TR-7923 ● JAN 2017 ● US Army Research Laboratory. [Only search-result snippets of this report survived extraction:] Time spent on manual device-testing procedures is reduced or eliminated through automation. The technical report includes scripts written in Python, version 2.7. Section 3.1.9 (Exit Program): the script exits the entire program; line 505, sys.exit(), uses the sys package that comes with Python to exit.
A new automated method for the determination of cross-section limits in ephemeral gullies
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Ángel Campo-Bescós, Miguel; Casalí, Javier; Giménez, Rafael
2017-04-01
The assessment of gully erosion relies on the estimation of the soil volume enclosed by cross-section limits. Both 3D and 2D methods require a methodology for determining the cross-section limits, which has traditionally been carried out in two ways: a) by visual inspection of the cross-section by an expert operator; b) by automated identification of thresholds for different geometrical variables, such as elevation, slope, or plan curvature, obtained from the cross-section profile. For these latter methods, however, the thresholds are typically not of general application because they depend on absolute values valid only for the local gully conditions where they were derived. In this communication we evaluate an automated method for cross-section delimitation of ephemeral gullies and compare its performance with the visual assessment provided by five scientists experienced in gully erosion assessment, defining gully width, depth, and area for a total of 60 ephemeral gully cross-sections obtained from field surveys conducted on agricultural plots in Navarra (Spain). The automated method depends only on the calculation of a simple geometrical measurement: the bank trapezoid area for every point of each gully bank. This right-angled trapezoid is defined by the elevation of a given point, the minimum elevation, and the extremes of the cross-section; the gully limit for each bank is the point in the bank with the maximum trapezoid area (see the sketch below). The comparison of estimates among the different expert operators showed large coefficients of variation (up to 70%) in a number of cross-sections, larger for cross-section width and area and smaller for cross-section depth. The automated method produced results comparable to those obtained by the experts and had the highest average correlation with the rest of the methods for the three geometric parameters. The errors of the automated method when compared with the average estimate of the experts were occasionally high (up to 40%), in line with the variability found among experts. The automated method showed no apparent systematic errors, and its errors approximately followed a normal distribution, although they were slightly biased towards overestimation of the depth and area parameters. In conclusion, this study shows that there is no single definition of gully limits, even among gully experts, among whom large variability was found. The bank trapezoid method was found to be an automated, easy-to-use (readily implementable in a basic Excel spreadsheet or in programming scripts), threshold-independent procedure that determines gully limits consistently and similarly to expert-derived estimates. Gully width and area calculations were more prone to errors than gully depth, which was the least sensitive parameter.
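One plausible formalization of the bank trapezoid criterion, with an invented synthetic profile; this is a reading of the abstract, not the authors' code, and the exact trapezoid construction is an assumption.

    import numpy as np

    # Synthetic cross-section profile: horizontal position x (m), elevation z (m).
    x = np.linspace(0.0, 10.0, 21)
    z = 2.0 - 1.8 * np.exp(-0.5 * ((x - 5.0) / 1.2) ** 2)

    z_min = z.min()
    i_min = int(np.argmin(z))
    x_ext, z_ext = x[0], z[0]            # left extreme of the cross-section

    # Right-angled trapezoid for each left-bank point: parallel sides are the
    # heights above z_min at the extreme and at the point; width is their
    # horizontal separation. The bank limit maximizes this area.
    left = slice(0, i_min)
    areas = 0.5 * ((z[left] - z_min) + (z_ext - z_min)) * np.abs(x[left] - x_ext)
    k = int(np.argmax(areas))
    print(x[k], z[k])                    # candidate left gully limit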
Automated generation of weld path trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sizemore, John M.; Hinman-Sweeney, Elaine Marie; Ames, Arlo Leroy
2003-06-01
AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.
Frattaroli, Shannon; Schulman, Eric; McDonald, Eileen M; Omaki, Elise C; Shields, Wendy C; Jones, Vanya; Brewer, William
2018-05-17
Innovative strategies are needed to improve the prevalence of working smoke alarms in homes. To our knowledge, this is the first study to report on the effectiveness of Facebook advertising and automated telephone calls as population-level strategies to encourage an injury prevention behavior. We examine the effectiveness of Facebook advertising and automated telephone calls as strategies to enroll individuals in Baltimore City's Fire Department's free smoke alarm installation program. We directed our advertising efforts toward Facebook users eligible for the Baltimore City Fire Department's free smoke alarm installation program and all homes with a residential phone line included in Baltimore City's automated call system. The Facebook campaign targeted Baltimore City residents 18 years of age and older. In total, an estimated 300 000 Facebook users met the eligibility criteria. Facebook advertisements were delivered to users' desktop and mobile device newsfeeds. A prerecorded message was sent to all residential landlines listed in the city's automated call system. By the end of the campaign, the 3 advertisements generated 456 666 impressions reaching 130 264 Facebook users. Of the users reached, 4367 individuals (1.3%) clicked the advertisement. The automated call system included approximately 90 000 residential phone numbers. Participants attributed 25 smoke alarm installation requests to Facebook and 458 to the automated call. Facebook advertisements are a novel approach to promoting smoke alarms and appear to be effective in exposing individuals to injury prevention messages. However, converting Facebook message recipients to users of a smoke alarm installation program occurred infrequently in this study. Residents who participated in the smoke alarm installation program were more likely to cite the automated call as the impetus for their participation. Additional research is needed to understand the circumstances and strategies to effectively use the social networking site as a tool to convert passive users into active participants.
Web-based automation of green building rating index and life cycle cost analysis
NASA Astrophysics Data System (ADS)
Shahzaib Khan, Jam; Zakaria, Rozana; Aminuddin, Eeydzah; IzieAdiana Abidin, Nur; Sahamir, Shaza Rina; Ahmad, Rosli; Nafis Abas, Darul
2018-04-01
A sudden decline in financial markets and the economic meltdown have slowed adoption and lowered investor interest in green-certified buildings because of their higher initial costs. It is therefore essential to attract investors to further green building development through automated tools for construction projects. However, there has historically been little work on automating green building rating tools, an essential gap that motivates the development of an automated computerized programming tool. This paper presents proposed research aiming to develop an integrated web-based automated program that combines a green building rating assessment tool (MyCrest), green technology, and life cycle cost (LCC) analysis. It also identifies the MyCrest and LCC variables to be integrated and developed in a framework, which is then transformed into an automated computerized program. A mixed methodology of qualitative and quantitative surveys is planned to carry the MyCrest-LCC integration to an automated level. In this study, the preliminary literature review enriches understanding of the integration of green building rating tools (GBRT) with LCC. The outcome of this research paves the way for future researchers to integrate other efficient tools and parameters that contribute to green buildings and future agendas.
Friedman, S N; Bambrough, P J; Kotsarini, C; Khandanpour, N; Hoggard, N
2012-12-01
Despite the established role of MRI in the diagnosis of brain tumours, histopathological assessment remains the clinically used technique, especially for the glioma group. Relative cerebral blood volume (rCBV) is a dynamic susceptibility-weighted contrast-enhanced perfusion MRI parameter that has been shown to correlate to tumour grade, but assessment requires a specialist and is time consuming. We developed analysis software to determine glioma gradings from perfusion rCBV scans in a manner that is quick, easy and does not require a specialist operator. MRI perfusion data from 47 patients with different histopathological grades of glioma were analysed with custom-designed software. Semi-automated analysis was performed with a specialist and non-specialist operator separately determining the maximum rCBV value corresponding to the tumour. Automated histogram analysis was performed by calculating the mean, standard deviation, median, mode, skewness and kurtosis of rCBV values. All values were compared with the histopathologically assessed tumour grade. A strong correlation between specialist and non-specialist observer measurements was found. Significantly different values were obtained between tumour grades using both semi-automated and automated techniques, consistent with previous results. The raw (unnormalised) data single-pixel maximum rCBV semi-automated analysis value had the strongest correlation with glioma grade. Standard deviation of the raw data had the strongest correlation of the automated analysis. Semi-automated calculation of raw maximum rCBV value was the best indicator of tumour grade and does not require a specialist operator. Both semi-automated and automated MRI perfusion techniques provide viable non-invasive alternatives to biopsy for glioma tumour grading.
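The automated histogram analysis described above reduces to standard summary statistics. A minimal Python sketch on invented rCBV values follows, with the mode estimated from a histogram bin since the data are continuous; this is illustrative, not the authors' software.

    import numpy as np
    from scipy import stats

    rcbv = np.random.default_rng(1).gamma(shape=2.0, scale=1.5, size=500)  # invented

    counts, edges = np.histogram(rcbv, bins=32)
    mode_est = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

    summary = {
        'mean': float(np.mean(rcbv)),
        'std': float(np.std(rcbv, ddof=1)),   # raw-data std correlated best with grade
        'median': float(np.median(rcbv)),
        'mode': float(mode_est),              # histogram-bin estimate
        'skewness': float(stats.skew(rcbv)),
        'kurtosis': float(stats.kurtosis(rcbv)),
    }
    print(summary)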
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
DOT National Transportation Integrated Search
1997-07-01
This report describes the work of the study, Systems Planning for Automated Commercial Vehicle Licensing and Permitting Systems. The objective of this study is to define the market, organization, and resource requirements for a national program...
24 CFR 7.13 - Responsibilities of the Assistant Secretary for Administration.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... The Assistant Secretary for Administration shall: (a) Provide leadership in developing and maintaining personnel management policies, programs, automated systems, and procedures that will promote and ensure... the personnel management policies, programs, automated systems, and EEO procedures; (c) Participate at...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Office grant program managers. (i) Automated systems referred to in this instruction refers to the loan accounting systems; e.g., Program Loan Accounting System, Automated Multi-Housing Accounting System, and Dedicated Loan Origination System, from which loan and grant disbursements are ordered. (j) This subpart...
NASA Astrophysics Data System (ADS)
Houben, Georg J.; Blümel, Martin
2017-11-01
Porosity is a fundamental parameter in hydrogeology. The empirical method of Beyer and Schweiger (1969) allows the calculation of hydraulic conductivity and both the total and effective porosity from granulometric data. However, due to its graphical nature with type curves, it is tedious to apply and prone to reading errors. In this work, the type curves were digitized and emulated by mathematical functions. The latter were implemented in a spreadsheet and a Visual Basic program, allowing fast, automated application of the method to any number of samples.
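The digitize-and-emulate step can be sketched generically in Python: sample a type curve as point pairs and fit a closed-form function to replace manual chart reading. The points and the assumed power-law form below are invented placeholders, not the actual Beyer and Schweiger curves.

    import numpy as np

    x_dig = np.array([1.2, 2.0, 3.5, 6.0, 10.0])      # e.g., uniformity coefficient
    y_dig = np.array([0.33, 0.30, 0.27, 0.24, 0.21])  # e.g., effective porosity

    # Fit y = a * x**b by linear regression in log-log space.
    b, log_a = np.polyfit(np.log(x_dig), np.log(y_dig), 1)
    a = np.exp(log_a)

    def type_curve(x):
        return a * x ** b

    print(type_curve(4.0))   # automated lookup replacing the graphical reading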
Lee, Woonghee; Kim, Jin Hae; Westler, William M.; Markley, John L.
2011-01-01
Summary: PONDEROSA (Peak-picking Of Noe Data Enabled by Restriction of Shift Assignments) accepts input information consisting of a protein sequence, backbone and sidechain NMR resonance assignments, and 3D-NOESY (13C-edited and/or 15N-edited) spectra, and returns assignments of NOESY crosspeaks, distance and angle constraints, and a reliable NMR structure represented by a family of conformers. PONDEROSA incorporates and integrates external software packages (TALOS+, STRIDE and CYANA) to carry out different steps in the structure determination. PONDEROSA implements internal functions that identify and validate NOESY peak assignments and assess the quality of the calculated three-dimensional structure of the protein. The robustness of the analysis results from PONDEROSA's hierarchical processing steps that involve iterative interaction among the internal and external modules. PONDEROSA supports a variety of input formats: SPARKY assignment table (.shifts) and spectrum file formats (.ucsf), XEASY proton file format (.prot), and NMR-STAR format (.star). To demonstrate the utility of PONDEROSA, we used the package to determine 3D structures of two proteins: human ubiquitin and Escherichia coli iron-sulfur scaffold protein variant IscU(D39A). The automatically generated structural constraints and ensembles of conformers were as good as or better than those determined previously by much less automated means. Availability: The program, in the form of binary code along with tutorials and reference manuals, is available at http://ponderosa.nmrfam.wisc.edu/. Contact: whlee@nmrfam.wisc.edu; markley@nmrfam.wisc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21511715
NASA Astrophysics Data System (ADS)
Rasmussen, John C.; Bautista, Merrick; Tan, I.-Chih; Adams, Kristen E.; Aldrich, Melissa; Marshall, Milton V.; Fife, Caroline E.; Maus, Erik A.; Smith, Latisha A.; Zhang, Jingdan; Xiang, Xiaoyan; Zhou, Shaohua Kevin; Sevick-Muraca, Eva M.
2011-02-01
Recently, we demonstrated near-infrared (NIR) fluorescence imaging for quantifying real-time lymphatic propulsion in humans following intradermal injections of microdose amounts of indocyanine green. However computational methods for image analysis are underdeveloped, hindering the translation and clinical adaptation of NIR fluorescent lymphatic imaging. In our initial work we used ImageJ and custom MatLab programs to manually identify lymphatic vessels and individual propulsion events using the temporal transit of the fluorescent dye. In addition, we extracted the apparent velocities of contractile propagation and time periods between propulsion events. Extensive time and effort were required to analyze the 6-8 gigabytes of NIR fluorescent images obtained for each subject. To alleviate this bottleneck, we commenced development of ALFIA, an integrated software platform which will permit automated, near real-time analysis of lymphatic function using NIR fluorescent imaging. However, prior to automation, the base algorithms calculating the apparent velocity and period must be validated to verify that they produce results consistent with the proof-of-concept programs. To do this, both methods were used to analyze NIR fluorescent images of two subjects and the number of propulsive events identified, the average apparent velocities, and the average periods for each subject were compared. Paired Student's t-tests indicate that the differences between their average results are not significant. With the base algorithms validated, further development and automation of ALFIA can be realized, significantly reducing the amount of user interaction required, and potentially enabling the near real-time, clinical evaluation of NIR fluorescent lymphatic imaging.
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
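As an illustration of one correction scheme such a framework automates, a leading-order image-charge (Makov-Payne-type) term can be computed in a few lines. The constants and example values below are illustrative; this is not the package's implementation.

    # Leading-order correction E = q^2 * alpha_M / (2 * eps * L) for a charged
    # defect in a cubic periodic supercell.
    MADELUNG_SC = 2.8373     # Madelung constant, simple-cubic point-charge lattice

    def image_charge_correction_eV(q, L_ang, eps):
        # q: defect charge (elementary charges); L_ang: cell edge (Angstrom);
        # eps: static dielectric constant of the host.
        e2_over_ang = 14.399645   # e^2 / (4*pi*eps0) in eV*Angstrom
        return q ** 2 * MADELUNG_SC * e2_over_ang / (2.0 * eps * L_ang)

    # Example: a 2- defect in a 10 Angstrom cell with eps = 10.
    print(image_charge_correction_eV(q=-2, L_ang=10.0, eps=10.0))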
The Pros and Cons of Army Automation
2007-11-13
[Only fragments of this report survived text extraction: the running head "The Pros and Cons of Army Automation", standard report-documentation form fields, and a partial outline (I. Introduction (MSG (P) Dostie); II. Manual skills (MSG (P)…).]
Proceedings of the Second Texas Conference on Library Automation (Houston, March 27, 1969).
ERIC Educational Resources Information Center
Corbin, John B., Ed.
Four papers are included in these proceedings. The first three discuss specific on-going programs, including details of operation: (1) "Automation of Serials," by Shula Schwartz and Patricia A. Bottalico, reports a serials records automation at Texas Instruments Inc., Dallas, Texas; (2) "From Texana to Real-Time Automation," by…
Overview of automated enforcement in transportation
DOT National Transportation Integrated Search
1998-06-01
Automated enforcement is seen by some public agencies as a means to combat aggressive driving behaviors such as speeding or running red lights. Based upon a review of automated enforcement programs worldwide, several elements were found to be importa...
Lin, Steve; Turgulov, Anuar; Taher, Ahmed; Buick, Jason E; Byers, Adam; Drennan, Ian R; Hu, Samantha; J Morrison, Laurie
2016-10-01
Cardiopulmonary resuscitation (CPR) process measures research and quality assurance has traditionally been limited to the first 5 minutes of resuscitation due to significant costs in time, resources, and personnel from manual data abstraction. CPR performance may change over time during prolonged resuscitations, which represents a significant knowledge gap. Moreover, currently available commercial software outputs of CPR process measures are difficult to analyze. The objective was to develop and validate a software program to help automate the abstraction and transfer of CPR process measures data from electronic defibrillators for complete episodes of cardiac arrest resuscitation. We developed a software program to facilitate and help automate CPR data abstraction and transfer from electronic defibrillators for entire resuscitation episodes. Using an intermediary Extensible Markup Language export file, the automated software transfers CPR process measures data (electrocardiogram [ECG] number, CPR start time, number of ventilations, number of chest compressions, compression rate per minute, compression depth per minute, compression fraction, and end-tidal CO2 per minute). We performed an internal validation of the software program on 50 randomly selected cardiac arrest cases with resuscitation durations between 15 and 60 minutes. CPR process measures were manually abstracted and transferred independently by two trained data abstractors and by the automated software program, followed by manual interpretation of raw ECG tracings, treatment interventions, and patient events. Error rates and the time needed for data abstraction, transfer, and interpretation were measured for both manual and automated methods, compared to an additional independent reviewer. A total of 9,826 data points were each abstracted by the two abstractors and by the software program. Manual data abstraction resulted in a total of six errors (0.06%) compared to zero errors by the software program. The mean ± SD time measured per case for manual data abstraction was 20.3 ± 2.7 minutes compared to 5.3 ± 1.4 minutes using the software program (p = 0.003). We developed and validated an automated software program that efficiently abstracts and transfers CPR process measures data from electronic defibrillators for complete cardiac arrest episodes. This software will enable future cardiac arrest studies and quality assurance programs to evaluate the impact of CPR process measures during prolonged resuscitations. © 2016 by the Society for Academic Emergency Medicine.
Automated Corrosion Detection Program
2001-10-01
[Only search-result snippets of this report survived extraction. Recoverable references: Hidden Corrosion Detection Technology Assessment; a paper on the program presented at the Fourth Joint DoD/FAA/NASA Conference on Aging Aircraft, 2000; the AS&M PULSE system; and The Evaluation of Hidden Corrosion Detection Technologies on the Automated Corrosion Detection Program.]
Hirsch, Irl B; Parkin, Christopher G
2016-07-01
Manual calculation of bolus insulin dosages can be challenging for individuals treated with multiple daily insulin injections (MDI) therapy. Automated bolus calculator capability has recently been made available via enhanced blood glucose meters and smartphone apps. Use of this technology has been shown to improve glycemic control and reduce glycemic variability without changing hypoglycemia; however, the clinical utility of app-based bolus calculators has not been demonstrated. Moreover, recent evidence challenges the safety and efficacy of these smartphone apps. Although the ability to automatically calculate bolus insulin dosages addresses a critical need of MDI-treated individuals, this technology raises concerns about efficacy of treatment and the protection of patient safety. This article discusses key issues and considerations associated with automated bolus calculator use. © 2016 Diabetes Technology Society.
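The underlying arithmetic of such calculators is the standard meal-plus-correction formula; a hedged Python sketch with illustrative parameters follows. This is not any specific product's algorithm, and real parameter values must come from a clinician.

    def bolus_units(carbs_g, bg_mgdl, target_mgdl, icr, isf, iob=0.0):
        # carbs_g / icr covers the meal (icr: grams of carbs per unit insulin);
        # (bg - target) / isf corrects glucose (isf: mg/dL drop per unit);
        # iob subtracts insulin still active from earlier boluses.
        meal = carbs_g / icr
        correction = (bg_mgdl - target_mgdl) / isf
        return max(0.0, meal + correction - iob)

    # Illustrative example only: 60 g carbs, glucose 180 mg/dL, target 110 mg/dL.
    print(bolus_units(carbs_g=60, bg_mgdl=180, target_mgdl=110, icr=10, isf=35, iob=0.5))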
How to sharpen your automated tools.
DOT National Transportation Integrated Search
2014-12-01
New programs that claim to make flying more efficient have several things in common: new tasks for pilots, new flight deck displays, automated support tools, changes to ground automation, and displays for air traffic control. Training is one of the t...
MARC ES: a computer program for estimating medical information storage requirements.
Konoske, P J; Dobbins, R W; Gauker, E D
1998-01-01
During combat, documentation of medical treatment information is critical for maintaining continuity of patient care. However, knowledge of prior status and treatment of patients is limited to the information noted on a paper field medical card. The Multi-technology Automated Reader Card (MARC), a smart card, has been identified as a potential storage mechanism for casualty medical information. Focusing on data capture and storage technology, this effort developed a Windows program, MARC ES, to estimate storage requirements for the MARC. The program calculates storage requirements for a variety of scenarios using medical documentation requirements, casualty rates, and casualty flows and provides the user with a tool to estimate the space required to store medical data at each echelon of care for selected operational theaters. The program can also be used to identify the point at which data must be uploaded from the MARC if size constraints are imposed. Furthermore, this model can be readily extended to other systems that store or transmit medical information.
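The storage estimate itself is simple roll-up arithmetic. A toy Python sketch with invented record sizes and per-echelon record counts (not the model's validated inputs) illustrates the kind of calculation MARC ES performs:

    # Bytes needed on the smart card per casualty, accumulated by echelon of care.
    record_bytes = {'vitals': 64, 'treatment': 256, 'medication': 96}

    echelon_records = {                      # records generated per casualty
        1: {'vitals': 4, 'treatment': 1, 'medication': 1},
        2: {'vitals': 8, 'treatment': 3, 'medication': 2},
        3: {'vitals': 24, 'treatment': 6, 'medication': 6},
    }

    cumulative = 0
    for echelon, recs in sorted(echelon_records.items()):
        cumulative += sum(record_bytes[k] * n for k, n in recs.items())
        print(f"echelon {echelon}: {cumulative} bytes per casualty so far")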
Romer, Katherine A.; Kayombya, Guy-Richard; Fraenkel, Ernest
2007-01-01
WebMOTIFS provides a web interface that facilitates the discovery and analysis of DNA-sequence motifs. Several studies have shown that the accuracy of motif discovery can be significantly improved by using multiple de novo motif discovery programs and using randomized control calculations to identify the most significant motifs or by using Bayesian approaches. WebMOTIFS makes it easy to apply these strategies. Using a single submission form, users can run several motif discovery programs and score, cluster and visualize the results. In addition, the Bayesian motif discovery program THEME can be used to determine the class of transcription factors that is most likely to regulate a set of sequences. Input can be provided as a list of gene or probe identifiers. Used with the default settings, WebMOTIFS accurately identifies biologically relevant motifs from diverse data in several species. WebMOTIFS is freely available at http://fraenkel.mit.edu/webmotifs. PMID:17584794
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, J; Shi, F; Hrycushko, B
2015-06-15
Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, the planning physicist is required to manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g., whether the dwell positions are reasonable, and 10 dosimetric features, e.g., secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.
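A secondary dose calculation of the kind automated here can be sketched with the TG-43 point-source form, dose rate = Sk * Lambda * (G(r)/G(r0)) * g(r) with G(r) = 1/r^2. The dose-rate constant and radial dose function below are invented placeholders, not vendor or consensus data.

    import numpy as np

    Sk = 40000.0        # air-kerma strength, U (illustrative)
    LAMBDA = 1.108      # dose-rate constant, cGy/(h*U) (illustrative)
    r0 = 1.0            # reference distance, cm

    r_tab = np.array([0.5, 1.0, 2.0, 3.0, 4.0])       # cm
    g_r = np.array([1.02, 1.00, 0.96, 0.91, 0.85])    # radial dose function (invented)

    def dose_rate(r_cm):
        g = np.interp(r_cm, r_tab, g_r)
        return Sk * LAMBDA * (r0 ** 2 / r_cm ** 2) * g  # cGy/h at distance r_cm

    print(dose_rate(2.0))   # independent check value at a point of interest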
An intelligent CNC machine control system architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D.J.; Loucks, C.S.
1996-10-01
Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.
SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Rutel, I; Wu, D
Purpose: RadShield, a semi-automated diagnostic shielding software package, allows computed tomography (CT) exam rooms to be shielded more quickly and accurately than manual calculations. Last year, we presented RadShield's approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also overlaying vendor-provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions, and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield's isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield's semi-automated shielding recommendations were compared against a board-certified medical physicist's design using dose length product (DLP) and isodose curves. Results: The percentage error between the physicist's manual calculation and RadShield's semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist's selection of calculation points for recommending lead thickness was roughly the same as RadShield's for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
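The power-law fit the abstract mentions can be reproduced in miniature: taking radially sampled kerma values, K(r) = a*r^b becomes a straight line in log-log space. The sketch below uses idealized synthetic data; it is not RadShield code.

    import numpy as np

    # Synthetic radial kerma samples (arbitrary units); a real workflow
    # would extract these by radially scanning vendor isodose curves.
    r = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])   # distance from scanner
    K = 12.0 * r ** -1.9                            # idealized power-law falloff

    # Fit K(r) = a * r**b by linear regression in log-log space.
    b, log_a = np.polyfit(np.log(r), np.log(K), 1)
    a = np.exp(log_a)
    print(f"K(r) = {a:.2f} * r^{b:.2f}")  # recovers a = 12, b = -1.9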
The elaboration of motor programs for the automation of letter production.
Thibon, Laurence Séraphin; Gerber, Silvain; Kandel, Sonia
2018-01-01
We investigated how children learn to write letters. Letter writing evolves from stroke-by-stroke to whole-letter programming. Children of ages 6 to 9 (N=98) wrote letters of varying complexity on a digitizer. At ages 6 and 7 movement duration, dysfluency and trajectory increased with stroke number. This indicates that the motor program they activated mainly coded information on stroke production. Stroke number affected the older children's production much less, suggesting that they programmed stroke chunks or the whole letter. The fact that movement duration and dysfluency decreased from ages 6 to 8, and remained stable at ages 8 and 9 suggests that automation of letter writing begins at age 8. Automation seems to require the elaboration of stroke chunks and/or letter-sized motor programs. Copyright © 2017 Elsevier B.V. All rights reserved.
Automation Bias: Decision Making and Performance in High-Tech Cockpits
NASA Technical Reports Server (NTRS)
Mosier, Kathleen L.; Skitka, Linda J.; Heers, Susan; Burdick, Mark; Rosekind, Mark R. (Technical Monitor)
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable tools in high-technology cockpits, and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate "automation bias," a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation "events," or opportunities for automation-related omission and commission errors. Pilots who perceived themselves as "accountable" for their performance and strategies of interaction with the automation were more likely to double-check automated functioning against other cues, and less likely to commit errors. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later on many automatically plotted figures. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, including a comparison with manually processed moment tensors of all events with M≥3 in the Swiss catalogue over 16 years using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in order to be applicable in networks ranging from local to regional. The method can be applied either to the everyday network data flow or to process large existing earthquake catalogues and data sets.
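Point (iii) amounts to noise-covariance-weighted least squares for the linear moment tensor components, m = (G^T C^-1 G)^-1 G^T C^-1 d, so noisier recordings contribute less. A minimal sketch with synthetic data (not the Bayesian ISOLA code itself):

    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((200, 6))          # Green's functions (samples x 6 MT components)
    m_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])
    noise_var = np.linspace(0.01, 0.25, 200)   # per-sample variance from pre-event noise
    d = G @ m_true + rng.standard_normal(200) * np.sqrt(noise_var)

    # A diagonal data covariance downweights noisy recordings; a full
    # covariance matrix additionally acts as a frequency filter.
    Cinv = np.diag(1.0 / noise_var)
    m_hat = np.linalg.solve(G.T @ Cinv @ G, G.T @ Cinv @ d)
    print(np.round(m_hat, 2))  # close to m_true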
The Electrolyte Genome project: A big data approach in battery materials discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xiaohui; Jain, Anubhav; Rajput, Nav Nidhi
2015-06-01
We present a high-throughput infrastructure for the automated calculation of molecular properties with a focus on battery electrolytes. The infrastructure is largely open-source and handles both practical aspects (input file generation, output file parsing, and information management) as well as more complex problems (structure matching, salt complex generation, and failure recovery). Using this infrastructure, we have computed the ionization potential (IP) and electron affinities (EA) of 4830 molecules relevant to battery electrolytes (encompassing almost 55,000 quantum mechanics calculations) at the B3LYP/6-31+G* level. We describe automated workflows for computing redox potential, dissociation constant, and salt-molecule binding complex structure generation. We present routines for automatic recovery from calculation errors, which bring the failure rate from 9.2% to 0.8% for the QChem DFT code. Automated algorithms to check duplication between two arbitrary molecules and structures are described. We present benchmark data on basis sets and functionals on the G2-97 test set; one finding is that an IP/EA calculation method that combines PBE geometry optimization with B3LYP energy evaluation requires less computational cost and yields nearly identical results compared to a full B3LYP calculation, and could be suitable for the calculation of large molecules. Our data indicate that among the 8 functionals tested, XYGJ-OS and B3LYP are the two best functionals for predicting IP/EA, with RMSEs of 0.12 and 0.27 eV, respectively. Application of our automated workflow to a large set of quinoxaline derivative molecules shows that functional group effects and substitution position effects can be separated for the IP/EA of quinoxaline derivatives, and the most sensitive position is different for IP and EA. Published by Elsevier B.V.
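For reference, a vertical IP or EA is just an energy difference between charge states converted from hartree to eV; the total energies below are made-up placeholders, not values from the study.

    # Vertical IP/EA from total energies (hartree); the numbers are
    # hypothetical placeholders, not Electrolyte Genome results.
    HARTREE_TO_EV = 27.211386

    def vertical_ip(e_neutral, e_cation):
        return (e_cation - e_neutral) * HARTREE_TO_EV

    def vertical_ea(e_neutral, e_anion):
        return (e_neutral - e_anion) * HARTREE_TO_EV

    print(f"IP = {vertical_ip(-385.1210, -384.8705):.2f} eV")  # ~6.82 eV
    print(f"EA = {vertical_ea(-385.1210, -385.1580):.2f} eV")  # ~1.01 eV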
ERIC Educational Resources Information Center
Husung, William T., Jr.
Visits to 35 industries and professional offices throughout California and interviews of 219 draftsmen and supervisors were conducted to determine: (1) the effects of automation on the needs of industry for draftsmen with general versus specialized training, and (2) the curricular revisions in vocational drafting programs in junior colleges to…
Haas, Brian J; Salzberg, Steven L; Zhu, Wei; Pertea, Mihaela; Allen, Jonathan E; Orvis, Joshua; White, Owen; Buell, C Robin; Wortman, Jennifer R
2008-01-01
EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation. PMID:18190707
Automation and robotics technology for intelligent mining systems
NASA Technical Reports Server (NTRS)
Welsh, Jeffrey H.
1989-01-01
The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.
A Python tool to set up relative free energy calculations in GROMACS.
Klimovich, Pavel V; Mobley, David L
2015-11-01
Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper (LOMAP; Liu et al. in J Comput Aided Mol Des 27(9):755-770, 2013), recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge (Mobley et al. in J Comput Aided Mol Des 28(4):135-150, 2014). Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations.
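The quantity such a setup ultimately targets follows from a thermodynamic cycle: the relative hydration (or binding) free energy of B versus A equals the difference between the alchemical A-to-B transformation free energies evaluated in the two end environments. A sketch with made-up numbers:

    # Relative hydration free energy via a thermodynamic cycle:
    # ddG_hyd(A->B) = dG(A->B in water) - dG(A->B in vacuum).
    # The two leg values below are hypothetical.
    dG_transform_water  = -3.1   # kcal/mol
    dG_transform_vacuum = -0.7   # kcal/mol
    ddG_hyd = dG_transform_water - dG_transform_vacuum
    print(f"ddG_hyd(A->B) = {ddG_hyd:.1f} kcal/mol")  # -2.4 kcal/mol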
Toward the automated analysis of plasma physics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mynick, H.E.
1989-04-01
A program (CALC) is described, which carries out nontrivial plasma physics calculations in a manner intended to emulate the approach of a human theorist. This includes the initial process of gathering the relevant equations from a plasma knowledge base and then determining how to solve them. Solution of the sets of equations governing physics problems, which in general have a nonuniform, irregular structure not amenable to solution by standardized algorithmic procedures, is facilitated by an analysis of the structure of the equations and the relations among them. This often permits decompositions of the full problem into subproblems, and other simplifications in form, which render the resultant subsystems soluble by more standardized tools. CALC's operation is illustrated by a detailed description of its treatment of a sample plasma calculation. 5 refs., 3 figs.
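One way to mimic the decomposition step CALC performs is to order the equations by which unknowns they determine and depend on; independent quantities can then be solved first. A toy Python sketch (the quantities and dependencies are invented, and CALC's actual structural analysis is far richer):

    from graphlib import TopologicalSorter

    # Each equation is keyed by the unknown it determines, mapped to the
    # set of other unknowns it depends on (all invented for illustration).
    deps = {
        "density":     set(),
        "temperature": set(),
        "pressure":    {"density", "temperature"},
        "drift":       {"pressure"},
    }
    order = list(TopologicalSorter(deps).static_order())
    print(order)  # e.g. ['density', 'temperature', 'pressure', 'drift']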
ELSA: An integrated, semi-automated nebular abundance package
NASA Astrophysics Data System (ADS)
Johnson, Matthew D.; Levitt, Jesse S.; Henry, Richard B. C.; Kwitter, Karen B.
We present ELSA, a new modular software package, written in C, to analyze and manage spectroscopic data from emission-line objects. In addition to calculating plasma diagnostics and abundances from nebular emission lines, the software provides a number of convenient features including the ability to ingest logs produced by IRAF's splot task, to semi-automatically merge spectra in different wavelength ranges, and to automatically generate various data tables in machine-readable or LaTeX format. ELSA features a highly sophisticated interstellar reddening correction scheme that takes into account temperature and density effects as well as He II contamination of the hydrogen Balmer lines. Abundance calculations are performed using a 5-level atom approximation with recent atomic data, based on R. Henry's ABUN program. Downloading and detailed documentation for all aspects of ELSA are available at the following URL:
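The heart of any such reddening correction is the Balmer decrement. Under the common convention where f(Hbeta) = 0 and the Case B intrinsic Halpha/Hbeta ratio is 2.86, the logarithmic extinction follows from the observed ratio; the sketch below uses a representative f(Halpha) = -0.32 and ignores the temperature, density, and He II refinements ELSA adds.

    import math

    def c_hbeta(observed_ratio, intrinsic_ratio=2.86, f_halpha=-0.32):
        """Logarithmic extinction c(Hbeta) from the Balmer decrement."""
        return math.log10(observed_ratio / intrinsic_ratio) / (-f_halpha)

    def deredden(obs_ratio, c, f_lambda):
        """Correct a line ratio (relative to Hbeta) for reddening."""
        return obs_ratio * 10 ** (c * f_lambda)

    c = c_hbeta(3.9)                           # observed Halpha/Hbeta = 3.9
    print(round(c, 2))                         # ~0.42
    print(round(deredden(3.9, c, -0.32), 2))   # recovers ~2.86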
2016-03-01
DOD MAJOR AUTOMATED INFORMATION SYSTEMS: Improvements Can Be Made in Reporting Critical Changes and Clarifying Leadership Responsibility. The program's estimated development cost increased by 45 percent over the original acquisition program baseline due to program scope changes... Why GAO Did This Study: The National...
ERIC Educational Resources Information Center
Donoghue, John R.
2015-01-01
At the heart of van der Linden's approach to automated test assembly (ATA) is a linear programming/integer programming (LP/IP) problem. A variety of IP solvers are available, ranging in cost from free to hundreds of thousands of dollars. In this paper, I compare several approaches to solving the underlying IP problem. These approaches range from…
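A toy version of the underlying IP problem can be stated in a few lines, for example with the open-source PuLP modeler (an illustration, not one of the solvers compared in the paper): maximize total item information subject to a fixed test length and content coverage.

    import pulp

    # Toy test assembly: pick 3 of 6 items maximizing information, with at
    # least one item per content area (all data invented).
    info    = [0.9, 0.7, 0.8, 0.4, 0.6, 0.5]
    content = ["algebra", "algebra", "geometry", "geometry", "stats", "stats"]

    prob = pulp.LpProblem("ATA", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(6)]
    prob += pulp.lpSum(info[i] * x[i] for i in range(6))        # objective
    prob += pulp.lpSum(x) == 3                                  # test length
    for area in set(content):
        prob += pulp.lpSum(x[i] for i in range(6) if content[i] == area) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([i for i in range(6) if x[i].value() == 1])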
DataForge: Modular platform for data storage and analysis
NASA Astrophysics Data System (ADS)
Nozik, Alexander
2018-04-01
DataForge is a framework for automated data acquisition, storage, and analysis built on modern applied-programming practices. The aim of DataForge is to automate standard tasks such as parallel data processing, logging, output sorting, and distributed computing. The framework also makes extensive use of declarative programming principles via a metadata concept, which allows a degree of metaprogramming and improves the reproducibility of results.
Proteomics wants cRacker: automated standardized data analysis of LC-MS derived proteomic data.
Zauber, Henrik; Schulze, Waltraud X
2012-11-02
The large-scale analysis of thousands of proteins under various experimental conditions or in mutant lines has gained more and more importance in hypothesis-driven scientific research and systems biology in the past years. Quantitative analysis by large-scale proteomics using modern mass spectrometry usually results in long lists of peptide ion intensities. The main interest for most researchers, however, is to draw conclusions on the protein level. Postprocessing and combining peptide intensities of a proteomic data set requires expert knowledge, and the often repetitive and standardized manual calculations can be time-consuming. The analysis of complex samples can result in very large data sets (lists with several thousand to 100,000 entries of different peptides) that cannot easily be analyzed using standard spreadsheet programs. To improve the speed and consistency of the analysis of LC-MS derived proteomic data, we developed cRacker. cRacker is an R-based program for automated downstream proteomic data analysis, including data normalization strategies for metabolic labeling and label-free quantitation. In addition, cRacker includes basic statistical analysis, such as clustering of data, or ANOVA and t tests for comparison between treatments. Results are presented in editable graphic formats and in list files.
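As a minimal stand-in for one of the normalization strategies such a pipeline might apply (not cRacker's actual algorithm), each sample column can be rescaled so its median intensity matches the global median:

    import pandas as pd

    # Toy label-free normalization: scale each sample (column) so its
    # median peptide intensity equals the global median.
    intensities = pd.DataFrame({
        "sample_A": [1200.0, 850.0, 430.0, 90.0],
        "sample_B": [2400.0, 1700.0, 880.0, 200.0],
    }, index=["pep1", "pep2", "pep3", "pep4"])

    col_medians = intensities.median()
    normalized = intensities / col_medians * col_medians.median()
    print(normalized)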
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S; Guerrero, M; Zhang, B
Purpose: To implement a comprehensive non-measurement-based verification program for patient-specific IMRT QA. Methods: Based on published guidelines, a robust IMRT QA program should assess the following components: 1) accuracy of dose calculation, 2) accuracy of data transfer from the treatment planning system (TPS) to the record-and-verify (RV) system, 3) treatment plan deliverability, and 4) accuracy of plan delivery. Results: We have implemented an IMRT QA program that consists of four components: 1) an independent re-calculation of the dose distribution in the patient anatomy with a commercial secondary dose calculation program, Mobius3D (Mobius Medical Systems, Houston, TX), with dose accuracy evaluation using gamma analysis, PTV mean dose, PTV coverage to 95%, and organ-at-risk mean dose; 2) an automated in-house-developed plan comparison system that compares all relevant plan parameters, such as MU, MLC position, beam isocenter position, collimator, gantry, couch, field size settings, and bolus placement, between the plan and the RV system; 3) use of the RV system to check plan deliverability, further confirmed using the "mode-up" function on the treatment console for plans receiving warnings; and 4) implementation of a comprehensive weekly MLC QA, in addition to routine accelerator monthly and daily QA. Among 1200 verifications, there were 9 cases of suspicious calculations, 5 cases of delivery failure, no data transfer errors, and no failures of the weekly MLC QA. The 9 suspicious cases were due to the PTV extending to the skin or to heterogeneity correction effects, which would not have been caught using phantom measurement-based QA. The delivery failures were due to rounding variation of MLC positions between the planning system and the RV system. Conclusion: A very efficient, yet comprehensive, non-measurement-based patient-specific QA program has been implemented and used clinically for about 18 months with excellent results.
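Component 2, the plan-versus-RV comparison, reduces in essence to checking each exported parameter against its verified counterpart, with tolerances on continuous values. A simplified sketch (field names and tolerances are hypothetical, and a clinical system would read these values from DICOM-RT rather than dictionaries):

    # Toy plan-vs-record-and-verify comparison; all field names and
    # tolerances below are hypothetical.
    TOL = {"MU": 0.1, "gantry_deg": 0.1, "collimator_deg": 0.1}

    plan = {"MU": 245.3, "gantry_deg": 180.0, "collimator_deg": 10.0, "bolus": "none"}
    rv   = {"MU": 245.4, "gantry_deg": 180.0, "collimator_deg": 10.0, "bolus": "none"}

    for key, planned in plan.items():
        verified = rv[key]
        if isinstance(planned, float):
            ok = abs(planned - verified) <= TOL.get(key, 0.0)
        else:
            ok = planned == verified
        print(f"{key}: {'OK' if ok else 'MISMATCH'} ({planned} vs {verified})")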
Speech and Language Therapy Under an Automated Stimulus Control System.
ERIC Educational Resources Information Center
Garrett, Edgar Ray
Programed instruction for speech and language therapy, based upon stimulus control programing and presented by a completely automated teaching machine, was evaluated with 32 mentally retarded children, 20 children with language disorders (childhood aphasia), six adult aphasics, and 60 normal elementary school children. Posttesting with the…
1991-09-01
A2352344 Layup Cover Sheets/Inspect. A2352345 Perform Automated Tape Laying Operations/Inspect: The tape is laid in 3-12 inch strips along the surface of the bond mold. The NC program is...
ERIC Educational Resources Information Center
Bradley, Lucy K.; Cook, Jonneen; Cook, Chris
2011-01-01
North Carolina State University has incorporated many aspects of volunteer program administration and reporting into an on-line solution that integrates impact reporting into daily program management. The Extension Master Gardener Intranet automates many of the administrative tasks associated with volunteer management, increasing efficiency, and…
Performance Analysis of GAME: A Generic Automated Marking Environment
ERIC Educational Resources Information Center
Blumenstein, Michael; Green, Steve; Fogelman, Shoshana; Nguyen, Ann; Muthukkumarasamy, Vallipuram
2008-01-01
This paper describes the Generic Automated Marking Environment (GAME) and provides a detailed analysis of its performance in assessing student programming projects and exercises. GAME has been designed to automatically assess programming assignments written in a variety of languages based on the "structure" of the source code and the correctness…
Automated Assessment in a Programming Tools Course
ERIC Educational Resources Information Center
Fernandez Aleman, J. L.
2011-01-01
Automated assessment systems can be useful for both students and instructors. Ranking and immediate feedback can have a strongly positive effect on student learning. This paper presents an experience using automatic assessment in a programming tools course. The proposal aims at extending the traditional use of an online judging system with a…
Automated Manufacturing/Robotics Technology: Certificate and Associate Degree Programs.
ERIC Educational Resources Information Center
McQuay, Paul L.
A description is provided of the Automated Manufacturing/Robotics program to be offered at Delaware County Community College beginning in September 1984. Section I provides information on the use of reprogramable industrial robots in manufacturing and the rapid changes in production that can be effected through the application of automated…
Factors to Consider When Implementing Automated Software Testing
2016-11-10
...programming, e.g., Java or Visual Basic. Subject Matter Experts (SMEs) with a firm grasp of the application being automated. 2. Additional costs for setup (e.g., ...). ...Knowledge, Skills, and Abilities (KSA) required (e.g., Test and Evaluation). 2. Analyze programming skills needed (e.g., Java, C, C++, Visual Basic). 3. Compose team – testers...
Devine, Emily Beth; Van Eaton, Erik; Zadworny, Megan E; Symons, Rebecca; Devlin, Allison; Yanez, David; Yetisgen, Meliha; Keyloun, Katelyn R; Capurro, Daniel; Alfonso-Cristancho, Rafael; Flum, David R; Tarczy-Hornoch, Peter
2018-05-22
The availability of high fidelity electronic health record (EHR) data is a hallmark of the learning health care system. Washington State's Surgical Care Outcomes and Assessment Program (SCOAP) is a network of hospitals participating in quality improvement (QI) registries wherein data are manually abstracted from EHRs. To create the Comparative Effectiveness Research and Translation Network (CERTAIN), we semi-automated SCOAP data abstraction using a centralized federated data model, created a central data repository (CDR), and assessed whether these data could be used as real world evidence for QI and research. Here we describe the validation processes, the complexities involved, and lessons learned. Investigators installed a commercial CDR to retrieve and store data from disparate EHRs. Manual and automated abstraction systems were run in parallel (10/2012-7/2013) and validated in three phases using the EHR as the gold standard: 1) ingestion, 2) standardization, and 3) concordance of automated versus manually abstracted cases. Information retrieval statistics were calculated. Four unaffiliated health systems provided data. Between 6 and 15 percent of data elements were abstracted: 51 to 86 percent from structured data, the remainder using natural language processing (NLP). In phase 1, data ingestion from 12 out of 20 feeds reached 95 percent accuracy. In phase 2, 55 percent of structured data elements performed with 96 to 100 percent accuracy; NLP performed with 89 to 91 percent accuracy. In phase 3, concordance ranged from 69 to 89 percent. Information retrieval statistics were consistently above 90 percent. Semi-automated data abstraction may be useful, although raw data collected as a byproduct of health care delivery are not immediately available for use as real world evidence. New approaches to gathering and analyzing extant data are required.
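The information retrieval statistics referred to are precision, recall, and their harmonic mean; given true-positive, false-positive, and false-negative counts from the concordance comparison, they are computed as below (the counts shown are illustrative, not the study's):

    # Information retrieval statistics for abstraction validation;
    # the counts are illustrative only.
    tp, fp, fn = 920, 40, 55

    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")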
AUTOMATION OF EXPERIMENTS WITH A HAND-HELD PROGRAMMABLE CALCULATOR
Technological developments have dramatically reduced the cost of data collection, experimental control and computation. Products are now available which allow automation of experiments both in the laboratory and in the field at substantially lower cost and with less technical exp...
Space missions for automation and robotics technologies (SMART) program
NASA Technical Reports Server (NTRS)
Ciffone, D. L.; Lum, H., Jr.
1985-01-01
The motivations, features and expected benefits and applications of the NASA SMART program are summarized. SMART is intended to push the state of the art in automation and robotics, a goal that Public Law 98-371 mandated be an inherent part of the Space Station program. The effort would first require tests of sensors, manipulators, computers and other subsystems as seeds for the evolution of flight-qualified subsystems. Consideration is currently being given to robotics systems as add-ons to the RMS, MMU and OMV and a self-contained automation and robotics module which would be tended by astronaut visits. Probable experimentation and development paths that would be pursued with the equipment are discussed, along with the management structure and procedures for the program. The first hardware flight is projected for 1989.
Automated external defibrillators in National Collegiate Athletic Association Division I Athletics.
Coris, Eric E; Sahebzamani, Frances; Walz, Steve; Ramirez, Arnold M
2004-01-01
Sudden cardiac death is the leading cause of death in athletes. Evidence on current sudden cardiac death prevention through preparticipation history, physicals, and noninvasive cardiovascular diagnostics has demonstrated a low sensitivity for detection of athletes at high risk of sudden cardiac death. Data are lacking on automated external defibrillator programs specifically initiated to respond to rare dysrhythmia in younger, relatively low-risk populations. Surveys were mailed to the head athletic trainers of all National Collegiate Athletic Association Division I athletics programs listed in the National Athletic Trainers' Association directory. In all, 303 surveys were mailed; 186 departments (61%) responded. Seventy-two percent (133) of responding National Collegiate Athletic Association Division I athletics programs have access to automated external defibrillator units; 54% (101) own their units. Proven medical benefit (55%), concern for liability (51%), and affordability (29%) ranked highest in frequency of reasons for automated external defibrillator purchase. Unit cost (odds ratio = 1.01; 95% confidence interval, 1.01-1.0), donated units (odds ratio = 1.92; confidence interval, 3.66-1.01), institution size (odds ratio = 0.0001; confidence interval, 1.3E-4 to 2.2E-5), and proven medical benefit of automated external defibrillators (odds ratio = 24; confidence interval, 72-8.1) were the most significant predictors of departmental defibrillator ownership. Emergency medical service response time and sudden cardiac death event history were not significantly predictive of departmental defibrillator ownership. The majority of automated external defibrillator interventions occurred on nonathletes. Many athletics medicine programs are obtaining automated external defibrillators without apparent criteria for determination of need. Usage and maintenance policies vary widely among departments with unit ownership or access. Programs need to approach the issue of unit acquisition and implementation with knowledge of the surrounding emergency medical service system, the geography of their individual sports medicine facilities, the number and relative risk of their athletes, and budgetary constraints.
Challenges and Demands on Automated Software Revision
NASA Technical Reports Server (NTRS)
Bonakdarpour, Borzoo; Kulkarni, Sandeep S.
2008-01-01
In the past three decades, automated program verification has undoubtedly been one of the most successful contributions of formal methods to software development. However, when verification of a program against a logical specification discovers bugs in the program, manual manipulation of the program is needed in order to repair it. Thus, in the face of existence of numerous unverified and uncertified legacy software in virtually any organization, tools that enable engineers to automatically verify and subsequently fix existing programs are highly desirable. In addition, since requirements of software systems often evolve during the software life cycle, the issue of incomplete specification has become a customary fact in many design and development teams. Thus, automated techniques that revise existing programs according to new specifications are of great assistance to designers, developers, and maintenance engineers. As a result, incorporating program synthesis techniques where an algorithm generates a program, that is correct-by-construction, seems to be a necessity. The notion of manual program repair described above turns out to be even more complex when programs are integrated with large collections of sensors and actuators in hostile physical environments in the so-called cyber-physical systems. When such systems are safety/mission-critical (e.g., in avionics systems), it is essential that the system reacts to physical events such as faults, delays, signals, attacks, etc., so that the system specification is not violated. In fact, since it is impossible to anticipate all possible such physical events at design time, it is highly desirable to have automated techniques that revise programs with respect to newly identified physical events according to the system specification.
Peak picking multidimensional NMR spectra with the contour geometry based algorithm CYPICK.
Würz, Julia M; Güntert, Peter
2017-01-01
The automated identification of signals in multidimensional NMR spectra is a challenging task, complicated by signal overlap, noise, and spectral artifacts, for which no universally accepted method is available. Here, we present a new peak picking algorithm, CYPICK, that follows, as far as possible, the manual approach taken by a spectroscopist who analyzes peak patterns in contour plots of the spectrum, but is fully automated. Human visual inspection is replaced by the evaluation of geometric criteria applied to contour lines, such as local extremality, approximate circularity (after appropriate scaling of the spectrum axes), and convexity. The performance of CYPICK was evaluated for a variety of spectra from different proteins by systematic comparison with peak lists obtained by other, manual or automated, peak picking methods, as well as by analyzing the results of automated chemical shift assignment and structure calculation based on input peak lists from CYPICK. The results show that CYPICK yielded peak lists that compare in most cases favorably to those obtained by other automated peak pickers with respect to the criteria of finding a maximal number of real signals, a minimal number of artifact peaks, and maximal correctness of the chemical shift assignments and the three-dimensional structure obtained by fully automated assignment and structure calculation.
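The circularity criterion, for instance, can be expressed as the classic shape factor 4*pi*A/P^2, which is 1 for a circle and smaller for elongated or irregular contours. A sketch of that single geometric test (not the CYPICK implementation):

    import numpy as np

    def circularity(xy):
        """4*pi*A/P**2 for a closed polygonal contour (1.0 = circle)."""
        x, y = xy[:, 0], xy[:, 1]
        area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        perim = np.sum(np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])))
        return 4 * np.pi * area / perim ** 2

    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    circle = np.c_[np.cos(theta), np.sin(theta)]
    print(round(circularity(circle), 3))  # ~1.0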
Automated size-specific CT dose monitoring program: assessing variability in CT dose.
Christianson, Olav; Li, Xiang; Frush, Donald; Samei, Ehsan
2012-11-01
The potential health risks associated with low levels of ionizing radiation have created a movement in the radiology community to optimize computed tomography (CT) imaging protocols to use the lowest radiation dose possible without compromising the diagnostic usefulness of the images. Despite efforts to use appropriate and consistent radiation doses, studies suggest that a great deal of variability in radiation dose exists both within and between institutions for CT imaging. In this context, the authors have developed an automated size-specific radiation dose monitoring program for CT and used this program to assess variability in size-adjusted effective dose from CT imaging. The authors' radiation dose monitoring program operates on an independent Health Insurance Portability and Accountability Act (HIPAA)-compliant dosimetry server. Digital Imaging and Communications in Medicine (DICOM) routing software is used to isolate dose report screen captures and scout images for all incoming CT studies. Effective dose conversion factors (k-factors) are determined based on the protocol, and optical character recognition is used to extract the CT dose index and dose-length product. The patient's thickness is obtained by applying an adaptive thresholding algorithm to the scout images and is used to calculate the size-adjusted effective dose (ED(adj)). The radiation dose monitoring program was used to collect data on 6351 CT studies from three scanner models (GE Lightspeed Pro 16, GE Lightspeed VCT, and GE Definition CT750 HD) and two institutions over a one-month period and to analyze the variability in ED(adj) between scanner models and across institutions. No significant difference was found between computer measurements of patient thickness and observer measurements (p = 0.17), and the average difference between the two methods was less than 4%. Applying the size correction resulted in ED(adj) values that differed by up to 44% from effective dose estimates that were not adjusted for patient size. Additionally, considerable differences were noted in ED(adj) distributions between scanners, with scanners employing iterative reconstruction exhibiting significantly lower ED(adj) (range: 9%-64%). Finally, a significant difference (up to 59%) in ED(adj) distributions was observed between institutions, indicating the potential for dose reduction. The authors developed a robust automated size-specific radiation dose monitoring program for CT. Using this program, significant differences in ED(adj) were observed between scanner models and across institutions. This new dose monitoring program offers a unique tool for improving quality assurance and standardization both within and across institutions.
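The core dose arithmetic is effective dose = k-factor x DLP, followed by an adjustment for patient size; the sketch below uses a generic exponential size correction purely for illustration, since the paper's actual size-correction model is not reproduced here.

    import math

    def effective_dose(dlp_mgy_cm, k_msv_per_mgy_cm):
        """Effective dose from dose-length product and a protocol k-factor."""
        return dlp_mgy_cm * k_msv_per_mgy_cm

    def size_adjusted(ed_msv, thickness_cm, reference_cm=30.0, alpha=0.04):
        # Illustrative only: thinner patients receive a larger size-adjusted
        # dose for the same scanner output.
        return ed_msv * math.exp(alpha * (reference_cm - thickness_cm))

    ed = effective_dose(450.0, 0.015)  # k ~ 0.015 mSv/(mGy*cm) for abdomen
    print(round(size_adjusted(ed, 24.0), 2))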
Study 2.5 final report. DORCA computer program. Volume 5: Analysis report
NASA Technical Reports Server (NTRS)
Campbell, N.
1972-01-01
A modification of the Dynamic Operational Requirements and Cost Analysis Program to perform traffic analyses of the automated satellite program is described. Inherent in the analyses of the automated satellite program was the assumption that a number of vehicles were available to perform any or all of the missions within the satellite program. The objective of the modification was to select a vehicle or group of vehicles for performing all of the missions at the lowest possible cost. A vehicle selection routine and the capability to simulate ground based vehicle operational modes were incorporated into the program.
NASA Technical Reports Server (NTRS)
1993-01-01
The information required by a programmer using the Minimum Hamiltonian AScent Trajectory Evaluation (MASTRE) Program is provided. This document enables the programmer to either modify the program or convert it to computers other than the VAX computer. Documentation for each subroutine or function, based on providing the definitions of the variables and a source listing, is included. Questions concerning the equations, techniques, or input requirements should be answered by either the Engineering or User's manuals. Three appendices are also included, which provide a listing of the Root-Sum-Square (RSS) program, a listing of subroutine names and definitions used in the MASTRE User Friendly Interface (UFI) Program, and a listing of the subroutine names and definitions used in the Mass Properties Program. The RSS Program is used to aid in the performance of dispersion analyses: it reads a file generated by the MASTRE Program, calculates dispersion parameters, and generates output tables and output plot files. The UFI Program provides a screen user interface to aid the user in providing input to the model. The Mass Properties Program defines the mass properties data for the MASTRE program through the use of user interface software.
Robotics/Automated Systems Technicians.
ERIC Educational Resources Information Center
Doty, Charles R.
Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…
Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J
2013-07-01
Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed in Tecan Freedom EVO liquid handling software to facilitate automated sample preparation and the LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of the automation scripts allows users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs supporting discovery biotherapeutic programs. The results demonstrated that the modular scripts provide flexibility in adapting to various LBA formats and significant time savings in script writing and scientist training. Data generated by the automated process were comparable to those generated by the manual process, while bioanalytical productivity was significantly improved using the modular robotic scripts.
Mira, Adriana; Bretón-López, Juana; García-Palacios, Azucena; Quero, Soledad; Baños, Rosa María; Botella, Cristina
2017-01-01
Purpose: The purpose of this study was to analyze the efficacy of an Internet-based program for depressive symptoms using automated support by information and communication technologies (ICTs) and human support. Patients and methods: An Internet-based program was used to teach adaptive ways to cope with depressive symptoms and daily problems. A total of 124 participants who were experiencing at least one stressful event that caused interference in their lives, many of whom had clinically significant depressive symptoms, were randomly assigned to either an intervention group with ICT support (automated mobile phone messages, automated emails, and continued feedback through the program), an intervention group with ICT support plus human support (a brief weekly support phone call without clinical content), or a waiting-list control. At pre-, post-, and 12-month follow-up, they completed depression, anxiety, positive and negative affect, and perceived stress measures. Results were analyzed using both intention-to-treat and completers data. The majority were women (67.7%), with a mean age of 35.6 years (standard deviation = 9.7). Results: The analysis showed that the two intervention groups improved significantly from pre- to posttreatment compared with the control group. Furthermore, improvements were maintained at the 12-month follow-up. Adherence and satisfaction with the program were high in both conditions. Conclusion: The Internet-based program was effective and well accepted, with and without human support, showing that ICT-based automated support may be useful. It is essential to continue to study other ICT strategies for providing support. PMID:28408833
Nishino, Kazuaki; Yoshida, Fujiko; Nitta, Akari; Saito, Mieko; Saito, Kazuuchi
2013-12-01
To evaluate retrospectively seasonal fluctuations of transient intraocular pressure (IOP) elevation after automated visual field examination in patients with primary open-angle glaucoma (POAG). We reviewed 53 consecutive patients with POAG who visited Kaimeido Ophthalmic and Dental Clinic from January 2011 to March 2013: 21 men and 32 women aged 67.7 +/- 11.2 years. The patients were divided into 4 groups (spring, summer, autumn, and winter) according to the month of automated visual field examination, and both eyes of each patient were enrolled. IOP was measured immediately after automated visual field examination (vf IOP) and compared with the average IOP from the previous 3 months (pre IOP) and with the average IOP from the following 3 months (post IOP) in each season. The IOP elevation rate was defined as (vf IOP - pre IOP)/pre IOP x 100% and calculated for each season (paired t test). Additionally, the correlation between mean deviation (MD) and IOP elevation rate was evaluated (single regression analysis). Exclusion criteria were patients who received cataract surgery during this study or had a history of any previous glaucoma surgery. The automated visual field test was performed with a Humphrey field analyzer and the Central 30-2 FASTPAC threshold program. The average vf IOP was 14.5 +/- 2.5 mmHg, higher than the pre IOP of 13.8 +/- 2.4 mmHg (p < 0.0001) and the post IOP of 13.8 +/- 2.2 mmHg (p < 0.0001). The IOP elevation rate in each season was 4.1 +/- 11.6% in spring (n = 22, p = 0.18), 0.1 +/- 9.9% in summer (n = 16, p = 1.0), 5.0 +/- 13.8% in autumn (n = 30, p = 0.11), and 10.6 +/- 8.8% in winter (n = 38, p < 0.0001). MD was not correlated with the IOP elevation rate (p = 0.17). Patients with POAG show a transient IOP elevation after automated visual field examination, especially in the winter but not in the summer.
NASA Astrophysics Data System (ADS)
Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven
2014-08-01
Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset, and the equivalent three-dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min, whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculated using our automated method agree with field measurements to within a mean ± standard error of 1.9° ± 2.2° and 4.4° ± 2.6°, respectively. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.
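The dip and dip-direction step is essentially a plane fit: the normal of the best-fit plane through the 3D points of a structure gives both angles. A sketch using a singular value decomposition, assuming x = east, y = north, z = up (not the authors' code):

    import numpy as np

    def dip_and_dip_direction(points):
        """Best-fit plane through Nx3 points (x=east, y=north, z=up);
        returns (dip, dip direction) in degrees."""
        centered = points - points.mean(axis=0)
        normal = np.linalg.svd(centered)[2][-1]  # direction of least variance
        if normal[2] < 0:
            normal = -normal                     # make the normal point up
        dip = np.degrees(np.arccos(normal[2]))
        dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
        return dip, dip_dir

    pts = np.array([[0, 0, 0], [1, 0, -0.5], [0, 1, 0], [1, 1, -0.5]], float)
    print(dip_and_dip_direction(pts))  # ~26.6 deg dip toward 090 (east)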
Automated Grading System for Evaluation of Superficial Punctate Keratitis Associated With Dry Eye.
Rodriguez, John D; Lane, Keith J; Ousler, George W; Angjeli, Endri; Smith, Lisa M; Abelson, Mark B
2015-04-01
To develop an automated method of grading fluorescein staining that accurately reproduces the clinical grading system currently in use. From the slit lamp photograph of the fluorescein-stained cornea, the region of interest was selected and the punctate dot number calculated using software developed with the OpenCV computer vision library. Images (n = 229) were then divided into six incremental severity categories based on computed scores. The final selection of 54 photographs represented the full range of scores: nine images from each of six categories. These were then evaluated by three investigators using a clinical 0 to 4 corneal staining scale. Pearson correlations were calculated to compare investigator scores, and mean investigator and automated scores. Lin's Concordance Correlation Coefficients (CCC) and Bland-Altman plots were used to assess agreement between methods and between investigators. Pearson's correlation between investigators was 0.914; the mean CCC between investigators was 0.882. Bland-Altman analysis indicated that scores assessed by investigator 3 were significantly higher than those of investigators 1 and 2 (paired t-test). The predicted grade was calculated to be: Gpred = 1.48 log(Ndots) - 0.206. The two-point Pearson's correlation coefficient between the methods was 0.927 (P < 0.0001). The CCC between the predicted automated score Gpred and the mean investigator score was 0.929, 95% confidence interval (0.884-0.957). Bland-Altman analysis did not indicate bias. The difference in SD between clinical and automated methods was 0.398. An objective, automated analysis of corneal staining provides a quality assurance tool that can be used to substantiate clinical grading of key corneal staining endpoints in multicentered clinical trials of dry eye.
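Applying the reported regression is straightforward; the sketch below assumes the logarithm is base 10, which the abstract does not state explicitly.

    import math

    # Predicted staining grade from the punctate dot count, using the
    # regression reported above (base-10 logarithm assumed).
    def predicted_grade(n_dots):
        return 1.48 * math.log10(n_dots) - 0.206

    for n in (5, 50, 500):
        print(n, round(predicted_grade(n), 2))  # spans roughly grade 0.8-3.8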
Zhang, Lijun; Sy, Mary Ellen; Mai, Harry; Yu, Fei; Hamilton, D Rex
2015-01-01
To compare the prediction error after toric intraocular lens (IOL) (Acrysof IQ) implantation using corneal astigmatism measurements obtained with an IOLMaster automated keratometer and a Galilei dual rotating camera Scheimpflug-Placido tomographer. Jules Stein Eye Institute, University of California Los Angeles, Los Angeles, California, USA. Retrospective case series. The predicted residual astigmatism after toric IOL implantation was calculated using preoperative astigmatism values from an automated keratometer and the total corneal power (TCP) determined by ray tracing through the measured anterior and posterior corneal surfaces using dual Scheimpflug-Placido tomography. The prediction error was calculated as the difference between the predicted astigmatism and the manifest astigmatism at least 1 month postoperatively. The calculations included vector analysis. The study evaluated 35 eyes (35 patients). The preoperative corneal posterior astigmatism mean magnitude was 0.33 diopter (D) ± 0.16 (SD) (vector mean 0.23 × 176). Twenty-six eyes (74.3%) had with-the-rule (WTR) posterior astigmatism. The postoperative manifest refractive astigmatism mean magnitude was 0.38 ± 0.18 D (vector mean 0.26 × 171). There was no statistically significant difference in the mean magnitude prediction error between the automated keratometer and TCP techniques. However, the automated keratometer method tended to overcorrect WTR astigmatism and undercorrect against-the-rule (ATR) astigmatism. The TCP technique lacked these biases. The automated keratometer and TCP methods for estimating the magnitude of corneal astigmatism gave similar results. However, the automated keratometer method tended to overcorrect WTR astigmatism and undercorrect ATR astigmatism. Dr. Hamilton has received honoraria for educational lectures from Ziemer Ophthalmic Systems. No other author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Exploring the Lived Experiences of Program Managers Regarding an Automated Logistics Environment
ERIC Educational Resources Information Center
Allen, Ronald Timothy
2014-01-01
Automated Logistics Environment (ALE) is a new term used by Navy and aerospace industry executives to describe the aggregate of logistics-related information systems that support modern aircraft weapon systems. The development of logistics information systems is not always well coordinated among programs, often resulting in solutions that cannot…
Kevlar: Transitioning Helix for Research to Practice
2016-03-01
...an attacker from exploiting residual vulnerabilities in a wide variety of classes. Helix/Kevlar uses novel, fine-grained, high-entropy diversification... the Air Force, and IARPA). Salient features of Helix/Kevlar include developing high-entropy randomization techniques, automated program repairs leveraging highly optimized virtual machine technology, and developing a novel framework...
Automated Simultaneous Assembly of Multistage Testlets for a High-Stakes Licensing Examination
ERIC Educational Resources Information Center
Breithaupt, Krista; Hare, Donovan R.
2007-01-01
Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
DEPARTMENT OF STATE [Public Notice 7515] 30-Day Notice of Proposed Information Collection: DS-573, DS-574, DS-575, and DS-576, Overseas Schools--Grant Request Automated Submissions Program (GRASP)... Title of Information Collection: Grant Request Automated Submissions Program (GRASP). OMB Control Number...
Supporting Studies on QDRI Project Plan. Report No. 2. Conversion Procedures for Automation.
ERIC Educational Resources Information Center
Peirce, James G.; And Others
This report describes the tasks and related efforts necessary to convert the present Qualitative Developments Requirements Information (QDRI) Program from a clerical operation into an automated information processing system. A description of the QDRI Program is presented whereby current operations, files, procedures, etc. are contrasted to planned…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
... Part of the Fast Automated Transfer Program. March 30, 2010. I. Introduction: On January 19, 2010, The... made unsponsored ADRs eligible for DTC's Fast Automated Securities Transfer Program ("FAST")... associated with safekeeping, transfer, shipping, and insurance costs. FAST was designed to eliminate some...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-22
... Fast Automated Securities Transfer Program. February 5, 2010. Pursuant to Section 19(b)(1) of the... American Depositary Receipts ("ADRs") that are a part of DTC's Fast Automated Securities Transfer Program ("FAST"). II. Self-Regulatory Organization's Statement of the Purpose of, and Statutory Basis for, the...
Automating an integrated spatial data-mining model for landfill site selection
NASA Astrophysics Data System (ADS)
Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Aziz, Hamidi Abdul
2017-10-01
An integrated programming environment represents a robust approach to building a valid model for landfill site selection. One of the main challenges in an integrated model is the complicated processing and modelling required across the programming stages, along with several limitations. An automation process helps avoid these limitations and improves the interoperability between integrated programming environments. This work targets the automation of a spatial data-mining model for landfill site selection by integrating a spatial programming environment (Python-ArcGIS) and a non-spatial environment (MATLAB). The model was constructed using neural networks and is divided into nine stages distributed between MATLAB and Python-ArcGIS. A case study was taken from the northern part of Peninsular Malaysia. 22 criteria were selected for use as input data and to build the training and testing datasets. The outcomes show a high accuracy of 98.2% on the testing dataset using 10-fold cross validation. The automated spatial data-mining model provides a solid platform for decision makers to perform landfill site selection and planning operations on a regional scale.
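The validation step reported above is standard 10-fold cross validation of a classifier; a sketch with scikit-learn and synthetic stand-ins for the 22 criteria (not the authors' network or data):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in: 500 candidate sites, 22 criteria, binary
    # suitability label; illustrates only the 10-fold validation step.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 22))
    y = (X[:, :3].sum(axis=1) > 0).astype(int)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"mean 10-fold accuracy: {scores.mean():.3f}")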
Kuo, Phillip Hsin; Avery, Ryan; Krupinski, Elizabeth; Lei, Hong; Bauer, Adam; Sherman, Scott; McMillan, Natalie; Seibyl, John; Zubal, George
2013-03-01
A fully automated objective striatal analysis (OSA) program that quantitates dopamine transporter uptake in subjects with suspected Parkinson's disease was applied to images from clinical (123)I-ioflupane studies. Striatal binding ratios were computed, in particular the specific binding ratio (SBR) of the side with the lowest putamen uptake, and receiver-operating-characteristic (ROC) analysis was applied to 94 subjects to determine the best discriminator for this quantitative method. Ninety-four (123)I-ioflupane SPECT scans were analyzed from patients referred to our clinical imaging department and were reconstructed using the manufacturer-supplied reconstruction and filtering parameters for the radiotracer. Three trained readers conducted independent visual interpretations and reported each case as either normal or showing dopaminergic deficit (abnormal). The same images were analyzed using the OSA software, which locates the striatal and occipital structures and places regions of interest on the caudate and putamen. Additionally, OSA places a region of interest on the occipital region that is used to calculate the background-subtracted SBR. The lower SBR of the 2 putamen regions was taken as the quantitative report. The 33 normal (bilateral comma-shaped striata) and 61 abnormal (unilateral or bilateral dopaminergic deficit) studies were analyzed to generate ROC curves. Twenty-nine of the scans were interpreted as normal and 59 as abnormal by all 3 readers. For 12 scans, the 3 readers did not unanimously agree in their interpretations (discordant). The ROC analysis, which used the visual-majority-consensus interpretation from the readers as the gold standard, yielded an area under the curve of 0.958 when using 1.08 as the threshold SBR for the lowest putamen. The sensitivity and specificity of the automated quantitative analysis were 95% and 89%, respectively. The OSA program delivers SBR quantitative values that have a high sensitivity and specificity compared with visual interpretations by trained nuclear medicine readers. Such a program could be a helpful aid for readers not yet experienced with (123)I-ioflupane SPECT images and, if further adapted and validated, may be useful for assessing disease progression during pharmaceutical testing of therapies.
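A minimal sketch of the SBR computation as described, with the occipital region as background and the lower of the two putamen values reported against the 1.08 threshold; the ROI mean counts are hypothetical.

```python
# Sketch: background-subtracted specific binding ratio (SBR), reporting the
# lower of the two putamen values. ROI mean counts are hypothetical, not data
# from the study.
def sbr(roi_mean, occipital_mean):
    # SBR = (ROI - background) / background, with occipital as background
    return (roi_mean - occipital_mean) / occipital_mean

right_putamen, left_putamen, occipital = 45.0, 30.0, 20.0
lowest = min(sbr(right_putamen, occipital), sbr(left_putamen, occipital))
abnormal = lowest < 1.08          # ROC-derived threshold from the abstract
print(f"lowest putamen SBR = {lowest:.2f}, abnormal: {abnormal}")
```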
Almatani, Turki; Hugtenburg, Richard P; Lewis, Ryan D; Barley, Susan E; Edwards, Mark A
2016-10-01
Cone beam CT (CBCT) images contain more scatter than a conventional CT image and therefore provide inaccurate Hounsfield units (HUs). Consequently, CBCT images cannot be used directly for radiotherapy dose calculation. The aim of this study is to enable dose calculations to be performed with CBCT images taken during radiotherapy and thereby evaluate the necessity of replanning. A patient with prostate cancer and bilateral metallic prosthetic hip replacements was imaged using both CT and CBCT. The multilevel threshold (MLT) algorithm was used to categorize pixel values in the CBCT images into segments of homogeneous HU, taking into consideration the variation of HU with position in the CBCT images. This segmentation method relies on the operator dividing the CBCT data into a set of volumes within which the variation in the relationship between pixel values and HUs is small. An automated MLT algorithm was developed to reduce the operator time associated with the process. An intensity-modulated radiation therapy plan was generated from CT images of the patient. The plan was then copied to the segmented CBCT (sCBCT) data sets with identical settings, and the doses were recalculated and compared. Gamma evaluation showed that the percentages of points in the rectum with γ < 1 (3%/3 mm) were 98.7% and 97.7% in the sCBCT using the MLT and the automated MLT algorithms, respectively. Compared with the planning CT (pCT) plan, the MLT algorithm showed a -0.46% dose difference with 8 h of operator time, while the automated MLT algorithm showed -1.3%; both are considered clinically acceptable when using the collapsed cone algorithm. The segmentation of CBCT images using the method in this study can therefore be used for dose calculation. For a patient with prostate cancer and bilateral hip prostheses, with the associated issues in CT imaging, the MLT algorithms achieved a dose calculation accuracy that is clinically acceptable. The automated MLT algorithm reduced the operator time needed to achieve this accuracy, which makes it superior and easier to implement in the clinical setting. The MLT algorithm has thus been extended to the complex example of a patient with bilateral hip prostheses and, with the introduction of automation, is feasible for use in adaptive radiotherapy as an alternative to obtaining a new pCT and reoutlining the structures.
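A minimal sketch of the multilevel-threshold idea, assuming fixed thresholds; the cuts and assigned HU values are hypothetical, and the study additionally varies the pixel-value-to-HU relationship with position in the image.

```python
# Sketch: multilevel thresholding of CBCT pixel values into segments of
# homogeneous HU. Thresholds and HU assignments are hypothetical.
import numpy as np

def segment_to_hu(pixels, thresholds, hu_values):
    """Map each pixel to the HU value of the segment it falls into."""
    idx = np.digitize(pixels, thresholds)    # segment index per pixel
    return np.asarray(hu_values)[idx]

pixels = np.array([[-950.0, -120.0], [40.0, 1800.0]])  # hypothetical values
thresholds = [-400.0, -50.0, 300.0]                    # 3 cuts -> 4 segments
hu_values = [-1000.0, -80.0, 30.0, 2000.0]             # air, fat, tissue, metal
print(segment_to_hu(pixels, thresholds, hu_values))
```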
Advancing automation and robotics technology for the Space Station Freedom and for the US economy
NASA Technical Reports Server (NTRS)
1990-01-01
The progress made by levels 1, 2, and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology is described. Emphasis is placed upon the Space Station Freedom Program responses to specific recommendations made in the Advanced Technology Advisory Committee (ATAC) progress report 10, the flight telerobotic servicer, and the Advanced Development Program. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for the Space Station Freedom.
Automated verification of flight software. User's manual
NASA Technical Reports Server (NTRS)
Saib, S. H.
1982-01-01
AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and the reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.
Process development for automated solar cell and module production. Task 4: Automated array assembly
NASA Technical Reports Server (NTRS)
Hagerty, J. J.
1981-01-01
The Unimate robot was programmed for the final 35 cell pattern to be used in the fabrication of the deliverable modules. Mechanical construction of the Automated Lamination Station and Final Assembly Station were completed on schedule. All final wiring and interconnect cables were also completed and the first operational testing began. The final controlling program was written. A local fabricator was contracted to produce the glass reinforced concrete panels to be used for testing and deliverables. A video tape showing all three stations in operation was produced.
NASA Technical Reports Server (NTRS)
1991-01-01
Viewgraphs of briefings presented at the SSTAC/ARTS review of the draft Integrated Technology Plan (ITP) on aerothermodynamics, automation and robotics systems, sensors, and high-temperature superconductivity are included. Topics covered include: aerothermodynamics; aerobraking; aeroassist flight experiment; entry technology for probes and penetrators; automation and robotics; artificial intelligence; NASA telerobotics program; planetary rover program; science sensor technology; direct detector; submillimeter sensors; laser sensors; passive microwave sensing; active microwave sensing; sensor electronics; sensor optics; coolers and cryogenics; and high temperature superconductivity.
Benn, Neil; Turlais, Fabrice; Clark, Victoria; Jones, Mike; Clulow, Stephen
2007-03-01
The authors describe a system for collecting usage metrics from widely distributed automation systems. An application that records and stores usage data centrally, calculates run times, and charts the data was developed. Data were collected over 20 months from at least 28 workstations. The application was used to plot bar charts of date versus run time for individual workstations, the automation in a specific laboratory, or automation of a specified type. The authors show that revised user training, redeployment of equipment, and running complementary processes on one workstation can increase the average number of runs by up to 20-fold and run times by up to 450%. Active monitoring of usage leads to more effective use of automation. Usage data could be used to determine whether purchasing particular automation was a good investment.
Programming methodology for a general purpose automation controller
NASA Technical Reports Server (NTRS)
Sturzenbecker, M. C.; Korein, J. U.; Taylor, R. H.
1987-01-01
The General Purpose Automation Controller is a multi-processor architecture for automation programming. A methodology has been developed whose aim is to simplify the task of programming distributed real-time systems for users in research or manufacturing. Programs are built by configuring function blocks (low-level computations) into processes using data flow principles. These processes are activated through the verb mechanism. Verbs are divided into two classes: those which support devices, such as robot joint servos, and those which perform actions on devices, such as motion control. This programming methodology was developed in order to achieve the following goals: (1) specifications for real-time programs which are to a high degree independent of hardware considerations such as processor, bus, and interconnect technology; (2) a component approach to software, so that software required to support new devices and technologies can be integrated by reconfiguring existing building blocks; (3) resistance to error and ease of debugging; and (4) a powerful command language interface.
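The sketch below, which is an illustration rather than the GPAC implementation, shows the configuration idea in Python: low-level function blocks chained into a process by data flow and activated through a named verb. The block functions and the verb name are hypothetical.

```python
# Sketch: function blocks configured into a process and activated by a verb.
from typing import Callable, Dict, List

class Process:
    """A chain of function blocks; each block's output feeds the next."""
    def __init__(self, blocks: List[Callable[[float], float]]):
        self.blocks = blocks
    def run(self, signal: float) -> float:
        for block in self.blocks:
            signal = block(signal)
        return signal

# Hypothetical blocks for a joint-servo verb: scale an error, clamp the command.
gain = lambda error: 2.5 * error
clamp = lambda command: max(-1.0, min(1.0, command))

verbs: Dict[str, Process] = {"servo_joint": Process([gain, clamp])}
print(verbs["servo_joint"].run(0.3))   # activating the verb on an input
```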
Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume
NASA Astrophysics Data System (ADS)
Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.
2000-06-01
The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology, and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 - 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
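A minimal sketch of the volume step: summing the voxels of a segmentation mask and scaling by the voxel size. The mask and voxel dimensions are hypothetical.

```python
# Sketch: volume of a segmented structure from a boolean voxel mask.
import numpy as np

def region_volume_cc(mask, voxel_mm=(0.7, 0.7, 5.0)):
    """Volume in cc (mL) of a segmentation mask, given voxel size in mm."""
    voxel_mm3 = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0   # 1 cc = 1000 mm^3

mask = np.zeros((512, 512, 40), dtype=bool)
mask[200:260, 180:240, 10:30] = True         # stand-in kidney region
print(f"volume = {region_volume_cc(mask):.1f} cc")
```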
Systems autonomy technology: Executive summary and program plan
NASA Technical Reports Server (NTRS)
Bull, John S (Editor)
1987-01-01
The National Space Strategy approved by the President and Congress in 1984 sets for NASA a major goal of conducting effective and productive space applications and technology programs which contribute materially toward United States leadership and security. To contribute to this goal, OAST supports the Nation's civil and defense space programs and overall economic growth. OAST objectives are to ensure timely provision of new concepts and advanced technologies, to support both the development of NASA missions in space and the space activities of industry and other organizations, to utilize the strengths of universities in conducting the NASA space research and technology program, and to maintain the NASA centers in positions of strength in critical space technology areas. In line with these objectives, NASA has established a new program in space automation and robotics that will result in the development and transfer of automation technology to increase the capabilities, productivity, and safety of NASA space programs including the Space Station, automated space platforms, lunar bases, Mars missions, and other deep space ventures. The NASA/OAST Automation and Robotics program is divided into two parts. Ames Research Center has the lead role in developing and demonstrating System Autonomy capabilities for space systems that need to make their own decisions and do their own planning. The Jet Propulsion Laboratory has the lead role for Telerobotics (that portion of the program that has a strong human operator component in the control loop and some remote handling requirement in space). This program is intended to be a working document for NASA Headquarters, Program Offices, and implementing Project Management.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, running multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
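A minimal sketch of the comparison at the heart of sequential verification, assuming the calculations of two consecutive code versions are available as numeric arrays; the tolerance and sample values are hypothetical, and the actual RELAP5-3D machinery is far more extensive.

```python
# Sketch: flag any unintended change between consecutive code versions by
# comparing their calculations within a tight tolerance.
import numpy as np

def sequential_verify(prev_results, new_results, rtol=1e-12, atol=0.0):
    """True if the new version reproduces the previous version's results."""
    prev, new = np.asarray(prev_results), np.asarray(new_results)
    return prev.shape == new.shape and np.allclose(prev, new, rtol=rtol, atol=atol)

prev = [1.0000000000, 2.5000000000, 3.1400000000]   # stand-in plot variables
new  = [1.0000000000, 2.5000000000, 3.1400000001]
print("unintended change detected:", not sequential_verify(prev, new))
```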
Automation U.S.A.: Overcoming Barriers to Automation.
ERIC Educational Resources Information Center
Brody, Herb
1985-01-01
Although labor unions and inadequate technology play minor roles, the principal barrier to factory automation is "fear of change." Related problems include long-term benefits, nontechnical executives, and uncertainty of factory cost accounting. Industry support for university programs is helping to educate engineers to design, implement, and…
ERIC Educational Resources Information Center
Zhang, Mo
2013-01-01
Many testing programs use automated scoring to grade essays. One issue in automated essay scoring that has not been examined adequately is population invariance and its causes. The primary purpose of this study was to investigate the impact of sampling in model calibration on population invariance of automated scores. This study analyzed scores…
Automated Engineering Design (AED); An approach to automated documentation
NASA Technical Reports Server (NTRS)
Mcclure, C. W.
1970-01-01
The Automated Engineering Design (AED) system is reviewed; it consists of a high-level systems programming language, a series of modular precoded subroutines, and a set of powerful software machine tools that effectively automate the production and design of new languages. AED is used primarily for the development of problem- and user-oriented languages. Software production phases are diagrammed, and factors which inhibit effective documentation are evaluated.
Adaptive Algorithms for Automated Processing of Document Images
2011-01-01
Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University…
Automatic Dissection Of Plantlets
NASA Astrophysics Data System (ADS)
Batchelor, B. G.; Harris, I. P.; Marchant, J. A.; Tillett, R. D.
1989-03-01
Micropropagation is a technique used in horticulture for generating a monoclonal colony of plants. A tiny plantlet is cut into several parts, each of which is then replanted. At the moment, the cutting is performed manually. Automating this task would have significant economic benefits. A robot designed to dissect plants would need to be equipped with intelligent visual sensing. This article is concerned with the image acquisition and processing techniques which such a machine might use. A program, which can calculate where to cut a plant with an "open" structure, is presented. This is expressed in the ProVision language, which is described in another article presented at this conference. (Article 1002-65)
Lommen, Arjen
2009-04-15
Hyphenated full-scan MS technology creates large amounts of data. A versatile, easy-to-handle automation tool that aids in the data analysis is very important in handling such a data stream. MetAlign software, as described in this manuscript, handles a broad range of accurate-mass and nominal-mass GC/MS and LC/MS data. It is capable of automatic format conversions, accurate mass calculations, baseline corrections, peak-picking, saturation and mass-peak artifact filtering, as well as alignment of up to 1000 data sets. A 100- to 1000-fold data reduction is achieved. MetAlign software output is compatible with most multivariate statistics programs.
Means of storage and automated monitoring of versions of text technical documentation
NASA Astrophysics Data System (ADS)
Leonovets, S. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The paper considers automation of the preparation, storage, and version monitoring of textual design and program documentation by means of specialized software. Automated preparation of documentation is based on processing the engineering data contained in the specifications and technical documentation. Data handling assumes strictly structured electronic documents prepared in widespread formats from templates based on industry standards, from which the program or design text document is generated automatically. The subsequent life cycle of the document, and of the engineering data it contains, is then controlled, with archival data storage carried out at each stage of the life cycle. Performance studies of the widespread document formats used for automated monitoring and storage are given. The newly developed software and the workbenches available to the developer of instrumentation equipment are described.
Automation of a DXA-based finite element tool for clinical assessment of hip fracture risk.
Luo, Yunhua; Ahmed, Sharif; Leslie, William D
2018-03-01
Finite element analysis of medical images is a promising tool for assessing hip fracture risk. Although a number of finite element models have been developed for this purpose, none of them is routinely used in the clinic. The main reason is that the computer programs implementing the finite element models have not been completely automated, and extensive training is required before clinicians can use them effectively. By using information embedded in clinical dual energy X-ray absorptiometry (DXA), we completely automated a DXA-based finite element (FE) model that we previously developed for predicting hip fracture risk. The automated FE tool can be run as a standalone computer program with the subject's raw hip DXA image as input, and it had greatly improved short-term precision compared with the semi-automated version. To validate the automated FE tool, a clinical cohort consisting of 100 prior hip fracture cases and 300 matched controls was obtained from a local community clinical center. Both the automated FE tool and femoral bone mineral density (BMD) were applied to discriminate the fracture cases from the controls. Femoral BMD is the gold-standard reference recommended by the World Health Organization for screening for osteoporosis and for assessing hip fracture risk. Accuracy was measured by the area under the ROC curve (AUC) and the odds ratio (OR). Compared with femoral BMD (AUC = 0.71, OR = 2.07), the automated FE tool had considerably improved accuracy (AUC = 0.78, OR = 2.61 at the trochanter). This work is a large step toward applying our DXA-based FE model as a routine clinical tool for the assessment of hip fracture risk. Furthermore, the automated computer program can be embedded into a website as an Internet application. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Bonnie; Boddy, Mark; Doyle, Frank
This report presents the results of an expert study to identify research opportunities for Sensors & Automation, a sub-program of the U.S. Department of Energy (DOE) Industrial Technologies Program (ITP). The research opportunities are prioritized by realizable energy savings. The study encompasses the technology areas of industrial controls, information processing, automation, and robotics. These areas have been central areas of focus of many Industries of the Future (IOF) technology roadmaps. This report identifies opportunities for energy savings as a direct result of advances in these areas and also recognizes indirect means of achieving energy savings, such as product quality improvement, productivity improvement, and reduction of recycle.
AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation
NASA Astrophysics Data System (ADS)
Zhang, S. H.; Zhang, R. F.
2017-11-01
The elastic properties are fundamental and important for crystalline materials as they relate to other mechanical properties, various thermodynamic quantities, and some critical physical properties. However, a complete set of experimentally determined elastic properties is available for only a small subset of known materials, so an automatic scheme for deriving elastic properties that is adapted to high-throughput computation is in great demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single-crystal materials with any symmetry, designed mainly for high-throughput first-principles computation. Derivations of other general elastic properties, such as the Young's, bulk, and shear moduli as well as the Poisson's ratio of polycrystalline materials, Pugh ratio, Cauchy pressure, elastic anisotropy, and elastic stability criterion, are also implemented in this code. The implementation of the code has been critically validated by numerous evaluations and tests on a broad class of two-dimensional and three-dimensional materials, proving its efficiency and capability for high-throughput screening of specific materials with targeted mechanical properties. Program Files doi:http://dx.doi.org/10.17632/f8fwg4j9tw.1 Licensing provisions: BSD 3-Clause Programming language: Fortran Nature of problem: To automate the calculations of second-order elastic constants and the derivations of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: The space-group number is first determined by the SPGLIB code [1] and the structure is then redefined to a unit cell in the IEEE format [2]. Secondly, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated. Afterwards, the total energy for each distorted structure is calculated by first-principles codes, e.g. VASP [3]. Finally, the second-order elastic constants are determined from the quadratic coefficients of polynomial fits of the energy vs. strain relationships, and the other elastic properties are derived accordingly. References [1] http://atztogo.github.io/spglib/. [2] A. Meitzler, H.F. Tiersten, A.W. Warner, D. Berlincourt, G.A. Couqin, F.S. Welsh III, IEEE standard on piezoelectricity, Society, 1988. [3] G. Kresse, J. Furthmüller, Phys. Rev. B 54 (1996) 11169.
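A minimal sketch of the final fitting step for a single distortion mode: the elastic constant follows from the quadratic coefficient of an energy-strain fit, E(ε) ≈ E₀ + (V₀/2)Cε². All numbers are hypothetical; the conversion factor only changes eV/Å³ into GPa.

```python
# Sketch: one elastic constant from the quadratic coefficient of an
# energy-strain polynomial fit. Energies, strains, and volume are hypothetical.
import numpy as np

strains = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])       # applied distortions
energies = np.array([-10.3920, -10.3980, -10.4000,        # total energies (eV)
                     -10.3980, -10.3920])
v0 = 20.0                                                  # cell volume (A^3)

a = np.polyfit(strains, energies, 2)[0]    # quadratic coefficient (eV)
c = 2.0 * a / v0                           # elastic constant (eV/A^3)
print(f"C = {c * 160.2176:.1f} GPa")       # 1 eV/A^3 = 160.2176 GPa
```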
Sheng, Xi
2012-07-01
The thesis studies an automated replenishment algorithm for the hospital medical-supplies supply chain. The mathematical model and algorithm for automated replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm, and a partition algorithm. The automated replenishment algorithm is shown to calculate medical-supplies distribution amounts automatically and to optimize the distribution scheme. It is concluded that the inventory-theory model and algorithm, if applied in the medical-supplies circulation field, could provide theoretical and technological support for automated replenishment across the hospital supply chain.
System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.
ERIC Educational Resources Information Center
Keeler, F. Laurence
This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer aided authoring system AUTHOR, provides an outline of the design architecture of the system and an overview including the component…
The Historical Evolution of Educational Software.
ERIC Educational Resources Information Center
Troutner, Joanne
This paper establishes the roots of computers and automated teaching in the field of psychology and describes Dr. S. L. Pressey's presentation of the teaching machine; B. F. Skinner's teaching machine; Meyer's steps in composing a program for the automated teaching machine; IBM's beginning research on automated courses and the development of the…
National Aeronautics and Space Administration's (NASA) Automated Information Security Handbook
NASA Technical Reports Server (NTRS)
Roback, E.
1991-01-01
The NASA Automated Information Security Handbook provides NASA's overall approach to automated information systems security including discussions of such aspects as: program goals and objectives, assignment of responsibilities, risk assessment, foreign national access, contingency planning and disaster recovery, awareness training, procurement, certification, planning, and special considerations for microcomputers.
23 CFR 1200.13 - Special funding conditions for Section 402 grants.
Code of Federal Regulations, 2013 CFR
2013-04-01
.... (b) Automated traffic enforcement systems prohibition. The State may not expend funds apportioned to the State under 23 U.S.C. 402 to carry out a program to purchase, operate, or maintain an automated traffic enforcement system. The term “automated traffic enforcement system” includes any camera which...
23 CFR 1200.13 - Special funding conditions for Section 402 grants.
Code of Federal Regulations, 2014 CFR
2014-04-01
.... (b) Automated traffic enforcement systems prohibition. The State may not expend funds apportioned to the State under 23 U.S.C. 402 to carry out a program to purchase, operate, or maintain an automated traffic enforcement system. The term “automated traffic enforcement system” includes any camera which...
Management of information in a research and development agency
NASA Technical Reports Server (NTRS)
Keene, Wallace O.
1990-01-01
The NASA program for managing scientific and technical information (STI) is examined, noting the technological, managerial, educational, and legal aspects of transferring and disseminating information. A definition of STI is introduced and NASA's STI-related management programs are outlined. Consideration is given to the role of STI management in NASA mission programs, research efforts supporting the management and use of STI, STI program interfaces, and the Automated Information Management Program to eliminate redundant automation efforts in common administrative functions. The infrastructure needed to manage the broad base of NASA information and the interfaces between NASA's STI management and external organizations are described.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-20
... Commissioner of CBP with authority to conduct limited test programs or procedures designed to evaluate planned.... Specifically, CBP is looking for test participants to include: 2-3 Ocean Carriers. At least one must be filing... their software ready to test with CBP once CBP begins the certification process. CBP will post the...
JPLEX: Java Simplex Implementation with Branch-and-Bound Search for Automated Test Assembly
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Dodd, Barbara G.; Chung, Hyewon
2011-01-01
JPLEX, short for Java simPLEX, is an automated test assembly (ATA) program. It is a mixed integer linear programming (MILP) solver written in Java. It reads in a configuration file, solves the minimization problem, and produces an output file for postprocessing. It implements the simplex algorithm to create a fully relaxed solution and…
An automated system for global atmospheric sampling using B-747 airliners
NASA Technical Reports Server (NTRS)
Lew, K. Q.; Gustafsson, U. R. C.; Johnson, R. E.
1981-01-01
The global air sampling program utilizes commercial aircraft in scheduled service to measure atmospheric constituents. A fully automated system designed for the 747 aircraft is described. Airline operational constraints and data and control subsystems are treated. The overall program management, system monitoring, and data retrieval from four aircraft in global service are described.
Perceptions of L1 Glossed Feedback in Automated Writing Evaluation: A Case Study
ERIC Educational Resources Information Center
Wilken, Jayme Lynn
2018-01-01
Learner perceptions toward and utilization of L1 glossed feedback in an automated writing evaluation (AWE) program were investigated in an Intensive English Program (IEP) class. This small case study focused on two Chinese students who responded to weekly surveys, semi-structured interviews, and screen capture videos of their revisions over a…
ERIC Educational Resources Information Center
Federal Information Processing Standards Publication, 1976
1976-01-01
These guidelines provide a basis for determining the content and extent of documentation for computer programs and automated data systems. Content descriptions of ten document types plus examples of how management can determine when to use the various types are included. The documents described are (1) functional requirements documents, (2) data…
NASA Astrophysics Data System (ADS)
Stepanov, Sergey
2013-03-01
X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data-fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
NASA Technical Reports Server (NTRS)
Kim, Jonnathan H.
1995-01-01
Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).
Automated multi-lesion detection for referable diabetic retinopathy in indigenous health care.
Pires, Ramon; Carvalho, Tiago; Spurling, Geoffrey; Goldenstein, Siome; Wainer, Jacques; Luckie, Alan; Jelinek, Herbert F; Rocha, Anderson
2015-01-01
Diabetic Retinopathy (DR) is a complication of diabetes mellitus that affects more than one-quarter of the population with diabetes, and can lead to blindness if not discovered in time. An automated screening enables the identification of patients who need further medical attention. This study aimed to classify retinal images of Aboriginal and Torres Strait Islander peoples utilizing an automated computer-based multi-lesion eye screening program for diabetic retinopathy. The multi-lesion classifier was trained on 1,014 images from the São Paulo Eye Hospital and tested on retinal images containing no DR-related lesion, single lesions, or multiple types of lesions from the Inala Aboriginal and Torres Strait Islander health care centre. The automated multi-lesion classifier has the potential to enhance the efficiency of clinical practice delivering diabetic retinopathy screening. Our program does not necessitate image samples for training from any specific ethnic group or population being assessed and is independent of image pre- or post-processing to identify retinal lesions. In this Aboriginal and Torres Strait Islander population, the program achieved 100% sensitivity and 88.9% specificity in identifying bright lesions, while detection of red lesions achieved a sensitivity of 67% and specificity of 95%. When both bright and red lesions were present, 100% sensitivity with 88.9% specificity was obtained. All results obtained with this automated screening program meet WHO standards for diabetic retinopathy screening.
NASA Technical Reports Server (NTRS)
Lum, Henry, Jr.
1991-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. The report describes the progress made by Levels 1, 2 and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology. Emphasis has been placed upon the Space Station Freedom Program responses to specific recommendations made in ATAC Progress Report 11, the status of the Flight Telerobotic Servicer, and the status of the Advanced Development Program. In addition, an assessment is provided of the automation and robotics status of the Canadian Space Station Program.
On the Automation of the MarkIII Data Analysis System.
NASA Astrophysics Data System (ADS)
Schwegmann, W.; Schuh, H.
1999-03-01
A faster and semiautomatic data analysis is an important contribution to accelerating the VLBI procedure. A concept for the automation of one of the most widely used VLBI software packages, the MarkIII Data Analysis System, was developed. Then the program PWXCB, which extracts weather and cable calibration data from the station log-files, was automated by supplementing the existing Fortran77 program code. The new program XLOG and its results are presented. Most of the tasks in VLBI data analysis are very complex, and their automation requires typical knowledge-based techniques. Thus, a knowledge-based system (KBS) for support and guidance of the analyst is being developed using the AI workbench BABYLON, which is based on methods of artificial intelligence (AI). The advantages of a KBS for the MarkIII Data Analysis System and the steps required to build a KBS are demonstrated, and examples of the current status of the project are given.
A Novel ImageJ Macro for Automated Cell Death Quantitation in the Retina
Maidana, Daniel E.; Tsoka, Pavlina; Tian, Bo; Dib, Bernard; Matsumoto, Hidetaka; Kataoka, Keiko; Lin, Haijiang; Miller, Joan W.; Vavvas, Demetrios G.
2015-01-01
Purpose TUNEL assay is widely used to evaluate cell death. Quantification of TUNEL-positive (TUNEL+) cells in tissue sections is usually performed manually, ideally by two masked observers. This process is time consuming, prone to measurement errors, and not entirely reproducible. In this paper, we describe an automated quantification approach to address these difficulties. Methods We developed an ImageJ macro to quantitate cell death by TUNEL assay in retinal cross-section images. The script was coded in the IJ1 programming language. To validate this tool, we selected a dataset of TUNEL assay digital images, calculated layer area and cell count manually (done by two observers), and compared measurements between observers and macro results. Results The automated macro segmented the outer nuclear layer (ONL) and inner nuclear layer (INL) successfully. Automated TUNEL+ cell counts fell between the counts of the inexperienced and experienced observers. The intraobserver coefficient of variation (COV) ranged from 13.09% to 25.20%. The COV between both observers was 51.11 ± 25.83% for the ONL and 56.07 ± 24.03% for the INL. Comparing the observers' results with the macro results, the COV was 23.37 ± 15.97% for the ONL and 23.44 ± 18.56% for the INL. Conclusions We developed and validated an ImageJ macro that retina researchers can use as an accurate and precise quantitative tool to achieve repeatable, unbiased, fast, and accurate cell death quantitation. We believe that this standardized measurement tool could be advantageous for comparing results across different research groups, as it is freely available as open source. PMID:26469755
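A minimal sketch (in Python rather than IJ1) of the COV statistic used above, i.e. the sample standard deviation over the mean in percent, with hypothetical counts.

```python
# Sketch: coefficient of variation between two observers' TUNEL+ counts.
import numpy as np

def cov_percent(counts_a, counts_b):
    pair = np.array([counts_a, counts_b], dtype=float)
    return pair.std(axis=0, ddof=1) / pair.mean(axis=0) * 100.0

observer1 = np.array([12, 30, 7, 22])    # hypothetical counts per section
observer2 = np.array([18, 41, 13, 25])
print(np.round(cov_percent(observer1, observer2), 1))   # COV per section, %
```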
Automated sizing of large structures by mixed optimization methods
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1973-01-01
A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.
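A minimal sketch of the fully stressed design recurrence, assuming member forces that do not change as areas are resized (the statically determinate case); real airframe sizing recomputes the forces with a structural analysis at each iteration.

```python
# Sketch: fully stressed design resizes each member by its stress ratio
# until every member sits at the allowable stress. Loads are hypothetical.
def fully_stressed_design(areas, member_forces, sigma_allow, n_iter=5):
    for _ in range(n_iter):
        stresses = [f / a for f, a in zip(member_forces, areas)]
        areas = [a * abs(s) / sigma_allow for a, s in zip(areas, stresses)]
    return areas

areas = fully_stressed_design([1.0, 1.0, 1.0], [12e3, -8e3, 5e3], sigma_allow=20e3)
print([round(a, 3) for a in areas])   # areas that put each member at allowable
```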
NASA Technical Reports Server (NTRS)
Montogomery, Leslie D.; Ku, Yu-Tsuan E.; Webbon, Bruce W. (Technical Monitor)
1995-01-01
We have prepared a computer program (RHEOSYS: RHEOencephalographic impedance trace scanning SyStem) that can be used to automate the analysis of segmental impedance blood flow waveforms. This program was developed to assist in the post-test analysis of recorded impedance traces from multiple segments of the body. It incorporates many of the blood flow, segmental volume, and vascular state indices reported in the world literature. As currently programmed, seven points are selected from each blood flow pulse and associated ECG waveform: (1) peak of the first ECG QRS complex, (2) start of the systolic slope on the blood flow trace, (3) maximum amplitude of the impedance pulse, (4) position of the dicrotic notch, (5) maximum amplitude of the postdicrotic segment, (6) peak of the second ECG QRS complex, and (7) start of the next blood flow pulse. These points are used to calculate various geometric, area, and time-related values associated with the impedance pulse morphology. RHEOSYS then calculates a series of 34 impedance and cardiac cycle parameters, including pulse amplitudes; areas; pulse propagation times; cardiac cycle times; and various measures of arterial and venous tone, contractility, and pulse volume. We used this program to calculate the scalp and intracranial blood flow responses to head and neck cooling as it may be applied to lower the body temperatures of multiple sclerosis patients. Twelve women and twelve men were tested using a commercially available head and neck cooling system operated at its maximum cooling capacity for a period of 30 minutes. Head and neck cooling produced a transient change in scalp blood flow and a significant (P < 0.05) decrease of approximately 30% in intracranial blood flow. Results of this experiment illustrate how REG and RHEOSYS can be used in biomedical applications.
76 FR 6112 - Federal Benefit Payments Under Certain District of Columbia Retirement Plans
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
... completion of Treasury's new automated retirement system, ``System to Administer Retirement'' (STAR), which replaced the District's legacy automated retirement system. While the new system has been completed, the... calculation scenarios, the need for which was identified during systems development. The amendments to subpart...
Improving Learning Object Quality: Moodle HEODAR Implementation
ERIC Educational Resources Information Center
Munoz, Carlos; Garcia-Penalvo, Francisco J.; Morales, Erla Mariela; Conde, Miguel Angel; Seoane, Antonio M.
2012-01-01
Automation toward efficiency is the aim of most intelligent systems in the educational context, where automating the calculation of results allows experts to spend most of their time on important tasks rather than on retrieving, ordering, and interpreting information. In this paper, the authors provide a tool that easily evaluates Learning Object quality…
Selected Streamflow Statistics for Streamgaging Stations in Delaware, 2003
Ries, Kernell G.
2004-01-01
Flow-duration and low-flow frequency statistics were calculated for 15 streamgaging stations in Delaware, in cooperation with the Delaware Geological Survey. The flow-duration statistics include the 1-, 2-, 5-, 10-, 20-, 30-, 40-, 50-, 60-, 70-, 80-, 90-, 95-, 98-, and 99-percent duration discharges. The low-flow frequency statistics include the average discharges for 1, 7, 14, 30, 60, 90, and 120 days that recur, on average, once in 1.01, 2, 5, 10, 20, 50, and 100 years. The statistics were computed using U.S. Geological Survey computer programs that can be downloaded from the World Wide Web at no cost. The computer programs automate standard U.S. Geological Survey methods for computing the statistics. Documentation is provided at the Web sites for the individual programs. The computed statistics are presented in tabular format on a separate page for each station, along with the station name, station number, the location, the period of record, and remarks.
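A minimal sketch of the flow-duration computation, taking the p-percent duration discharge as the flow equaled or exceeded p percent of the time; the synthetic lognormal record merely stands in for a streamgaging-station series.

```python
# Sketch: flow-duration statistics from a daily discharge record.
import numpy as np

def flow_duration(discharge, exceedance_percents):
    # p% exceedance flow = (100 - p)th percentile of the record
    return {p: np.percentile(discharge, 100 - p) for p in exceedance_percents}

rng = np.random.default_rng(1)
q_daily = rng.lognormal(mean=3.0, sigma=0.8, size=3650)   # ~10 years of flows
for p, q in flow_duration(q_daily, [1, 10, 50, 90, 99]).items():
    print(f"{p:>3d}% duration discharge: {q:8.1f}")
```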
NASA Technical Reports Server (NTRS)
Dowden, Donald J.; Bessette, Denis E.
1987-01-01
The AFTI F-16 Automated Maneuvering Attack System has undergone developmental and demonstration flight testing over a total of 347.3 flying hours in 237 sorties. The emphasis of this phase of the flight test program was on the development of automated guidance and control systems for air-to-air and air-to-ground weapons delivery, using a digital flight control system, dual avionics multiplex buses, an advanced FLIR sensor with laser ranger, integrated flight/fire-control software, advanced cockpit display and controls, and modified core Multinational Stage Improvement Program avionics.
NASA Technical Reports Server (NTRS)
Allen, Bradley P.; Holtzman, Peter L.
1987-01-01
An overview is presented of the Automated Software Development Workstation Project, an effort to explore knowledge-based approaches to increasing software productivity. The project focuses on applying the concept of domain specific automatic programming systems (D-SAPSs) to application domains at NASA's Johnson Space Center. A version of a D-SAPS developed in Phase 1 of the project for the domain of space station momentum management is described. How problems encountered during its implementation led researchers to concentrate on simplifying the process of building and extending such systems is discussed. Researchers propose to do this by attacking three observed bottlenecks in the D-SAPS development process through the increased automation of the acquisition of programming knowledge and the use of an object oriented development methodology at all stages of the program design. How these ideas are being implemented in the Bauhaus, a prototype workstation for D-SAPS development is discussed.
Adapting for Scalability: Automating the Video Assessment of Instructional Learning
ERIC Educational Resources Information Center
Roberts, Amy M.; LoCasale-Crouch, Jennifer; Hamre, Bridget K.; Buckrop, Jordan M.
2017-01-01
Although scalable programs, such as online courses, have the potential to reach broad audiences, they may pose challenges to evaluating learners' knowledge and skills. Automated scoring offers a possible solution. In the current paper, we describe the process of creating and testing an automated means of scoring a validated measure of teachers'…
Automated Test-Form Generation
ERIC Educational Resources Information Center
van der Linden, Wim J.; Diao, Qi
2011-01-01
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Aviation safety/automation program overview
NASA Technical Reports Server (NTRS)
Morello, Samuel A.
1990-01-01
The goal is to provide a technology base leading to improved safety of the national airspace system through the development and integration of human-centered automation technologies for aircraft crews and air traffic controllers. Information on the problems, specific objectives, human-automation interaction, intelligent error-tolerant systems, and air traffic control/cockpit integration is given in viewgraph form.
ERIC Educational Resources Information Center
Kiratsov, P.
1983-01-01
Discusses the design and organization of the Automated Information Centre, a centralized automated scientific and technical information service established within the main organ of Bulgaria's National System for Scientific and Technical Information, with UNESCO and United Nations Development Program assistance. Problems and perspectives for…
A survey of life support system automation and control
NASA Technical Reports Server (NTRS)
Finn, Cory K.
1993-01-01
The level of automation and control necessary to support advanced life support systems for use in the manned space program is steadily increasing. As the length and complexity of manned missions increase, life support systems must be able to meet new space challenges. Longer, more complex missions create new demands for increased automation, improved sensors, and improved control systems. It is imperative that research in these key areas keep pace with current and future developments in regenerative life support technology. This paper provides an overview of past and present research in the areas of sensor development, automation, and control of life support systems for the manned space program, and it discusses the impact continued research in several key areas will have on the feasibility, operation, and design of future life support systems.
Advancing automation and robotics technology for the Space Station Freedom and for the US economy
NASA Technical Reports Server (NTRS)
1990-01-01
In April 1985, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). The progress made by Levels 1, 2, and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology are described. Emphasis was placed upon the Space Station Freedom Program responses to specific recommendations made in ATAC Progress Report 9, the Flight Telerobotic Servicer, the Advanced Development Program, and the Data Management System. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for the Space Station Freedom.
A computer program for automated flutter solution and matched point determination
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
The use of a digital computer program (MATCH) for automated determination of the flutter velocity and the matched-point flutter density is described. The program is based on the use of the modified Laguerre iteration formula to converge to a flutter crossing or a matched-point density. A general description of the computer program is included and the purpose of all subroutines used is stated. The input required by the program and various input options are detailed, and the output description is presented. The program can solve flutter equations formulated with up to 12 vibration modes and obtain flutter solutions for up to 10 air densities. The program usage is illustrated by a sample run, and the FORTRAN program listing is included.
Automated data collection based on RoboDiff at the ESRF beamline MASSIF-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurizzo, Didier, E-mail: Didier.nurizzo@esrf.fr; Guichard, Nicolas; McSweeney, Sean
2016-07-27
The European Synchrotron Radiation Facility has a long-standing history in the automation of experiments in Macromolecular Crystallography. MASSIF-1 (Massively Automated Sample Screening and evaluation Integrated Facility), a beamline constructed as part of the ESRF Upgrade Phase I program, has been open to the external user community since July 2014 and offers a unique, completely automated data-collection service to both academic and industrial structural biologists.
ERIC Educational Resources Information Center
Texas State Technical Coll. System, Waco.
This package consists of course syllabi, an instructor's handbook, and a student laboratory manual for a 1-year vocational training program to prepare students for entry-level employment as automated equipment repair technicians. The program was developed through a modification of the DACUM (Developing a Curriculum) technique. The course syllabi…
Automated Test Assembly Using lp_Solve Version 5.5 in R
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2011-01-01
This article reviews the use of the software program lp_solve version 5.5 for solving mixed-integer automated test assembly (ATA) problems. The program is freely available under Lesser General Public License 2 (LGPL2). It can be called from the statistical language R using the lpSolveAPI interface. Three empirical problems are presented to…
ERIC Educational Resources Information Center
Fridge, Evorell; Bagui, Sikha
2016-01-01
The goal of this research was to investigate the effects of automated testing software on levels of student reflection and student performance. This was a self-selecting, between subjects design that examined the performance of students in introductory computer programming classes. Participants were given the option of using the Web-CAT…
Expert System for Automated Design Synthesis
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Barthelemy, Jean-Francois M.
1987-01-01
Expert-system computer program EXADS was developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. EXADS aids the engineer in determining the best combination based on knowledge of the specific problem and expert knowledge stored in the knowledge base. Available in two interactive machine versions. IBM PC version (LAR-13687) written in IQ-LISP. DEC VAX version (LAR-13688) written in Franz-LISP.
Computer program CDCID: an automated quality control program using CDC update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, G.L.; Aguilar, F.
1984-04-01
A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, G.; Mansur, D.L.; Ruhter, W.D.
1994-10-01
This report presents the details of the Lawrence Livermore National Laboratory safeguards and security program. This program is focused on developing new technology, such as x- and gamma-ray spectrometry, for measurement of special nuclear materials. This program supports the Office of Safeguards and Security in the following five areas: safeguards technology, safeguards and decision support, computer security, automated physical security, and automated visitor access control systems.
Data base to compare calculations and observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
NASA Astrophysics Data System (ADS)
Mitchell, Timothy J.
Preterm infants are particularly susceptible to cerebral injury, and electroencephalographic (EEG) recordings provide an important diagnostic tool for determining cerebral health. However, interpreting these EEG recordings is challenging and requires the skills of a trained electroencephalographer. Because these EEG specialists are rare, an automated interpretation of newborn EEG recordings would increase access to an important diagnostic tool for physicians. To automate this procedure, we employ a novel Bayesian approach to compute the probability of EEG features (waveforms) including suppression, delta brushes, and delta waves. The power of this approach lies not only in its ability to closely mimic the techniques used by EEG specialists, but also in its ability to be generalized to identify other waveforms that may be of interest for future work. The results of these calculations are used in a program designed to output simple statistics related to the presence or absence of such features. Direct comparison of the software with expert human readers has indicated satisfactory performance, and the algorithm has shown promise in its ability to distinguish between infants with normal neurodevelopmental outcome and those with poor neurodevelopmental outcome.
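A bare-bones sketch of the Bayesian step described above — computing a posterior probability for each candidate waveform class from a feature value — might look as follows. The class names are those in the abstract, but the one-dimensional Gaussian likelihoods and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def feature_posterior(x, priors, means, stds):
    """Bayes' rule with 1-D Gaussian class likelihoods (all values invented)."""
    likelihoods = np.array([norm.pdf(x, m, s) for m, s in zip(means, stds)])
    unnorm = likelihoods * np.asarray(priors)
    return unnorm / unnorm.sum()   # normalize by the evidence

classes = ["suppression", "delta brush", "delta wave"]
post = feature_posterior(x=3.1, priors=[0.2, 0.3, 0.5],
                         means=[1.0, 3.0, 5.0], stds=[0.8, 1.0, 1.2])
for c, p in zip(classes, post):
    print(f"P({c} | x) = {p:.3f}")
```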
NASA Astrophysics Data System (ADS)
Rose, R.; Aizenman, H.; Mei, E.; Choudhury, N.
2013-12-01
High School students interested in the STEM fields benefit most when actively participating, so I created a series of learning modules on how to analyze complex systems using machine learning that give automated feedback to students. The automated feedback gives timely responses that encourage the students to continue testing and enhancing their programs. I have designed my modules to take a tactical learning approach in conveying the concepts behind correlation, linear regression, and vector-distance-based classification and clustering. On successful completion of these modules, students will know how to calculate linear regression and Pearson's correlation and how to apply classification and clustering techniques to a dataset. Working on these modules will allow the students to take what they've learned back to the classroom and then apply it to the Earth Science curriculum. During my research this summer, we applied these lessons to analyzing river deltas; we looked at trends in the different variables over time, looked for similarities in NDVI, precipitation, inundation, runoff and discharge, and attempted to predict floods based on the precipitation, wave means, area of discharge, NDVI, and inundation.
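The computational core of such a module is small. A sketch of the two named calculations with NumPy, using toy data standing in for two delta variables (e.g., precipitation vs. runoff):

```python
import numpy as np

# Toy data standing in for two delta variables (values invented)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1, 6.3])

r = np.corrcoef(x, y)[0, 1]             # Pearson's correlation coefficient
slope, intercept = np.polyfit(x, y, 1)  # least-squares linear regression
print(f"r = {r:.3f}, y ~ {slope:.3f}*x + {intercept:.3f}")
```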
NASA Technical Reports Server (NTRS)
Wolfe, R. R.
1975-01-01
Space servicing of automated payloads was studied for potential cost benefits for future payload operations. Background information is provided on space servicing in general, and on a pilot flight test program in particular. A flight test is recommended to demonstrate space servicing. An overall program plan is provided which builds upon the pilot program through an interim servicing capability. A multipayload servicing concept for the time when the full capability tug becomes operational is presented. The space test program is specifically designed to provide low-cost booster vehicles and a flight test platform for several experiments on a single flight.
Vocal activity as a low cost and scalable index of seabird colony size
Borker, Abraham L.; McKown, Matthew W.; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Tershy, Bernie R.; Croll, Donald A.
2014-01-01
Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15–111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m2. We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated to relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife.
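The detection step generalizes readily. Below is a hedged sketch of spectrogram cross-correlation with SciPy, using a synthetic chirp as the "call" template; the windowing, scoring, and threshold choices are illustrative, not the study's settings.

```python
import numpy as np
from scipy import signal

def detect_calls(audio, template, fs, z_threshold=3.0):
    """Find template-like events via spectrogram cross-correlation."""
    _, _, S = signal.spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    _, _, T = signal.spectrogram(template, fs=fs, nperseg=256, noverlap=128)
    corr = signal.correlate2d(S, T, mode="valid").ravel()  # slide along time
    z = (corr - corr.mean()) / corr.std()                  # standardized scores
    peaks, _ = signal.find_peaks(z, height=z_threshold)
    return peaks   # spectrogram time-frame indices of candidate calls

# Synthetic example: a chirp 'call' embedded mid-recording in noise
fs = 22050
t = np.linspace(0, 0.5, int(0.5 * fs), endpoint=False)
call = signal.chirp(t, f0=2000, f1=4000, t1=0.5)
noise = lambda n: 0.5 * np.random.randn(n)
recording = np.concatenate([noise(fs // 2), call + noise(call.size), noise(fs // 2)])
print("detections at frames:", detect_calls(recording, call, fs))
```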
Automation bias: decision making and performance in high-tech cockpits.
Mosier, K L; Skitka, L J; Heers, S; Burdick, M
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable in high-technology cockpits and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate automation bias, a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation events or opportunities for automation-related omission and commission errors. Although experimentally manipulated accountability demands did not significantly impact performance, post hoc analyses revealed that those pilots who reported an internalized perception of "accountability" for their performance and strategies of interaction with the automation were significantly more likely to double-check automated functioning against other cues and less likely to commit errors than those who did not share this perception. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
Nokami, Toshiki; Isoda, Yuta; Sasaki, Norihiko; Takaiso, Aki; Hayase, Shuichi; Itoh, Toshiyuki; Hayashi, Ryutaro; Shimizu, Akihiro; Yoshida, Jun-ichi
2015-03-20
The anomeric arylthio group and the hydroxyl-protecting groups of thioglycosides were optimized to construct carbohydrate building blocks for automated electrochemical solution-phase synthesis of oligoglucosamines having 1,4-β-glycosidic linkages. The optimization study included density functional theory calculations, measurements of the oxidation potentials, and the trial synthesis of the chitotriose trisaccharide. The automated synthesis of the protected potential N,N,N-trimethyl-d-glucosaminylchitotriomycin precursor was accomplished by using the optimized building block.
Automated scoring of regional lung perfusion in children from contrast enhanced 3D MRI
NASA Astrophysics Data System (ADS)
Heimann, Tobias; Eichinger, Monika; Bauman, Grzegorz; Bischoff, Arved; Puderbach, Michael; Meinzer, Hans-Peter
2012-03-01
MRI perfusion images give information about regional lung function and can be used to detect pulmonary pathologies in cystic fibrosis (CF) children. However, manual assessment of the percentage of pathologic tissue in defined lung subvolumes features large inter- and intra-observer variation, making it difficult to determine disease progression consistently. We present an automated method to calculate a regional score for this purpose. First, lungs are located based on thresholding and morphological operations. Second, statistical shape models of left and right children's lungs are initialized at the determined locations and used to precisely segment morphological images. Segmentation results are transferred to perfusion maps and employed as masks to calculate perfusion statistics. An automated threshold to determine pathologic tissue is calculated and used to determine accurate regional scores. We evaluated the method on 10 MRI images and achieved an average surface distance of less than 1.5 mm compared to manual reference segmentations. Pathologic tissue was detected correctly in 9 cases. The approach seems suitable for detecting early signs of CF and monitoring response to therapy.
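The first stage (locating the lungs by thresholding and morphological operations) can be sketched with scikit-image as follows; the statistical-shape-model refinement and perfusion scoring are not reproduced, and all sizes and values are illustrative.

```python
import numpy as np
from skimage import filters, morphology, measure

def locate_lungs(slice_2d):
    """Rough lung localization on a 2-D slice (first stage only)."""
    # Lungs appear dark on morphological MRI: keep pixels below Otsu's threshold
    mask = slice_2d < filters.threshold_otsu(slice_2d)
    mask = morphology.binary_opening(mask, morphology.disk(3))   # remove speckle
    mask = morphology.remove_small_objects(mask, min_size=500)   # drop small blobs
    labels = measure.label(mask)
    regions = sorted(measure.regionprops(labels), key=lambda r: r.area, reverse=True)
    return [r.bbox for r in regions[:2]]   # bounding boxes of the two largest regions

# Synthetic example: two dark ellipses (the 'lungs') on a bright background
img = np.full((128, 128), 200.0)
yy, xx = np.mgrid[:128, :128]
img[((yy - 64) / 30) ** 2 + ((xx - 40) / 18) ** 2 < 1] = 40
img[((yy - 64) / 30) ** 2 + ((xx - 88) / 18) ** 2 < 1] = 40
print(locate_lungs(img + np.random.randn(128, 128)))
```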
NASA Astrophysics Data System (ADS)
Kishimoto, Naoki; Nishi, Yuito
2017-04-01
Isomerization and dissociation pathways after the photoionization or electron attachment of (E)-1,2-dichloroethene were calculated with an automated exploration method utilizing a scaled hypersphere search of the anharmonic downward distortion following algorithm at the UB3LYP/6-311G(2d,d,p) level of theory. The potential energies of transition states and dissociation channels were calculated by a composite method ((RO)CBS-QB3) and compared with the breakdown diagrams and electron attachment spectra observed in previous spectroscopic studies. The results of single point calculations with several DFT and post-SCF methods are compared using the root mean square deviations from the (RO)CBS-QB3 energies for six states of anionic dichloroethene.
Kraus, Jodi; Gupta, Rupal; Yehl, Jenna; Lu, Manman; Case, David A; Gronenborn, Angela M; Akke, Mikael; Polenova, Tatyana
2018-03-22
Magic angle spinning NMR spectroscopy is uniquely suited to probe the structure and dynamics of insoluble proteins and protein assemblies at atomic resolution, with NMR chemical shifts containing rich information about biomolecular structure. Access to this information, however, is problematic, since accurate quantum mechanical calculation of chemical shifts in proteins remains challenging, particularly for 15NH. Here we report on isotropic chemical shift predictions for the carbohydrate recognition domain of microcrystalline galectin-3, obtained from hybrid quantum mechanics/molecular mechanics (QM/MM) calculations implemented using an automated fragmentation approach and very high resolution (0.86 Å lactose-bound and 1.25 Å apo form) X-ray crystal structures. The resolution of the X-ray crystal structure used as an input into the AF-NMR program did not affect the accuracy of the chemical shift calculations to any significant extent. Excellent agreement between experimental and computed shifts is obtained for 13Cα, while larger scatter is observed for 15NH chemical shifts, which are influenced to a greater extent by electrostatic interactions, hydrogen bonding, and solvation.
Clarity: An Open Source Manager for Laboratory Automation
Delaney, Nigel F.; Echenique, José Rojas; Marx, Christopher J.
2013-01-01
Software to manage automated laboratories interfaces with hardware instruments, gives users a way to specify experimental protocols, and schedules activities to avoid hardware conflicts. In addition to these basics, modern laboratories need software that can run multiple different protocols in parallel and that can be easily extended to interface with a constantly growing diversity of techniques and instruments. We present Clarity: a laboratory automation manager that is hardware agnostic, portable, extensible and open source. Clarity provides critical features including remote monitoring, robust error reporting by phone or email, and full state recovery in the event of a system crash. We discuss the basic organization of Clarity; demonstrate an example of its implementation for the automated analysis of bacterial growth; and describe how the program can be extended to manage new hardware. Clarity is mature; well documented; actively developed; written in C# for the Common Language Infrastructure; and is free and open source software. These advantages set Clarity apart from currently available laboratory automation programs. PMID:23032169
Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T
2007-03-01
Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
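The flavor of such QC checks is easy to convey in code. A minimal sketch follows, with hypothetical field names and limits (not those of the published module):

```python
def run_qc(study):
    """Flag common quality-control events in a renogram study (illustrative)."""
    findings = []
    if not (0 < study["injected_dose_mbq"] <= 400):
        findings.append("user-entered dose out of range")
    if study["n_frames"] != study["expected_frames"]:
        findings.append("missing frames in the dynamic image set")
    if not (50 <= study["kidney_roi_pixels"] <= 5000):
        findings.append("unusually sized kidney ROI")
    if study["bolus_frame"] == 0:
        findings.append("camera started after bolus reached the kidney")
    return findings or ["no QC events"]

print(run_qc({"injected_dose_mbq": 150, "n_frames": 178,
              "expected_frames": 180, "kidney_roi_pixels": 1200,
              "bolus_frame": 0}))
```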
2006 Automation Survey: The Systems Are Changing. But School Libraries Aren't
ERIC Educational Resources Information Center
Fuller, Daniel
2006-01-01
This article presents the findings of the 2006 School Library Journal-San Jose State University Automation Survey. The study takes a close look at the systems that media specialists are using, how they are using them, and what librarians want from their future automation programs. The findings reveal that while respondents were satisfied with…
Integrating Test-Form Formatting into Automated Test Assembly
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2013-01-01
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Code of Federal Regulations, 2010 CFR
2010-07-01
... automated payment systems restrictions based on the size and timing of the drawdown request subject to this... EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in a Treasury-State Agreement § 205.17 Are funds transfers delayed by automated payment systems restrictions...
ERIC Educational Resources Information Center
Majchrzak, Ann
A study was conducted of the training programs used by plants with Computer Automated Design/Computer Automated Manufacturing (CAD/CAM) to help their employees adapt to automated manufacturing. The study sought to determine the relative priorities of manufacturing establishments for training certain workers in certain skills; the status of…
Utility in a Fallible Tool: A Multi-Site Case Study of Automated Writing Evaluation
ERIC Educational Resources Information Center
Grimes, Douglas; Warschauer, Mark
2010-01-01
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring
ERIC Educational Resources Information Center
Haberman, Shelby J.; Sinharay, Sandip
2010-01-01
Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
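For a concrete sense of the comparison, the cumulative logit (proportional-odds) model can be fit in Python with statsmodels' OrderedModel — an assumption standing in for whatever software the authors used — on invented essay features and ordinal scores:

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Invented data: two essay features and ordinal human scores 0-3
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
latent = 1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.logistic(size=200)
y = np.digitize(latent, bins=[-1.5, 0.0, 1.5])   # scores 0..3

model = OrderedModel(y, X, distr="logit")         # cumulative logit model
result = model.fit(method="bfgs", disp=False)
print(result.params)             # feature weights followed by cut points
print(result.predict(X[:3]))     # per-essay probabilities of each score
```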
A Program Certification Assistant Based on Fully Automated Theorem Provers
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
We describe a certification assistant to support formal safety proofs for programs. It is based on a graphical user interface that hides the low-level details of first-order automated theorem provers while supporting limited interactivity: it allows users to customize and control the proof process on a high level, manages the auxiliary artifacts produced during this process, and provides traceability between the proof obligations and the relevant parts of the program. The certification assistant is part of a larger program synthesis system and is intended to support the deployment of automatically generated code in safety-critical applications.
ARES: automated response function code. Users manual. [HPGAM and LSQVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maung, T.; Reynolds, G.M.
This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step by step instructions for using the complete code package on a HP-1000 system. This code is designed to calculate response functions of NaI gamma-ray detectors, with cylindrical or rectangular geometries.
NASA Tech Briefs, May 1994. Volume 18, No. 5
NASA Technical Reports Server (NTRS)
1994-01-01
Topics covered include: Robotics/Automation; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery/Automation; Manufacturing/Fabrication; Mathematics and Information Sciences; Life Sciences; Books and Reports.
Urech, Tracy H.; Woodard, LeChauncy D.; Virani, Salim S.; Dudley, R. Adams; Lutschg, Meghan Z.; Petersen, Laura A.
2015-01-01
Background: Hospital report cards and financial incentives linked to performance require clinical data that are reliable, appropriate, timely, and cost-effective to process. Pay-for-performance plans are transitioning to automated electronic health record (EHR) data as an efficient method to generate data needed for these programs. Objective: To determine how well data from automated processing of structured EHR fields (AP-EHR) reflect data from manual chart review and the impact of these data on performance rewards. Research Design: Cross-sectional analysis of performance measures used in a cluster randomized trial assessing the impact of financial incentives on guideline-recommended care for hypertension. Subjects: A total of 2,840 patients with hypertension assigned to participating physicians at 12 Veterans Affairs hospital-based outpatient clinics. Fifty-two physicians and 33 primary care personnel received incentive payments. Measures: Overall, positive and negative agreement indices and Cohen's kappa were calculated for assessments of guideline-recommended antihypertensive medication use, blood pressure (BP) control, and appropriate response to uncontrolled BP. Pearson's correlation coefficient was used to assess how similar participants' calculated earnings were between the data sources. Results: By manual chart review data, 72.3% of patients were considered to have received guideline-recommended antihypertensive medications compared to 65.0% by AP-EHR review (k=0.51). Manual review indicated 69.5% of patients had controlled BP compared to 66.8% by AP-EHR review (k=0.87). Compared to 52.2% of patients per the manual review, 39.8% received an appropriate response by AP-EHR review (k=0.28). Participants' incentive payments calculated using the two methods were highly correlated (r≥0.98). Using the AP-EHR data to calculate earnings, participants' payment changes ranged from a decrease of $91.00 (−30.3%) to an increase of $18.20 (+7.4%) for medication use (IQR, −14.4% to 0%) and a decrease of $100.10 (−31.4%) to an increase of $36.40 (+15.4%) for BP control or appropriate response to uncontrolled BP (IQR, −11.9% to −6.1%). Conclusions: Pay-for-performance plans that use only EHR data should carefully consider the measures and the structure of the EHR before data collection and financial incentive disbursement. For this study, we feel that a 10% difference in the total amount of incentive earnings disbursed based on AP-EHR data compared to manual review is acceptable given the time and resources required to abstract data from medical records. PMID:26340661
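The agreement statistics reported here are straightforward to compute from a 2x2 cross-classification of the two data sources. A sketch with invented counts (the paper reports kappas, not raw tables):

```python
def agreement_stats(a, b, c, d):
    """Cohen's kappa and positive/negative agreement from a 2x2 table:
    a = both methods positive, b = chart+/EHR-, c = chart-/EHR+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    ppos = 2 * a / (2 * a + b + c)                        # positive agreement index
    pneg = 2 * d / (2 * d + b + c)                        # negative agreement index
    return kappa, ppos, pneg

# Illustrative counts only, not the study's data
print(agreement_stats(a=1700, b=350, c=150, d=640))
```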
Inadvertent Intruder Calculations for F Tank Farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koffman, L
2005-09-12
Savannah River National Laboratory (SRNL) has been providing radiological performance assessment analysis for Savannah River Site (SRS) solid waste disposal facilities (McDowell-Boyer 2000). The performance assessment considers numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. An Automated Intruder Analysis application was developed by SRNL (Koffman 2004) that simplifies the inadvertent intruder analysis into a routine, automated calculation. Based on SRNL's experience, personnel from Planning Integration & Technology of the Closure Business Unit asked SRNL to assist with inadvertent intruder calculations for F Tank Farm to support the development of the Tank Closure Waste Determination Document. Meetings were held to discuss the scenarios to be calculated and the assumptions to be used in the calculations. As a result of the meetings, SRNL was asked to perform four scenario calculations. Two of the scenarios are the same as those calculated by the Automated Intruder Analysis application and these can be calculated directly by providing appropriate inputs. The other two scenarios involve use of groundwater by the intruder, and the Automated Intruder Analysis application was adapted to perform these calculations. The four calculations to be performed are: (1) a post-drilling scenario in which the drilling penetrates a transfer line; (2) a calculation of internal exposure due to drinking water from a well located near a waste tank; (3) a post-drilling calculation in which waste is introduced by irrigation of the garden with water from a well located near a waste tank; and (4) a resident scenario where a house is built above transfer lines. Note that calculations 1 and 4 use sources from the waste inventory in the transfer line (given in Table 1) whereas calculations 2 and 3 use sources from groundwater beneath the waste tank (given in Appendix B). It is important to recognize that there are two different sources in the calculations. In these calculations, assumptions are made for parameter values. Three key parameters are the size of the garden, the amount of vegetables eaten, and the distance of the well from the waste tank. For these three parameters, different values are considered in the calculations to determine the impact of the change in these parameters. Another key parameter is the length of time of institutional control, which determines when an inadvertent intruder could first be exposed. The standard length of time for institutional control is 100 years from the time of closure. In this analysis, waste inventory values are used from year 2005 but tanks will not be closed until year 2020. Thus, the effective length of time of institutional control used in the calculations is 115 years from year 2005, which is taken to be time zero for radiological decay calculations. All calculations are carried out for a period of 10,000 years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm utilizes normalized mutual information as the similarity metric, affine registration combined with multiresolution B-Spline registration, and then fuses the atlases together using the label fusion strategy via Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel. It describes a relative position from each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. Then, we adopt random forest (RF) in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms, as the classifier. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with reference box for the brainstem, and a single-atlas strategy with reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method improved from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
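The random-forest step maps naturally onto scikit-learn, which the abstract names. The sketch below trains a voxel classifier on synthetic five-component feature vectors (I, D, B, C, S) and is illustrative only; the feature distributions are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented per-voxel feature vectors with the five components named above:
# intensity (I), distance map (D), box (B), center of gravity (C), stable point (S)
rng = np.random.default_rng(42)
inside  = rng.normal([1.0, 2.0, 0.3, 5.0, 8.0],   [0.2, 1.0, 0.1, 2.0, 2.0], (2000, 5))
outside = rng.normal([0.4, 9.0, 0.8, 15.0, 20.0], [0.2, 2.0, 0.1, 3.0, 4.0], (2000, 5))
X = np.vstack([inside, outside])
y = np.array([1] * 2000 + [0] * 2000)      # 1 = voxel inside the ROI

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("P(inside ROI) for three voxels:", clf.predict_proba(X[:3])[:, 1])
```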
Automated Student and Adult Learner Follow-up System. Final Report for Program Year 1993-94.
ERIC Educational Resources Information Center
Texas State Occupational Information Coordinating Committee, Austin.
The Texas Automated Student and Adult Learner Follow-Up System was developed as part of a larger effort to improve and coordinate the delivery of education and training of a skilled work force. The primary task of the Follow-Up System in Program Year 1993-94 was to obtain outcome information on the former students and participants of the work…
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
On coronary MR angiography (CMRA), cardiac motion worsens image quality. To improve image quality, detection of cardiac motion, and especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images/R-R). Chronological changes of the signal were evaluated by Fourier transformation of each pixel of the images. Noise reduction by subtraction and extraction processing was performed. To extract structures with greater motion, such as the coronary arteries, morphological filtering and labeling were added. Using these image-processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared images obtained with the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times were calculated with our method. The calculated coronary static time was shorter than that of the ordinary manual method, and the scan time became about 10% longer than that of the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA and to simplify processing.
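The idea of an automatically computed static window can be sketched as follows. Note this toy version summarizes motion by simple frame differencing rather than the pixelwise Fourier analysis the authors describe, and all thresholds and sizes are invented.

```python
import numpy as np

def coronary_static_window(cine, thresh_frac=0.2):
    """Longest 'quiet' run of cardiac phases in a cine series.

    cine: array (n_frames, ny, nx) covering one R-R interval (e.g., 80 frames).
    Returns (first quiet phase index, run length).
    """
    diffs = np.abs(np.diff(cine, axis=0)).mean(axis=(1, 2))  # mean change per phase
    quiet = diffs < thresh_frac * diffs.max()
    best_start = best_len = run_start = run_len = 0
    for i, q in enumerate(quiet):                            # longest quiet run
        if q:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    return best_start, best_len

# Synthetic cine: motion early in the cycle, quiescence in 'mid-diastole'
frames = np.random.randn(80, 32, 32) * np.r_[np.ones(50), np.full(30, 0.05)][:, None, None]
print(coronary_static_window(frames))
```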
7 CFR 247.25 - Allowable uses of administrative funds and other funds.
Code of Federal Regulations, 2013 CFR
2013-01-01
... equipment include automated information systems, automated data processing equipment, and other computer... sale of packing containers or pallets, and the salvage of commodities. Program income does not include...
7 CFR 247.25 - Allowable uses of administrative funds and other funds.
Code of Federal Regulations, 2014 CFR
2014-01-01
... equipment include automated information systems, automated data processing equipment, and other computer... sale of packing containers or pallets, and the salvage of commodities. Program income does not include...
7 CFR 247.25 - Allowable uses of administrative funds and other funds.
Code of Federal Regulations, 2011 CFR
2011-01-01
... equipment include automated information systems, automated data processing equipment, and other computer... sale of packing containers or pallets, and the salvage of commodities. Program income does not include...
7 CFR 247.25 - Allowable uses of administrative funds and other funds.
Code of Federal Regulations, 2012 CFR
2012-01-01
... equipment include automated information systems, automated data processing equipment, and other computer... sale of packing containers or pallets, and the salvage of commodities. Program income does not include...
Postimplant dosimetry using a Monte Carlo dose calculation engine: a new clinical standard.
Carrier, Jean-François; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, André-Guy; Vigneault, Eric; Beaulieu, Luc
2007-07-15
To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry. To compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are specifically investigated. An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. For the clinical target volume (CTV) D(90) parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation is mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future.
Domain specific software architectures: Command and control
NASA Technical Reports Server (NTRS)
Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave
1992-01-01
GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.
Automating spectral measurements
NASA Astrophysics Data System (ADS)
Goldstein, Fred T.
2008-09-01
This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
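As a minimal illustration of the COM automation-client pattern described above, the following Python fragment (pywin32, Windows-only) drives Excel, whose real registered ProgID "Excel.Application" stands in here for a DAQ automation server; this is a sketch of the pattern, not FTG's software.

```python
import win32com.client  # pywin32; Windows-only

excel = win32com.client.Dispatch("Excel.Application")  # real Excel ProgID
excel.Visible = True                        # show the server's own UI
book = excel.Workbooks.Add()
sheet = book.Worksheets(1)
sheet.Range("A1").Value = "wavelength_nm"   # write data as a DAQ client might
sheet.Range("B1").Value = "transmittance"
book.Close(SaveChanges=False)
excel.Quit()
```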
Interactive Spectral Analysis and Computation (ISAAC)
NASA Technical Reports Server (NTRS)
Lytle, D. M.
1992-01-01
Isaac is a task in the NSO external package for IRAF. A descendant of a FORTRAN program written to analyze data from a Fourier transform spectrometer, the current implementation has been generalized sufficiently to make it useful for general spectral analysis and other one dimensional data analysis tasks. The user interface for Isaac is implemented as an interpreted mini-language containing a powerful, programmable vector calculator. Built-in commands provide much of the functionality needed to produce accurate line lists from input spectra. These built-in functions include automated spectral line finding, least squares fitting of Voigt profiles to spectral lines including equality constraints, various filters including an optimal filter construction tool, continuum fitting, and various I/O functions.
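A least-squares Voigt fit of the kind Isaac performs can be sketched with SciPy, evaluating the Voigt profile via the Faddeeva function; the equality-constraint machinery is omitted and the data are synthetic.

```python
import numpy as np
from scipy.special import wofz
from scipy.optimize import curve_fit

def voigt(x, amp, center, sigma, gamma):
    """Voigt profile: Gaussian (sigma) convolved with Lorentzian (gamma)."""
    z = ((x - center) + 1j * gamma) / (sigma * np.sqrt(2))
    return amp * wofz(z).real / (sigma * np.sqrt(2 * np.pi))

# Synthetic spectral line plus noise, then a least-squares Voigt fit
x = np.linspace(-5, 5, 400)
y = voigt(x, amp=3.0, center=0.3, sigma=0.5, gamma=0.4)
y += 0.01 * np.random.randn(x.size)
popt, _ = curve_fit(voigt, x, y, p0=[1.0, 0.0, 1.0, 1.0])
print("amp, center, sigma, gamma =", np.round(popt, 3))
```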
Methodology for Prototyping Increased Levels of Automation for Spacecraft Rendezvous Functions
NASA Technical Reports Server (NTRS)
Hart, Jeremy J.; Valasek, John
2007-01-01
The Crew Exploration Vehicle necessitates higher levels of automation than previous NASA vehicles, due to program requirements for automation, including Automated Rendezvous and Docking. Studies of spacecraft development often point to the locus of decision-making authority between humans and computers (i.e. automation) as a prime driver for cost, safety, and mission success. Therefore, a critical component in the Crew Exploration Vehicle development is the determination of the correct level of automation. To identify the appropriate levels of automation and autonomy to design into a human space flight vehicle, NASA has created the Function-specific Level of Autonomy and Automation Tool. This paper develops a methodology for prototyping increased levels of automation for spacecraft rendezvous functions. This methodology is used to evaluate the accuracy of the Function-specific Level of Autonomy and Automation Tool specified levels of automation, via prototyping. Spacecraft rendezvous planning tasks are selected and then prototyped in Matlab using Fuzzy Logic techniques and existing Space Shuttle rendezvous trajectory algorithms.
Dippon, Matthias; Petzl, Wolfram; Lange, Dorothee; Zerbe, Holm
2017-02-09
Perinatal calf mortality is a current problem in dairy farming with regard to ethics and economic losses. Optimizing calving management by frequent monitoring helps increase the survival rate. The objective of this study was to evaluate the breed- and parity-dependent applicability of a recently introduced automated parturition control system with regard to its reliability in the field. Seven days prior to the calculated calving date the automated parturition control system was applied intravaginally in 23 primiparous and 31 multiparous cows in a Holstein-Friesian (HF) and Simmental (FV) crossbred herd. In the case of three consecutive false alarms the animal was removed from the study and was rated as false positive (FP). The statistical association between FP alarms and the genetic proportion of HF was tested using the Mann-Whitney U test. The automated parturition control system could successfully be applied in all animals with a genetic HF proportion > 66%. Animals with a predominant FV proportion (> 66%) frequently showed FP alarms (31.6%). Furthermore, multiparous cows lost the intravaginal sender more frequently than primiparous cows (29.0% vs. 8.7%). Purulent vaginal discharge was observed in 72.2% of the heavily pregnant cows. The automated parturition control system can successfully be applied in HF cows. Due to frequent losses of the intravaginal sender we cannot recommend its use in cows with a genetic FV proportion > 66%. Future developments of intravaginal automated parturition control systems should take into account the influence of different breeds on their applicability.
Toward a human-centered aircraft automation philosophy
NASA Technical Reports Server (NTRS)
Billings, Charles E.
1989-01-01
The evolution of automation in civil aircraft is examined in order to discern trends in the respective roles and functions of automation technology and the humans who operate these aircraft. The effects of advances in automation technology on crew reaction is considered and it appears that, though automation may well have decreased the frequency of certain types of human errors in flight, it may also have enabled new categories of human errors, some perhaps less obvious and therefore more serious than those it has alleviated. It is suggested that automation could be designed to keep the pilot closer to the control of the vehicle, while providing an array of information management and aiding functions designed to provide the pilot with data regarding flight replanning, degraded system operation, and the operational status and limits of the aircraft, its systems, and the physical and operational environment. The automation would serve as the pilot's assistant, providing and calculating data, watching for the unexpected, and keeping track of resources and their rate of expenditure.
Prototype space station automation system delivered and demonstrated at NASA
NASA Technical Reports Server (NTRS)
Block, Roger F.
1987-01-01
The Automated Subsystem Control for Life Support System (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of Space Station subsystems. The hierarchical and distributed real time controls system places the required controls authority at every level of the automation system architecture. As a demonstration of the automation technique, the ASCLSS system automated the Air Revitalization Group (ARG) of the Space Station regenerative Environmental Control and Life Support System (ECLSS) using real-time, high fidelity simulators of the ARG processes. This automation system represents an early flight prototype and an important test bed for evaluating Space Station controls technology including future application of Ada software in real-time control and the development and demonstration of embedded artificial intelligence and expert systems (AI/ES) in distributed automation and controls systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This paper reports on an automated metering/proving system for custody transfer of crude oil at the Phillips 66 Co. tanker unloading terminal in Freeport, Texas. It is described as one of the most sophisticated systems developed. The menu-driven, one-button automation removes the proving sequence entirely from manual control. The system is also said to be cost-effective and versatile compared to a dedicated flow computer with API calculation capabilities. Developed by Puffer-Sweiven, systems integrators, the new technology additionally is thought to be the first custody transfer system to employ a programmable logic controller (PLC). The PLC provides the automation, gathers and stores all raw data, and prints alarms. The system also uses a personal computer operator interface (OI) that runs on the Intel iRMX real-time operating system. The OI is loaded with Puffer-Sweiven application software that performs API meter factor and volume correction calculations as well as presents color graphics and generates reports.
NASA Astrophysics Data System (ADS)
Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.
2018-05-01
The objects of this research are the component base of control and automation devices, including annular elastic sensing elements, methods for modeling them, calculation algorithms, and software packages for automating their design. The article is devoted to the development of a computer-aided design system for elastic sensing elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, as well as the results of static and dynamic analysis, the calculation of elastic elements is presented using the capabilities of modern software systems based on numerical simulation. In the simulation, the model was divided into a hexahedral grid of finite elements with a maximum size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.
"First generation" automated DNA sequencing technology.
Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M
2011-10-01
Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.
Translations from Kommunist, Number 13, September 1978
1978-10-30
programmed machine tool here is merely a component of a more complex reprogrammable technological system. This includes the robot machine tools with...sufficient possibilities for changing technological operations and processes and automated technological lines. The reprogrammable automated sets will...simulate the possibilities of such sets. A new technological level will be developed in industry related to reprogrammable automated sets, their design
Space station automation study: Autonomous systems and assembly, volume 2
NASA Technical Reports Server (NTRS)
Bradford, K. Z.
1984-01-01
This final report, prepared by Martin Marietta Denver Aerospace, provides the technical results of their input to the Space Station Automation Study, the purpose of which is to develop informed technical guidance in the use of autonomous systems to implement space station functions, many of which can be programmed in advance and are well suited for automated systems.
Space power subsystem automation technology
NASA Technical Reports Server (NTRS)
Graves, J. R. (Compiler)
1982-01-01
The technology issues involved in power subsystem automation and the reasonable objectives to be sought in such a program were discussed. The complexities, uncertainties, and alternatives of power subsystem automation, along with the advantages from both an economic and a technological perspective were considered. Whereas most spacecraft power subsystems now use certain automated functions, the idea of complete autonomy for long periods of time is almost inconceivable. Thus, it seems prudent that the technology program for power subsystem automation be based upon a growth scenario which should provide a structured framework of deliberate steps to enable the evolution of space power subsystems from the current practice of limited autonomy to a greater use of automation with each step being justified on a cost/benefit basis. Each accomplishment should move toward the objectives of decreased requirement for ground control, increased system reliability through onboard management, and ultimately lower energy cost through longer life systems that require fewer resources to operate and maintain. This approach seems well-suited to the evolution of more sophisticated algorithms and eventually perhaps even the use of some sort of artificial intelligence. Multi-hundred kilowatt systems of the future will probably require an advanced level of autonomy if they are to be affordable and manageable.
Long-Term Pavement Performance Automated Faulting Measurement
DOT National Transportation Integrated Search
2015-02-01
This study focused on identifying transverse joint locations on jointed plain concrete pavements using an automated joint detection algorithm and computing faulting at these locations using Long-Term Pavement Performance (LTPP) Program profile data c...
46 CFR 62.50-20 - Additional requirements for minimally attended machinery plants.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Automated Self-propelled Vessel Manning § 62.50-20 Additional... program must be functioning prior to the completion of the evaluation period for reduced manning required...
46 CFR 62.50-20 - Additional requirements for minimally attended machinery plants.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Automated Self-propelled Vessel Manning § 62.50-20 Additional... program must be functioning prior to the completion of the evaluation period for reduced manning required...
46 CFR 62.50-20 - Additional requirements for minimally attended machinery plants.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Automated Self-propelled Vessel Manning § 62.50-20 Additional... program must be functioning prior to the completion of the evaluation period for reduced manning required...
46 CFR 62.50-20 - Additional requirements for minimally attended machinery plants.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Automated Self-propelled Vessel Manning § 62.50-20 Additional... program must be functioning prior to the completion of the evaluation period for reduced manning required...
Automated validation of a computer operating system
NASA Technical Reports Server (NTRS)
Dervage, M. M.; Milberg, B. A.
1970-01-01
Programs apply selected input/output loads to a complex computer operating system and measure the performance of that system under such loads. The technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.
Advanced Technologies and Methodology for Automated Ultrasonic Testing Systems Quantification
DOT National Transportation Integrated Search
2011-04-29
For automated ultrasonic testing (AUT) detection and sizing accuracy, this program developed a methodology for quantification of AUT systems, advancing and quantifying AUT systems' image-capture capabilities, quantifying the performance of multiple AUT...
Benchmarking Glucose Results through Automation: The 2009 Remote Automated Laboratory System Report
Anderson, Marcy; Zito, Denise; Kongable, Gail
2010-01-01
Background: Hyperglycemia in the adult inpatient population remains a topic of intense study in U.S. hospitals. Most hospitals have established glycemic control programs but are unable to determine their impact. The 2009 Remote Automated Laboratory System (RALS) Report provides trends in glycemic control over 4 years to 576 U.S. hospitals to support their effort to manage inpatient hyperglycemia. Methods: A proprietary software application feeds de-identified patient point-of-care blood glucose (POC-BG) data from the Medical Automation Systems RALS-Plus data management system to a central server. Analyses include the number of tests and the mean and median BG results for intensive care unit (ICU), non-ICU, and each hospital compared to the aggregate of the other hospitals. Results: More than 175 million BG results were extracted from 2006–2009; 25% were from the ICU. Mean range of BG results for all inpatients in 2006, 2007, 2008, and 2009 was 142.2–201.9, 145.6–201.2, 140.6–205.7, and 140.7–202.4 mg/dl, respectively. The range for ICU patients was 128–226.5, 119.5–219.8, 121.6–226.0, and 121.1–217 mg/dl, respectively. The range for non-ICU patients was 143.4–195.5, 148.6–199.8, 145.2–201.9, and 140.7–203.6 mg/dl, respectively. Hyperglycemia rates of >180 mg/dl in 2008 and 2009 were examined, and hypoglycemia rates of <40 mg/dl (severe) and <70 mg/dl (moderate) in both 2008 and 2009 were calculated. Conclusions: From these data, hospitals can determine the current state of glycemic control in their hospital and in comparison to other hospitals. For many, glycemic control has improved. Automated POC-BG data management software can assist in this effort. PMID:21129348
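The per-hospital metrics in such a report reduce to simple array operations. A sketch with invented glucose values:

```python
import numpy as np

def glycemic_summary(bg_mgdl):
    """Summary statistics of point-of-care blood glucose results (mg/dl)."""
    bg = np.asarray(bg_mgdl, dtype=float)
    return {
        "n": int(bg.size),
        "mean": round(float(bg.mean()), 1),
        "median": float(np.median(bg)),
        "pct_hyper_gt180": round(100 * float((bg > 180).mean()), 1),
        "pct_hypo_lt70": round(100 * float((bg < 70).mean()), 1),
        "pct_severe_lt40": round(100 * float((bg < 40).mean()), 1),
    }

# Sample values are invented, not from the report
print(glycemic_summary([95, 142, 210, 188, 63, 174, 305, 38, 120, 165]))
```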
Workload Capacity: A Response Time-Based Measure of Automation Dependence.
Yamani, Yusuke; McCarley, Jason S
2016-05-01
An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, COR(t) and CAND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
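The capacity coefficient can be estimated directly from empirical RT distributions via integrated hazard functions. The sketch below implements the standard OR-gate form C_OR(t) = H_team(t) / [H_human(t) + H_aid(t)] on synthetic response times; it is a simplification of the paper's COR(t)/CAND(t) analysis, with all distributions invented.

```python
import numpy as np

def integrated_hazard(rts, t):
    """Nonparametric H(t) = -log(1 - F(t)) from observed response times."""
    F = np.searchsorted(np.sort(rts), t, side="right") / len(rts)
    return -np.log(np.clip(1.0 - F, 1e-9, 1.0))

def capacity_or(rt_team, rt_human, rt_aid, t):
    """OR-gate capacity coefficient; >1 super-capacity, <1 limited capacity."""
    return integrated_hazard(rt_team, t) / (
        integrated_hazard(rt_human, t) + integrated_hazard(rt_aid, t))

rng = np.random.default_rng(1)
rt_human = rng.gamma(5.0, 0.12, 500)   # synthetic unaided RTs (s), mean ~0.6
rt_aid   = rng.gamma(5.0, 0.15, 500)   # synthetic automation 'RTs', mean ~0.75
rt_team  = rng.gamma(5.0, 0.08, 500)   # synthetic aided-team RTs, mean ~0.4
for t in (0.4, 0.6, 0.8):
    print(f"C_OR({t}) = {capacity_or(rt_team, rt_human, rt_aid, t):.2f}")
```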
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christianson, O; Winslow, J; Samei, E
2014-06-15
Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image quality across CT vendors.
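The SSDE computation named above is a one-line conversion once patient size is known. The sketch below uses the exponential fit to the AAPM TG-204 conversion factors for the 32 cm body phantom; the coefficients are quoted from the report's fit and worth verifying against the published tables before any real use.

```python
import numpy as np

def ssde(ctdi_vol, ap_cm, lat_cm):
    """Size-specific dose estimate (mGy) from CTDIvol and patient dimensions.

    Uses the AAPM TG-204 exponential fit for the 32 cm reference phantom
    (coefficients quoted from memory of the report; verify before use).
    """
    d_eff = np.sqrt(ap_cm * lat_cm)               # effective diameter (cm)
    f = 3.704369 * np.exp(-0.03671937 * d_eff)    # size-dependent conversion factor
    return ctdi_vol * f

# Example: CTDIvol of 10 mGy, patient measuring 26 x 34 cm in the localizer
print(f"SSDE = {ssde(10.0, 26.0, 34.0):.1f} mGy")
```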
Traceability Through Automatic Program Generation
NASA Technical Reports Server (NTRS)
Richardson, Julian; Green, Jeff
2003-01-01
Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.
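A minimal Python sketch of the perturbation technique, assuming a hypothetical command-line generator that reads a spec file and writes the target program to stdout; the mutation convention and tool invocation are illustrative, not part of AUTOFILTER or GCC:

    import difflib
    import subprocess
    import tempfile

    def synthesize(spec_lines, generator_cmd):
        """Run the (hypothetical) generator on a spec and return target lines."""
        with tempfile.NamedTemporaryFile("w", suffix=".spec", delete=False) as f:
            f.write("\n".join(spec_lines))
            spec_path = f.name
        result = subprocess.run(generator_cmd + [spec_path],
                                capture_output=True, text=True)
        return result.stdout.splitlines()

    def trace_links(spec_lines, generator_cmd, mutate=lambda s: "// " + s):
        """Map each spec line to the baseline target lines that change when it is perturbed."""
        baseline = synthesize(spec_lines, generator_cmd)
        links = {}
        for i, line in enumerate(spec_lines):
            perturbed = spec_lines[:i] + [mutate(line)] + spec_lines[i + 1:]
            target = synthesize(perturbed, generator_cmd)
            changed = []
            matcher = difflib.SequenceMatcher(a=baseline, b=target)
            for tag, a0, a1, _, _ in matcher.get_opcodes():
                if tag != "equal":
                    changed.extend(range(a0, a1))  # baseline lines touched by this change
            links[i] = changed
        return links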
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanna Oxstrand; Katya Le Blanc
The Human-Automation Collaboration (HAC) research effort is a part of the Department of Energy (DOE) sponsored Advanced Small Modular Reactor (AdvSMR) program conducted at Idaho National Laboratory (INL). The DOE AdvSMR program focuses on plant design and management, on reducing capital costs and plant operations and maintenance (O&M) costs, and on the cost benefits of factory production.
Identifying and locating surface defects in wood: Part of an automated lumber processing system
Richard W. Conners; Charles W. McMillin; Kingyao Lin; Ramon E. Vasquez-Espinosa
1983-01-01
Continued increases in the cost of materials and labor make it imperative for furniture manufacturers to control costs by improved yield and increased productivity. This paper describes an Automated Lumber Processing System (ALPS) that employs computer tomography, optical scanning technology, the calculation of an optimum cutting strategy, and a computer-driven laser...
Dick Stanley; Bruce Jackson
1995-01-01
The cost-effectiveness of park operations is often neglected because the necessary information is laborious to compile. The information, however, is critical if we are to derive maximum benefit from scarce resources. This paper describes an automated system for calculating cost-effectiveness ratios with minimum effort using data from existing databases.
A system of automated processing of deep water hydrological information
NASA Technical Reports Server (NTRS)
Romantsov, V. A.; Dyubkin, I. A.; Klyukbin, L. N.
1974-01-01
An automated system for primary and scientific analysis of deep water hydrological information is presented. Primary processing of the data in this system is carried out on a drifting station, which also calculates the parameters of vertical stability of the sea layers, as well as their depths and altitudes. Methods of processing the raw data are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orimoto, Yuuichi; Aoki, Yuriko
2016-07-14
An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
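The recession-curve-displacement method rests on a recession index K, the time for streamflow to decline through one log cycle in the absence of recharge. A sketch of estimating K by a log-linear fit to a recession segment; the full RORA computation (peak screening, critical time, and displacement of the extrapolated recession) involves additional steps not shown:

    import numpy as np

    def recession_index_days(days, discharge):
        """Recession index K: days per log10 cycle of streamflow decline,
        from a log-linear fit to a hydrograph recession segment."""
        slope, _intercept = np.polyfit(np.asarray(days, dtype=float),
                                       np.log10(discharge), 1)
        return -1.0 / slope  # slope is negative during recession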
Integrated flexible manufacturing program for manufacturing automation and rapid prototyping
NASA Technical Reports Server (NTRS)
Brooks, S. L.; Brown, C. W.; King, M. S.; Simons, W. R.; Zimmerman, J. J.
1993-01-01
The Kansas City Division of Allied Signal Inc., as part of the Integrated Flexible Manufacturing Program (IFMP), is developing an integrated manufacturing environment. Several systems are being developed to produce standards and automation tools for specific activities within the manufacturing environment. The Advanced Manufacturing Development System (AMDS) is concentrating on information standards (STEP) and product data transfer; the Expert Cut Planner system (XCUT) is concentrating on machining operation process planning standards and automation capabilities; the Advanced Numerical Control system (ANC) is concentrating on NC data preparation standards and NC data generation tools; the Inspection Planning and Programming Expert system (IPPEX) is concentrating on inspection process planning, coordinate measuring machine (CMM) inspection standards and CMM part program generation tools; and the Intelligent Scheduling and Planning System (ISAPS) is concentrating on planning and scheduling tools for a flexible manufacturing system environment. All of these projects are working together to address information exchange, standardization, and information sharing to support rapid prototyping in a Flexible Manufacturing System (FMS) environment.
van den Hoven, Allard T; Mc-Ghie, Jackie S; Chelu, Raluca G; Duijnhouwer, Anthonie L; Baggen, Vivan J M; Coenen, Adriaan; Vletter, Wim B; Dijkshoorn, Marcel L; van den Bosch, Annemien E; Roos-Hesselink, Jolien W
2017-12-01
Integration of volumetric heart chamber quantification by 3D echocardiography into clinical practice has been hampered by several factors which a new fully automated algorithm (Left Heart Model, LHM) may help overcome. This study therefore aims to evaluate the feasibility and accuracy of the LHM software in quantifying left atrial and left ventricular volumes and left ventricular ejection fraction in a cohort of patients with a bicuspid aortic valve. Patients with a bicuspid aortic valve were prospectively included. All patients underwent 2D and 3D transthoracic echocardiography and computed tomography. Left atrial and ventricular volumes were obtained using the automated program, which did not require manual contour detection. For comparison, manual and semi-automated measurements were performed using conventional 2D and 3D datasets. 53 patients were included; in four of those patients no 3D dataset could be acquired. Additionally, 12 patients were excluded based on poor imaging quality. Left ventricular end-diastolic and end-systolic volumes and ejection fraction calculated by the LHM correlated well with manual 2D and 3D measurements (Pearson's r between 0.43 and 0.97, p < 0.05). Left atrial volume (LAV) also correlated significantly, although the LHM estimated larger LAVs than both 2DE and 3DE (Pearson's r between 0.61 and 0.81, p < 0.01). The fully automated software works well in a real-world setting and helps to overcome some of the major hurdles to integrating 3D analysis into daily practice, as it is user-independent and highly reproducible in a group of patients with a clearly defined and well-studied valvular abnormality.
Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza
2014-09-16
Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods.
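The reported mean errors are presumably mean Euclidean distances between manual and automated landmark coordinates; for reference, a minimal sketch of that computation (the (N, 3) array layout is assumed):

    import numpy as np

    def mean_landmark_error_mm(manual_xyz, auto_xyz):
        """Mean Euclidean distance between paired (N, 3) landmark coordinate arrays."""
        diffs = np.asarray(manual_xyz, float) - np.asarray(auto_xyz, float)
        return float(np.linalg.norm(diffs, axis=1).mean())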
Automated Long-Term Scheduling for the SOFIA Airborne Observatory
NASA Technical Reports Server (NTRS)
Civeit, Thomas
2013-01-01
The NASA Stratospheric Observatory for Infrared Astronomy (SOFIA) is a joint US/German project to develop and operate a gyro-stabilized 2.5-meter telescope in a Boeing 747SP. SOFIA's first science observations were made in December 2010. During 2011, SOFIA accomplished 30 flights in the "Early Science" program as well as a deployment to Germany. The new observing period, known as Cycle 1, is scheduled to begin in 2012. It includes 46 science flights grouped in four multi-week observing campaigns spread through a 13-month span. Automation of the flight scheduling process poses a major challenge for SOFIA mission operations: first, because it is needed to mitigate the observatory's relatively high cost per unit observing time compared with space-borne missions; second, because the automated scheduling techniques available for ground-based and space-based telescopes are inappropriate for an airborne observatory. Although serious attempts have been made in the past to solve part of the problem, until recently mission operations staff were still manually scheduling flights. We present in this paper a new automated solution for generating SOFIA long-term schedules that will be used in operations from the Cycle 1 observing period. We describe the constraints that must be satisfied to solve the SOFIA scheduling problem in the context of real operations. We establish key formulas required to efficiently calculate the aircraft course over ground when evaluating flight schedules. We describe the foundations of the SOFIA long-term scheduler, the constraint representation, and the random-search-based algorithm that generates observation and instrument schedules. Finally, we report on how the new long-term scheduler has been used in operations to date.
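One building block of the course-over-ground formulas is presumably the standard great-circle initial bearing; a sketch on a spherical earth, ignoring winds, which a real flight scheduler such as SOFIA's would additionally have to model:

    import math

    def initial_course_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle course (degrees true) from point 1 toward point 2."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(p2)
        y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360.0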
Effects of Automation Types on Air Traffic Controller Situation Awareness and Performance
NASA Technical Reports Server (NTRS)
Sethumadhavan, A.
2009-01-01
The Joint Planning and Development Office has proposed the introduction of automated systems to help air traffic controllers handle the increasing volume of air traffic in the next two decades (JPDO, 2007). Because fully automated systems leave operators out of the decision-making loop (e.g., Billings, 1991), it is important to determine the right level and type of automation that will keep air traffic controllers in the loop. This study examined the differences in the situation awareness (SA) and collision detection performance of individuals when they worked with information acquisition, information analysis, decision and action selection and action implementation automation to control air traffic (Parasuraman, Sheridan, & Wickens, 2000). When the automation was unreliable, the time taken to detect an upcoming collision was significantly longer for all the automation types compared with the information acquisition automation. This poor performance following automation failure was mediated by SA, with lower SA yielding poor performance. Thus, the costs associated with automation failure are greater when automation is applied to higher order stages of information processing. Results have practical implications for automation design and development of SA training programs.
Meyer, Denny; Austin, David William; Kyrios, Michael
2011-01-01
Background The development of e-mental health interventions to treat or prevent mental illness and to enhance wellbeing has risen rapidly over the past decade. This development assists the public in sidestepping some of the obstacles that are often encountered when trying to access traditional face-to-face mental health care services. Objective The objective of our study was to investigate the posttreatment effectiveness of five fully automated self-help cognitive behavior e-therapy programs for generalized anxiety disorder (GAD), panic disorder with or without agoraphobia (PD/A), obsessive–compulsive disorder (OCD), posttraumatic stress disorder (PTSD), and social anxiety disorder (SAD) offered to the international public via Anxiety Online, an open-access full-service virtual psychology clinic for anxiety disorders. Methods We used a naturalistic participant choice, quasi-experimental design to evaluate each of the five Anxiety Online fully automated self-help e-therapy programs. Participants were required to have at least subclinical levels of one of the anxiety disorders to be offered the associated disorder-specific fully automated self-help e-therapy program. These programs are offered free of charge via Anxiety Online. Results A total of 225 people self-selected one of the five e-therapy programs (GAD, n = 88; SAD, n = 50; PD/A, n = 40; PTSD, n = 30; OCD, n = 17) and completed their 12-week posttreatment assessment. Significant improvements were found on 21/25 measures across the five fully automated self-help programs. At postassessment we observed significant reductions on all five anxiety disorder clinical disorder severity ratings (Cohen d range 0.72–1.22), increased confidence in managing one’s own mental health care (Cohen d range 0.70–1.17), and decreases in the total number of clinical diagnoses (except for the PD/A program, where a positive trend was found) (Cohen d range 0.45–1.08). In addition, we found significant improvements in quality of life for the GAD, OCD, PTSD, and SAD e-therapy programs (Cohen d range 0.11–0.96) and significant reductions relating to general psychological distress levels for the GAD, PD/A, and PTSD e-therapy programs (Cohen d range 0.23–1.16). Overall, treatment satisfaction was good across all five e-therapy programs, and posttreatment assessment completers reported using their e-therapy program an average of 395.60 (SD 272.2) minutes over the 12-week treatment period. Conclusions Overall, all five fully automated self-help e-therapy programs appear to be delivering promising high-quality outcomes; however, the results require replication. Trial Registration Australian and New Zealand Clinical Trials Registry ACTRN121611000704998; http://www.anzctr.org.au/trial_view.aspx?ID=336143 (Archived by WebCite at http://www.webcitation.org/618r3wvOG) PMID:22057287
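The effect sizes above follow Cohen's d; for reference, a minimal two-sample version with a pooled standard deviation (the study's within-subject pre/post comparisons may use a different variant):

    import math

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        """Cohen's d for two independent groups, using the pooled standard deviation."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd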
Nexus: A modular workflow management system for quantum simulation codes
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.
2016-01-01
The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
Evolution paths for advanced automation
NASA Technical Reports Server (NTRS)
Healey, Kathleen J.
1990-01-01
As Space Station Freedom (SSF) evolves, increased automation and autonomy will be required to meet Space Station Freedom Program (SSFP) objectives. As a precursor to the use of advanced automation within the SSFP, especially if it is to be used on SSF (e.g., to automate the operation of the flight systems), the underlying technologies will need to be elevated to a high level of readiness to ensure safe and effective operations. Ground facilities supporting the development of these flight systems -- from research and development laboratories through formal hardware and software development environments -- will be responsible for achieving these levels of technology readiness. These facilities will need to evolve to support the general evolution of the SSFP. This evolution will include support for increasing the use of advanced automation. The SSF Advanced Development Program has funded a study to define evolution paths for advanced automation within the SSFP's ground-based facilities which will enable, promote, and accelerate the appropriate use of advanced automation on-board SSF. The current capability of the test beds and facilities, such as the Software Support Environment, with regard to advanced automation has been assessed, and their desired evolutionary capabilities have been defined. Plans and guidelines for achieving this necessary capability have been constructed. The approach taken has combined in-depth interviews of test bed personnel at all SSF Work Package centers with awareness of relevant state-of-the-art technology and technology insertion methodologies. Key recommendations from the study include advocating a NASA-wide task force for advanced automation, and the creation of software prototype transition environments to facilitate the incorporation of advanced automation in the SSFP.
NMR-based automated protein structure determination.
Würz, Julia M; Kazemi, Sina; Schmidt, Elena; Bagaria, Anurag; Güntert, Peter
2017-08-15
NMR spectra analysis for protein structure determination can now in many cases be performed by automated computational methods. This overview of the computational methods for NMR protein structure analysis presents recent automated methods for signal identification in multidimensional NMR spectra, sequence-specific resonance assignment, collection of conformational restraints, and structure calculation, as implemented in the CYANA software package. These algorithms are sufficiently reliable and integrated into one software package to enable the fully automated structure determination of proteins starting from NMR spectra without manual interventions or corrections at intermediate steps, with an accuracy of 1-2 Å backbone RMSD in comparison with manually solved reference structures. Copyright © 2017 Elsevier Inc. All rights reserved.
Automated enforcement and highway safety : final report.
DOT National Transportation Integrated Search
2013-11-01
The objectives of the Automated Enforcement and Highway Safety Research study were to conduct a : literature review of national research related to the effectiveness of Red Light Camera (RLC) programs : in changing crash frequency, crash severity, cr...
AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART III: PROGRAM DOCUMENTATION
The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generations, plot generations and data reduction for chlorophyll or color analysis. This system also has the capability to proces...
NASA Technical Reports Server (NTRS)
Abbott, Kathy H.; Schutte, Paul C.
1989-01-01
A development status evaluation is presented for the NASA-Langley Intelligent Cockpit Aids research program, which encompasses AI, human/machine interfaces, and conventional automation. Attention is being given to decision-aiding concepts for human-centered automation, with emphasis on inflight subsystem fault management, inflight mission replanning, and communications management. The cockpit envisioned is for advanced commercial transport aircraft.
NASA Astrophysics Data System (ADS)
Görres, Carolyn-Monika; Kammann, Claudia; Ceulemans, Reinhart
2015-04-01
Soil respiration fluxes are influenced by natural factors such as climate and soil type, but also by anthropogenic activities in managed ecosystems. As a result, soil CO2 fluxes show a large intra- and interannual as well as intra- and intersite variability. Most of the available soil CO2 flux data giving insights into this variability have been measured with manually closed static chambers, but technological advances in the past 15 years have also led to an increased use of automated closed chamber systems. The great advantage of automated chambers in comparison to manually operated chambers is the higher temporal resolution of the flux data. This is especially important if we want to better understand the effects of short-term events, e.g. fertilization or heavy rainfall, on soil CO2 flux variability. However, the chamber method is an invasive measurement method which can potentially alter soil CO2 fluxes and lead to biased measurement results. In the peer-reviewed literature, many papers compare the field performance and results of different closed static chamber designs, or compare manual chambers with automated chamber systems, to identify potential biases in CO2 flux measurements, and thus help to reduce uncertainties in the flux data. However, inter-comparisons of different automated closed dynamic chamber systems are still lacking. Here we are going to present a field comparison of the most-cited automated chamber system, the LI-8100A Automated Soil Flux System, with the also commercially available Greenhouse Gas Monitoring System AGPS. Both measurement systems were installed side by side at a recently harvested poplar bioenergy plantation (POPFULL, http://uahost.uantwerpen.be/popfull/) from April 2014 until August 2014. The plantation provided optimal comparison conditions with a bare field situation after the harvest and a regrowing canopy resulting in a broad variety of microclimates. Furthermore, the plantation was planted in a double-row system with the row width alternating between 1.50 m and 0.75 m, creating spatial differences in e.g. dry bulk density and soil organic carbon content. The soil CO2 flux data sets were split into four subsets each characterized by different environmental conditions, thus presenting different challenges for the measurement equipment, namely 1) daytime, calm conditions, 2) daytime, windy conditions, 3) nighttime, calm conditions, and 4) nighttime, windy conditions. In parallel to the chamber measurements, soil CO2 concentrations were manually measured in the topsoil. Soil CO2 fluxes calculated from this dataset were used as a reference range of soil CO2 fluxes at the field site. Funding support: ERC Advanced Grant agreement (# 233366) POPFULL under the EC 7th Framework Program (FP7/2007-2013), Flemish Hercules Foundation as Infrastructure contract # ZW09-06, and the Methusalem Program of the Flemish Government.
Automated symbolic calculations in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Kröger, Martin; Hütter, Markus
2010-12-01
We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration, at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica™ notebook which allows this task to be performed conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics.
Program summary
Program title: Poissonbracket.nb
Catalogue identifier: AEGW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 227 952
No. of bytes in distributed program, including test data, etc.: 268 918
Distribution format: tar.gz
Programming language: Mathematica™ 7.0
Computer: Any computer running Mathematica™ 6.0 and later versions
Operating system: Linux, MacOS, Windows
RAM: 100 MB
Classification: 4.2, 5, 23
Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica™ notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form.
Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica™.
Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
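The notebook operates on continuous fields, but the underlying check is easy to illustrate in the finite-dimensional case. A Python/SymPy sketch verifying the Jacobi identity {{F,G},H} + {{G,H},F} + {{H,F},G} = 0 for the rigid-body bracket on a few test functions; this is a spot check, not the notebook's general variational machinery:

    import sympy as sp

    x, y, z = sp.symbols("x y z")
    L = sp.Matrix([x, y, z])

    def grad(f):
        return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

    def bracket(f, g):
        # Rigid-body Poisson bracket on so(3)*: {f, g} = -L . (grad f x grad g)
        return sp.simplify(-L.dot(grad(f).cross(grad(g))))

    F, G, H = x**2 + y, x * sp.sin(z), y * z
    jacobi = (bracket(bracket(F, G), H)
              + bracket(bracket(G, H), F)
              + bracket(bracket(H, F), G))
    print(sp.simplify(jacobi))  # 0: the Jacobi identity holds for these functions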
ESTIMATION OF PHOSPHATE ESTER HYDROLYSIS RATE CONSTANTS - ALKALINE HYDROLYSIS
SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to allow the calculation of alkaline hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition state...
Noelting, J; Bharucha, A E; Lake, D S; Manduca, A; Fletcher, J G; Riederer, S J; Joseph Melton, L; Zinsmeister, A R
2012-10-01
Inter-observer variability limits the reproducibility of pelvic floor motion measured by magnetic resonance imaging (MRI). Our aim was to develop a semi-automated program measuring pelvic floor motion in a reproducible and refined manner. Pelvic floor anatomy and motion during voluntary contraction (squeeze) and rectal evacuation were assessed by MRI in 64 women with fecal incontinence (FI) and 64 age-matched controls. A radiologist measured anorectal angles and anorectal junction motion. A semi-automated program did the same and also dissected anorectal motion into perpendicular vectors representing the puborectalis and other pelvic floor muscles, assessed the pubococcygeal angle, and evaluated pelvic rotation. Manual and semi-automated measurements of anorectal junction motion (r = 0.70; P < 0.0001) during squeeze and evacuation were correlated, as were anorectal angles at rest, squeeze, and evacuation; correlations for the angle change during squeeze or evacuation were weaker. Semi-automated measurements of anorectal and pelvic bony motion were also reproducible within subjects. During squeeze, puborectalis injury was associated (P ≤ 0.01) with smaller puborectalis but not pelvic floor motion vectors, reflecting impaired puborectalis function. The pubococcygeal angle, reflecting posterior pelvic floor motion, was smaller during squeeze and larger during evacuation. However, pubococcygeal angles and pelvic rotation during squeeze and evacuation did not differ significantly between FI and controls. This semi-automated program provides a reproducible, efficient, and refined analysis of pelvic floor motion by MRI. Puborectalis injury is independently associated with impaired motion of the puborectalis, not other pelvic floor muscles, in controls and women with FI. © 2012 Blackwell Publishing Ltd.
McKenzie, Kirsten; Walker, Sue; Tong, Shilu
It remains unclear whether the change from a manual to an automated coding system (ACS) for deaths has significantly affected the consistency of Australian mortality data. The underlying causes of 34,000 deaths registered in 1997 in Australia were dual coded, in ICD-9 manually, and by using an automated computer coding program. The diseases most affected by the change from manual to ACS were senile/presenile dementia, and pneumonia. The most common disease to which a manually assigned underlying cause of senile dementia was coded with ACS was unspecified psychoses (37.2%). Only 12.5% of codes assigned by ACS as senile dementia were coded the same by manual coders. This study indicates some important differences in mortality rates when comparing mortality data that have been coded manually with those coded using an automated computer coding program. These differences may be related to both the different interpretation of ICD coding rules between manual and automated coding, and different co-morbidities or co-existing conditions among demographic groups.
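Comparability studies of this kind often supplement raw percent agreement with chance-corrected agreement; a minimal Cohen's kappa sketch over paired code assignments (illustrative, not necessarily the statistic the authors used):

    from collections import Counter

    def cohens_kappa(manual_codes, acs_codes):
        """Chance-corrected agreement between two coders over paired labels."""
        n = len(manual_codes)
        p_observed = sum(m == a for m, a in zip(manual_codes, acs_codes)) / n
        m_freq, a_freq = Counter(manual_codes), Counter(acs_codes)
        p_chance = sum(m_freq[c] * a_freq[c] for c in m_freq) / (n * n)
        return (p_observed - p_chance) / (1 - p_chance)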
Operating and service manual for the NASA Lewis automated far-field antenna range
NASA Technical Reports Server (NTRS)
Terry, John D.
1992-01-01
The NASA Lewis far-field antenna range was recently upgraded and automated to meet the growing and demanding needs of the satellite communications program. This manual offers assistance in the operation and servicing of the range. The procedures for configuring the test hardware and for operating the Far-Field Antenna Measurement Program (FAMP) are given. Included are the steps for getting started and for installing the proper microwave equipment.
Automation of the aircraft design process
NASA Technical Reports Server (NTRS)
Heldenfels, R. R.
1974-01-01
The increasing use of the computer to automate the aerospace product development and engineering process is examined with emphasis on structural analysis and design. Examples of systems of computer programs in aerospace and other industries are reviewed and related to the characteristics of aircraft design in its conceptual, preliminary, and detailed phases. Problems with current procedures are identified, and potential improvements from optimum utilization of integrated disciplinary computer programs by a man/computer team are indicated.
NASA Technical Reports Server (NTRS)
Myers, Dale
1987-01-01
An introduction is given to NASA goals in the development of automation (expert systems) and robotics technologies in the Space Station program. Artificial intelligence (AI) has been identified as a means to lowering ground support costs. Telerobotics will enhance space assembly, servicing and repair capabilities, and will be used for an estimated half of the necessary EVA tasks. The general principles guiding NASA in the design, development, ground-testing, interactions with industry and construction of the Space Station component systems are summarized. The telerobotics program has progressed to a point where a telerobot servicer is a firm component of the first Space Station element launch, to support assembly, maintenance and servicing of the Station. The University of Wisconsin has been selected for the establishment of a Center for the Commercial Development of Space, specializing in space automation and robotics.
Left ventricular pressure and volume data acquisition and analysis using LabVIEW.
Cassidy, S C; Teitel, D F
1997-03-01
To automate analysis of left ventricular pressure-volume data, we used LabVIEW to create applications that digitize and display data recorded from conductance and manometric catheters. The applications separate data into cardiac cycles, calculate parallel conductance, and calculate indices of left ventricular function, including end-systolic elastance, preload-recruitable stroke work, stroke volume, ejection fraction, stroke work, maximum and minimum derivative of ventricular pressure, heart rate, indices of relaxation, peak filling rate, and ventricular chamber stiffness. Pressure-volume loops can be graphically displayed. The analyses are exported to a text file. These applications have simplified and automated the process of evaluating ventricular function.
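The LabVIEW applications are graphical rather than textual, but the derivative indices they report are straightforward to compute from a sampled pressure trace; a Python sketch with the sampling rate and units as assumptions:

    import numpy as np

    def dpdt_extrema(pressure_mmhg, fs_hz):
        """Maximum and minimum first derivative of ventricular pressure, in mmHg/s."""
        dpdt = np.gradient(np.asarray(pressure_mmhg, dtype=float)) * fs_hz
        return float(dpdt.max()), float(dpdt.min())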
Human factors phase III : effects of train control technology on operator performance
DOT National Transportation Integrated Search
2005-01-01
This report describes a study evaluating the effects of train control technology on locomotive engineer performance. Several types : of train control systems were evaluated: partial automation (cruise control and programmed stop) and full automation ...
Requirements for Flight Testing Automated Terminal Service
DOT National Transportation Integrated Search
1977-05-01
This report describes requirements for the flight tests of the baseline Automated Terminals Service (ATS) system. The overall objective of the flight test program is to evaluate the feasibility of the ATS concept. Within this objective there are two ...
Extended System Operations Studies for Automated Guideway Transit Systems
DOT National Transportation Integrated Search
1982-02-01
The objectives of the System Operations Studies (SOS) of the Automated Guideway Transit Technology (AGTT) program were to develop models for the analysis of system operations, to evaluate AGT system performance and cost, and to establish guidelines fo...
Automated video feature extraction : workshop summary report October 10-11 2012.
DOT National Transportation Integrated Search
2012-12-01
This report summarizes a 2-day workshop on automated video feature extraction. Discussion focused on the Naturalistic Driving : Study, funded by the second Strategic Highway Research Program, and also involved the companion roadway inventory dataset....
Computer automation for feedback system design
NASA Technical Reports Server (NTRS)
1975-01-01
Mathematical techniques and explanations of the various steps used by an automated computer program to design feedback systems are summarized. Special attention was given to refining the automatic evaluation of suboptimal loop transmission and the translation of time-domain to frequency-domain specifications.
Automated Power Assessment for Helicopter Turboshaft Engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2008-01-01
An accurate indication of available power is required for helicopter mission planning purposes. Available power is currently estimated on U.S. Army Blackhawk helicopters by performing a Maximum Power Check (MPC), a manual procedure performed by maintenance pilots on a periodic basis. The MPC establishes Engine Torque Factor (ETF), an indication of available power. It is desirable to replace the current manual MPC procedure with an automated approach that will enable continuous real-time assessment of available power utilizing normal mission data. This report presents an automated power assessment approach which processes data currently collected within helicopter Health and Usage Monitoring System (HUMS) units. The overall approach consists of: 1) a steady-state data filter which identifies and extracts steady-state operating points within HUMS data sets; 2) engine performance curve trend monitoring and updating; and 3) automated ETF calculation. The algorithm is coded in MATLAB (The MathWorks, Inc.) and currently runs on a PC. Results from the application of this technique to HUMS mission data collected from UH-60L aircraft equipped with T700-GE-701C engines are presented and compared to manually calculated ETF values. Potential future enhancements are discussed.
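The first stage, the steady-state data filter, can be approximated by flagging samples whose surrounding window varies less than a tolerance; a Python sketch in which the window length and tolerance are illustrative parameters, not the report's values:

    import numpy as np

    def steady_state_mask(signal, fs_hz, window_s=5.0, frac_tol=0.01):
        """Flag samples whose centered-window std stays below frac_tol of the signal range."""
        x = np.asarray(signal, dtype=float)
        half = max(int(window_s * fs_hz) // 2, 1)
        tol = frac_tol * (x.max() - x.min())
        mask = np.zeros(x.size, dtype=bool)
        for i in range(half, x.size - half):
            mask[i] = x[i - half:i + half + 1].std() <= tol
        return mask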
The automated system for prevention of industrial-caused diseases
NASA Astrophysics Data System (ADS)
Varnavsky, A. N.
2017-01-01
The paper presents an automated system intended to prevent industrial-caused diseases of workers, based on algorithms for preventing several negative functional states (stress, monotony). The onset of such a state is detected from an analysis of bioelectric signals, in particular galvanic skin responses. Based on the dynamics of the functional state, the automated system proposes an optimized set of measures to restore the worker's health. The automated system is implemented in the LabVIEW visual programming environment.
Automated CPX support system preliminary design phase
NASA Technical Reports Server (NTRS)
Bordeaux, T. A.; Carson, E. T.; Hepburn, C. D.; Shinnick, F. M.
1984-01-01
The development of the Distributed Command and Control System (DCCS) is discussed. The development of an automated C2 system stimulated the development of an automated command post exercise (CPX) support system to provide a more realistic stimulus to DCCS than could be achieved with the existing manual system. An automated CPX system to support corps-level exercises was designed. The effort comprised four tasks: (1) collecting and documenting user requirements; (2) developing a preliminary system design; (3) defining a program plan; and (4) evaluating the suitability of the TRASANA FOURCE computer model.
Automated analysis of cell migration and nuclear envelope rupture in confined environments.
Elacqua, Joshua J; McGregor, Alexandra L; Lammerding, Jan
2018-01-01
Recent in vitro and in vivo studies have highlighted the importance of the cell nucleus in governing migration through confined environments. Microfluidic devices that mimic the narrow interstitial spaces of tissues have emerged as important tools to study cellular dynamics during confined migration, including the consequences of nuclear deformation and nuclear envelope rupture. However, while image acquisition can be automated on motorized microscopes, the analysis of the corresponding time-lapse sequences for nuclear transit through the pores and events such as nuclear envelope rupture currently requires manual analysis. In addition to being highly time-consuming, such manual analysis is susceptible to person-to-person variability. Studies that compare large numbers of cell types and conditions therefore require automated image analysis to achieve sufficiently high throughput. Here, we present an automated image analysis program to register microfluidic constrictions and perform image segmentation to detect individual cell nuclei. The MATLAB program tracks nuclear migration over time and records constriction-transit events, transit times, transit success rates, and nuclear envelope rupture. Such automation reduces the time required to analyze migration experiments from weeks to hours, and removes the variability that arises from different human analysts. Comparison with manual analysis confirmed that both constriction transit and nuclear envelope rupture were detected correctly and reliably, and the automated analysis results closely matched a manual analysis gold standard. Applying the program to specific biological examples, we demonstrate its ability to detect differences in nuclear transit time between cells with different levels of the nuclear envelope proteins lamin A/C, which govern nuclear deformability, and to detect an increase in nuclear envelope rupture duration in cells in which CHMP7, a protein involved in nuclear envelope repair, had been depleted. The program thus presents a versatile tool for the study of confined migration and its effect on the cell nucleus.
Viscous wing theory development. Volume 1: Analysis, method and results
NASA Technical Reports Server (NTRS)
Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.
1986-01-01
Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.
Lee, Woonghee; Stark, Jaime L; Markley, John L
2014-11-01
Peak-picking Of Noe Data Enabled by Restriction Of Shift Assignments-Client Server (PONDEROSA-C/S) builds on the original PONDEROSA software (Lee et al. in Bioinformatics 27:1727-1728. doi: 10.1093/bioinformatics/btr200, 2011) and includes improved features for structure calculation and refinement. PONDEROSA-C/S consists of three programs: Ponderosa Server, Ponderosa Client, and Ponderosa Analyzer. PONDEROSA-C/S takes as input the protein sequence, a list of assigned chemical shifts, and nuclear Overhauser data sets ((13)C- and/or (15)N-NOESY). The output is a set of assigned NOEs and 3D structural models for the protein. Ponderosa Analyzer supports the visualization, validation, and refinement of the results from Ponderosa Server. These tools enable semi-automated NMR-based structure determination of proteins in a rapid and robust fashion. We present examples showing the use of PONDEROSA-C/S in solving structures of four proteins: two that enable comparison with the original PONDEROSA package, and two from the Critical Assessment of automated Structure Determination by NMR (Rosato et al. in Nat Methods 6:625-626. doi: 10.1038/nmeth0909-625, 2009) competition. The software package can be downloaded freely in binary format from http://pine.nmrfam.wisc.edu/download_packages.html. Registered users of the National Magnetic Resonance Facility at Madison can submit jobs to the PONDEROSA-C/S server at http://ponderosa.nmrfam.wisc.edu, where instructions and tutorials can be found. Structures are normally returned within 1-2 days.
NASA Technical Reports Server (NTRS)
Schmidt, Lorne R.; Francoeur, J.; Aguero, Alina; Wertheimer, Michael R.; Klemberg-Sapieha, J. E.; Martinu, L.; Blezius, J. W.; Oliver, M.; Singh, A.
1995-01-01
Three projects are currently underway for the development of new coatings for the protection of materials in the space environment. These coatings are based on vacuum deposition technologies. The projects will go as far as the proof-of-concept stage when the commercial potential for the technology will be demonstrated on pilot-scale fabrication facilities in 1996. These projects are part of a subprogram to develop supporting technologies for automation and robotics technologies being developed under the Canadian Space Agency's STEAR Program, part of the Canadian Space Station Program.
TRAP: automated classification, quantification and annotation of tandemly repeated sequences.
Sobreira, Tiago José P; Durham, Alan M; Gruber, Arthur
2006-02-01
TRAP, the Tandem Repeats Analysis Program, is a Perl program that provides a unified set of analyses for the selection, classification, quantification and automated annotation of tandemly repeated sequences. TRAP uses the results of the Tandem Repeats Finder program to perform a global analysis of the satellite content of DNA sequences, permitting researchers to easily assess the tandem repeat content for both individual sequences and whole genomes. The results can be generated in convenient formats such as HTML and comma-separated values. TRAP can also be used to automatically generate annotation data in the format of feature table and GFF files.
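TRAP builds on the output of Tandem Repeats Finder; a minimal Python sketch of reading TRF-style .dat records into dictionaries. The 15-field record layout is assumed from TRF's documented .dat format and should be checked against the TRF version in use:

    def parse_trf_dat(path):
        """Collect tandem-repeat records from a Tandem Repeats Finder .dat file."""
        repeats, seq_id = [], None
        with open(path) as handle:
            for line in handle:
                if line.startswith("Sequence:"):
                    seq_id = line.split(None, 1)[1].strip()
                    continue
                fields = line.split()
                if len(fields) == 15 and fields[0].isdigit():
                    repeats.append({
                        "sequence": seq_id,
                        "start": int(fields[0]),
                        "end": int(fields[1]),
                        "period": int(fields[2]),
                        "copies": float(fields[3]),
                        "score": int(fields[7]),
                        "consensus": fields[13],
                    })
        return repeats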
Can we replace curation with information extraction software?
Karp, Peter D
2016-01-01
Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.
Computers Launch Faster, Better Job Matching
ERIC Educational Resources Information Center
Stevenson, Gloria
1976-01-01
Employment Security Automation Project (ESAP), a five-year program sponsored by the Employment and Training Administration, features an innovative computer-assisted job matching system and instantaneous computer-assisted service for unemployment insurance claimants. ESAP will also consolidate existing automated employment security systems to…
Systems Operation Studies for Automated Guideway Transit Systems : Summary Report
DOT National Transportation Integrated Search
1980-02-01
In order to examine specific Automated Guideway Transit (AGT) developments and concepts and to build a better knowledge base for future decision-making, UMTA has undertaken a new program of studies and technology investigations called the Urban Mass ...
Multi-Dimensional Signal Processing Research Program
1981-09-30
applications to real-time image processing and analysis. A specific long-range application is the automated processing of aerial reconnaissance imagery... Non-supervised image segmentation is a potentially important operation in the automated processing of aerial reconnaissance photographs since it
Benefit Estimates of Terminal Area Productivity Program Technologies
NASA Technical Reports Server (NTRS)
Hemm, Robert; Shapiro, Gerald; Lee, David; Gribko, Joana; Glaser, Bonnie
1999-01-01
This report documents benefit analyses for the NASA Terminal Area Productivity (TAP) technology programs. Benefits are based on reductions in arrival delays at ten major airports over the 10 years from 2006 through 2015. Detailed analytic airport capacity and delay models were constructed to produce the estimates. The goal of TAP is to enable good-weather operations tempos in all weather conditions. The TAP program includes technologies to measure and predict runway occupancy times, reduce runway occupancy times in bad weather, accurately predict wake vortex hazards, and couple controller automation with aircraft flight management systems. The report presents and discusses the estimate results and describes the models. Three appendixes document the model algorithms and discuss the input parameters selected for the TAP technologies. The fourth appendix is the user's guide for the models. The results indicate that the combined benefits for all TAP technologies at all 10 airports range from $550 to $650 million per year (in constant 1997 dollars). Additional benefits will accrue from reductions in departure delays. Departure delay benefits are calculated by the current models.
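Although the report's capacity and delay models are far more detailed, the core relationship between capacity shortfalls and arrival delay can be illustrated with a deterministic queueing sketch; the hourly demand and capacity profiles are hypothetical inputs:

    def total_delay_aircraft_hours(demand_per_hr, capacity_per_hr):
        """Deterministic queueing estimate: aircraft queue whenever demand exceeds capacity."""
        queue = 0.0
        delay = 0.0
        for demand, capacity in zip(demand_per_hr, capacity_per_hr):
            queue = max(0.0, queue + demand - capacity)  # unserved aircraft carry over
            delay += queue  # each queued aircraft waits roughly this hour
        return delay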
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.
1987-01-01
The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.
NASA Technical Reports Server (NTRS)
Govindaraj, T.; Mitchell, C. M.
1994-01-01
One of the goals of the National Aviation Safety/Automation program is to address the issue of human-centered automation in the cockpit. Human-centered automation is automation that, in the cockpit, enhances or assists the crew rather than replacing them. The Georgia Tech research program focused on this general theme, with emphasis on designing a computer-based pilot's assistant, intelligent (i.e., context-sensitive) displays, and an intelligent tutoring system for understanding and operating the autoflight system. In particular, the aids and displays were designed to enhance the crew's situational awareness of the current state of the automated flight systems and to assist the crew in coordinating the autoflight system resources. The activities of this grant included: (1) an OFMspert to understand pilot navigation activities in a 727-class aircraft; (2) an extension of OFMspert to understand mode control in a glass cockpit, the Georgia Tech Crew Activity Tracking System (GT-CATS); (3) the design of a training system, the VNAV Tutor, to teach pilots about the vertical navigation portion of the flight management system; and (4) a proof-of-concept display, using existing display technology, to facilitate mode awareness, particularly in situations in which controlled flight into terrain (CFIT) is a potential hazard.
Nexus: a modular workflow management system for quantum simulation codes
Krogel, Jaron T.
2015-08-24
The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
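As a hypothetical illustration of this kind of workflow composition (the job names, command strings, and toy scheduler below are illustrative assumptions, not Nexus's actual API), a chain of dependent simulation jobs might be declared and dispatched like this:

```python
# A minimal dependency-ordered job runner, sketching (hypothetically) a
# DFT run whose orbitals are converted and then consumed by a QMC run.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    command: str
    depends_on: list = field(default_factory=list)

def run_in_order(jobs):
    """Run each job only after all of its dependencies have completed."""
    done, queue = set(), list(jobs)
    while queue:
        job = queue.pop(0)
        if any(dep not in done for dep in job.depends_on):
            queue.append(job)  # requeue until dependencies are satisfied
            continue
        print(f"running {job.name}: {job.command}")  # stand-in for job submission
        done.add(job.name)

scf  = Job("scf",  "pw.x -in scf.in")                     # DFT ground state
conv = Job("conv", "pw2qmcpack.x -in conv.in", ["scf"])   # orbital conversion
qmc  = Job("qmc",  "qmcpack vmc.xml", ["conv"])           # QMC production run
run_in_order([qmc, conv, scf])  # order of declaration does not matter
```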
SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.
Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B
2016-02-04
Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with limited bioinformatics experience to analyze their data and integrate next generation sequencing (NGS) technologies into the classroom. The SPARTA software and tutorial are available at sparta.readthedocs.org.
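A condensed sketch of that workflow shape follows; the stage names mirror the abstract, but the command lines are stand-ins, not the tools SPARTA actually bundles or their real options:

```python
# Chain the four analysis stages in order, stopping on any failure.
import subprocess

steps = [
    ("trim reads / remove adapters", "trimmer --in raw.fastq --out trimmed.fastq"),
    ("map to reference",             "mapper --ref genome.fa --reads trimmed.fastq --out aln.sam"),
    ("count gene features",          "counter --gff genes.gff --aln aln.sam --out counts.txt"),
    ("differential expression",      "de_test --counts counts.txt --design design.txt --out results.txt"),
]

DRY_RUN = True  # set to False once real tools are installed and configured
for label, cmd in steps:
    print(f"[{label}] {cmd}")
    if not DRY_RUN:
        subprocess.run(cmd.split(), check=True)  # raise if a stage fails
```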
ICECAP: an integrated, general-purpose, automation-assisted IC50/EC50 assay platform.
Li, Ming; Chou, Judy; King, Kristopher W; Jing, Jing; Wei, Dong; Yang, Liyu
2015-02-01
IC50 and EC50 values are commonly used to evaluate drug potency. Mass spectrometry (MS)-centric bioanalytical and biomarker labs are now conducting IC50/EC50 assays, which, if done manually, are tedious and error-prone. Existing bioanalytical sample preparation automation systems cannot meet IC50/EC50 assay throughput demand. A general-purpose, automation-assisted IC50/EC50 assay platform was developed to automate the calculations of spiking solutions and the matrix solutions preparation scheme, the actual spiking and matrix solutions preparations, as well as the flexible sample extraction procedures after incubation. In addition, the platform also automates the data extraction, nonlinear regression curve fitting, computation of IC50/EC50 values, graphing, and reporting. The automation-assisted IC50/EC50 assay platform can process a whole class of assays with varying assay conditions. In each run, the system can handle up to 32 compounds and up to 10 concentration levels per compound, and it greatly improves IC50/EC50 assay experimental productivity and data processing efficiency. © 2014 Society for Laboratory Automation and Screening.
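The curve-fitting step at the heart of such a platform is conventionally a four-parameter logistic fit; a minimal sketch follows, with illustrative data and names (this is the standard model, not the platform's actual code):

```python
# Fit a four-parameter logistic (Hill) model and read off the IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data (concentration in nM, % activity remaining).
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000])
resp = np.array([98, 95, 90, 78, 60, 38, 22, 10, 5, 3])

# Initial guesses: plateaus from the data, IC50 near the midpoint.
p0 = [resp.min(), resp.max(), np.median(conc), 1.0]
params, _ = curve_fit(four_pl, conc, resp, p0=p0)
print(f"IC50 ~ {params[2]:.1f} nM, Hill slope ~ {params[3]:.2f}")
```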
Robotics Programs: Automation Training in Disguise.
ERIC Educational Resources Information Center
Rehg, James A.
1985-01-01
Questions and answers from the book "Guidelines for Robotics Program Development" are presented, addressing some of the major issues confronted by the person setting the direction for a robotics training program. (CT)
SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Rutel, I; Yang, K
2016-06-15
Purpose: With RadShield, a semi-automated diagnostic shielding software package, computed tomography (CT) exam rooms are shielded more quickly and accurately than with manual calculations. Last year, we presented RadShield's approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor-provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions, and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield's isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power-law equation of the form K(r) = ar^b. RadShield's semi-automated shielding recommendations were compared against a board-certified medical physicist's design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist's manual calculation and RadShield's semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist's selection of calculation points for recommending lead thickness was roughly the same as that found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse-square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
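The radial-scan fit described above reduces to linear least squares after a log transform; here is a minimal sketch with illustrative numbers (the distances, kerma values, and units are assumptions, not data from the paper):

```python
# Fit K(r) = a * r**b to kerma samples K taken along a radial scan at
# distances r, by ordinary least squares in log-log space.
import numpy as np

r = np.array([50, 100, 150, 200, 300, 400])                     # cm (hypothetical)
K = np.array([4.0e-2, 1.1e-2, 5.2e-3, 3.0e-3, 1.4e-3, 8.1e-4])  # mGy per scan

# ln K = ln a + b ln r  ->  slope b and intercept ln a from a degree-1 fit.
b, ln_a = np.polyfit(np.log(r), np.log(K), 1)
a = np.exp(ln_a)
print(f"K(r) ~ {a:.3g} * r**{b:.2f}")
# Kerma at an occupied point 250 cm away:
print(f"K(250 cm) ~ {a * 250**b:.3g} mGy per scan")
```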
Automated Installation Verification of COMSOL via LiveLink for MATLAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowell, Michael W
Verifying that a local software installation performs as the developer intends is a potentially time-consuming but necessary step for nuclear safety-related codes. Automating this process not only saves time, but can increase reliability and scope of verification compared to ‘hand’ comparisons. While COMSOL does not include automatic installation verification as many commercial codes do, it does provide tools such as LiveLink™ for MATLAB® and the COMSOL API for use with Java® through which the user can automate the process. Here we present a successful automated verification example of a local COMSOL 5.0 installation for nuclear safety-related calculations at the Oak Ridge National Laboratory’s High Flux Isotope Reactor (HFIR).
NASA Technical Reports Server (NTRS)
1979-01-01
The performance, design, and verification requirements for the Space Construction Automated Fabrication Experiment (SCAFE) are defined. The SCAFE program defines, develops, and demonstrates the techniques, processes, and equipment required for the automatic fabrication of structural elements in space and for the assembly of such elements into a large, lightweight structure. The program defines a large structural platform to be constructed in orbit using the space shuttle as a launch vehicle and construction base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Automated Residential Energy Standard (ARES) program is designed to identify levels of thermal integrity (e.g., insulation levels, glazing layers, equipment efficiencies, etc.) that are cost effective for typical residential structures and to create a residential energy standard based on these levels. This document contains technical background that explains the data and the algorithms used by the program.
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
Microchannel coolers offer a new approach to cooling high-power diode lasers, with the advantages of small volume, high thermal-dissipation efficiency, and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was solved with the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The optimal design is presented, which compromises between optimal cooling performance and its robustness. This design method proves to be effective.
NASA Technical Reports Server (NTRS)
Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam;
2009-01-01
The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
Automated subsystems control development. [for life support systems of space station
NASA Technical Reports Server (NTRS)
Block, R. F.; Heppner, D. B.; Samonski, F. H., Jr.; Lance, N., Jr.
1985-01-01
NASA has the objective to launch a Space Station in the 1990s. It has been found that the success of the Space Station engineering development, the achievement of initial operational capability (IOC), and the operation of a productive Space Station will depend heavily on the implementation of an effective automation and control approach. For the development of technology needed to implement the required automation and control function, a contract entitled 'Automated Subsystems Control for Life Support Systems' (ASCLSS) was awarded to two American companies. The present paper provides a description of the ASCLSS program. Attention is given to an automation and control architecture study, a generic automation and control approach for hardware demonstration, a standard software approach, application of Air Revitalization Group (ARG) process simulators, and a generic man-machine interface.
Explosive Transient Camera (ETC) Program
NASA Technical Reports Server (NTRS)
Ricker, George
1991-01-01
Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.
Computer-Aided Software Engineering - An approach to real-time software development
NASA Technical Reports Server (NTRS)
Walker, Carrie K.; Turkovich, John J.
1989-01-01
A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and gives a history of CASE. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.
Computer aided fixture design - A case based approach
NASA Astrophysics Data System (ADS)
Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom
2017-11-01
Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once the fixturing surfaces and points are described, modular fixture components are automatically selected to generate fixture units and are placed into position such that the assembly conditions are satisfied. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a developed retrieval system is proposed. The Visual Basic (VB) programming language is integrated with the SolidWorks API (Application Programming Interface) module to improve the retrieval procedure and reduce computational time. These properties are incorporated in numerical simulation to determine the best fit for practical use.
NASA Astrophysics Data System (ADS)
Rivers, Thane D.
1992-06-01
An Automated Scanning Monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube, and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW Virtual Instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. The resolution and sensitivity of the Automated Scanning Monochromator System were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve for a BaSO4-coated screen was measured. The reflectivity measurements indicate a large discrepancy with expected results. Further analysis of the reflectivity experiment is required for conclusive results.
Segmentation of malignant lesions in 3D breast ultrasound using a depth-dependent model.
Tan, Tao; Gubern-Mérida, Albert; Borelli, Cristina; Manniesing, Rashindra; van Zelst, Jan; Wang, Lei; Zhang, Wei; Platel, Bram; Mann, Ritse M; Karssemeijer, Nico
2016-07-01
Automated 3D breast ultrasound (ABUS) has been proposed as a complementary screening modality to mammography for early detection of breast cancers. To facilitate the interpretation of ABUS images, automated diagnosis and detection techniques are being developed, in which malignant lesion segmentation plays an important role. However, automated segmentation of cancer in ABUS is challenging since lesion edges might not be well defined. In this study, the authors aim at developing an automated segmentation method for malignant lesions in ABUS that is robust to ill-defined cancer edges and posterior shadowing. A segmentation method using depth-guided dynamic programming based on spiral scanning is proposed. The method automatically adjusts aggressiveness of the segmentation according to the position of the voxels relative to the lesion center. Segmentation is more aggressive in the upper part of the lesion (close to the transducer) than at the bottom (far away from the transducer), where posterior shadowing is usually visible. The authors used Dice similarity coefficient (Dice) for evaluation. The proposed method is compared to existing state of the art approaches such as graph cut, level set, and smart opening and an existing dynamic programming method without depth dependence. In a dataset of 78 cancers, our proposed segmentation method achieved a mean Dice of 0.73 ± 0.14. The method outperforms an existing dynamic programming method (0.70 ± 0.16) on this task (p = 0.03) and it is also significantly (p < 0.001) better than graph cut (0.66 ± 0.18), level set based approach (0.63 ± 0.20) and smart opening (0.65 ± 0.12). The proposed depth-guided dynamic programming method achieves accurate breast malignant lesion segmentation results in automated breast ultrasound.
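For reference, the evaluation metric used throughout this comparison is straightforward to compute; here is a minimal sketch on toy 3D masks (illustrative arrays, not ABUS data):

```python
# Dice similarity coefficient between two binary 3D masks.
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

seg = np.zeros((4, 4, 4), dtype=bool); seg[1:3, 1:3, 1:3] = True  # 8 voxels
ref = np.zeros((4, 4, 4), dtype=bool); ref[1:4, 1:3, 1:3] = True  # 12 voxels
print(f"Dice = {dice(seg, ref):.3f}")  # overlap of 8 -> 2*8/20 = 0.8
```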
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations and further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials and the question remains if they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented, which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
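For orientation, the multimode (n-mode) expansion referred to above is conventionally written as follows, truncated here after three-mode couplings (this is the standard textbook form, not an equation reproduced from the paper):

```latex
V(q_1,\dots,q_M) \;\approx\; \sum_{i} V^{(1)}_i(q_i)
  \;+\; \sum_{i<j} V^{(2)}_{ij}(q_i,q_j)
  \;+\; \sum_{i<j<k} V^{(3)}_{ijk}(q_i,q_j,q_k)
```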
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
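As an illustration of the kind of vectorized simplification such algorithms rely on, the lagged structure functions central to SR analysis can be computed without per-sample Python loops; this sketch assumes 10 Hz scalar data and illustrative lags (a generic approach, not the authors' code):

```python
# Compute lagged structure functions S_n(j) = mean((a[i] - a[i-j])**n)
# for several lags j and orders n with vectorized NumPy differencing.
import numpy as np

def structure_functions(a: np.ndarray, lags, orders=(2, 3, 5)):
    """Return S_n(j) for each lag j (in samples) and each order n."""
    out = {}
    for j in lags:
        d = a[j:] - a[:-j]  # all lagged differences at once
        out[j] = {n: np.mean(d ** n) for n in orders}
    return out

rng = np.random.default_rng(0)
temp = np.cumsum(rng.standard_normal(36000))  # stand-in for 1 h of 10 Hz scalar data
S = structure_functions(temp, lags=[1, 5, 10, 50])
print(S[10][3])  # third-order structure function at a 1 s lag (10 Hz data)
```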
Using Meta Analysis Techniques to Assess the Safety Effect of Red Light Running Cameras
DOT National Transportation Integrated Search
2002-02-01
Automated enforcement programs, including automated systems that are used to enforce red light running violations, have recently come under scrutiny regarding their value in terms of improving safety, their primary purpose. One of the major hurdles t...
DOT National Transportation Integrated Search
1982-06-01
In order to examine specific Automated Guideway Transit (AGT) developments and concepts, and to build a better knowledge base for future decision-making, the Urban Mass Transportation Administration (UMTA) undertook a new program of studies and techn...
Automated Sequence Generation Process and Software
NASA Technical Reports Server (NTRS)
Gladden, Roy
2007-01-01
"Automated sequence generation" (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences.
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
9 CFR 318.307 - Record review and maintenance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...
49 CFR Appendix B to Part 222 - Alternative Safety Measures
Code of Federal Regulations, 2014 CFR
2014-10-01
... statistically valid baseline violation rate must be established through automated or systematic manual... enforcement, a program of public education and awareness directed at motor vehicle drivers, pedestrians and..., a statistically valid baseline violation rate must be established through automated or systematic...
49 CFR Appendix B to Part 222 - Alternative Safety Measures
Code of Federal Regulations, 2013 CFR
2013-10-01
... statistically valid baseline violation rate must be established through automated or systematic manual... enforcement, a program of public education and awareness directed at motor vehicle drivers, pedestrians and..., a statistically valid baseline violation rate must be established through automated or systematic...
Computing and Office Automation: Changing Variables.
ERIC Educational Resources Information Center
Staman, E. Michael
1981-01-01
Trends in computing and office automation and their applications, including planning, institutional research, and general administrative support in higher education, are discussed. Changing aspects of information processing and an increasingly larger user community are considered. The computing literacy cycle may involve programming, analysis, use…
Jing, Li; Amster, I Jonathan
2009-10-15
Offline high performance liquid chromatography combined with matrix-assisted laser desorption and Fourier transform ion cyclotron resonance mass spectrometry (HPLC-MALDI-FTICR/MS) provides the means to rapidly analyze complex mixtures of peptides, such as those produced by proteolytic digestion of a proteome. This method is particularly useful for making quantitative measurements of changes in protein expression by using (15)N-metabolic labeling. Proteolytic digestion of combined labeled and unlabeled proteomes produces complex mixtures with many mass overlaps when analyzed by HPLC-MALDI-FTICR/MS. A significant challenge to data analysis is the matching of pairs of peaks which represent an unlabeled peptide and its labeled counterpart. We have developed an algorithm and incorporated it into a computer program which significantly accelerates the interpretation of (15)N metabolic labeling data by automating the process of identifying unlabeled/labeled peak pairs. The algorithm takes advantage of the high resolution and mass accuracy of FTICR mass spectrometry. The algorithm is shown to be able to successfully identify the (15)N/(14)N peptide pairs and calculate peptide relative abundance ratios in highly complex mixtures from the proteolytic digest of a whole organism protein extract.
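The pairing rests on fixed mass arithmetic: a peptide with n nitrogen atoms gains roughly n × 0.99703 Da when fully 15N-labeled. A simplified sketch follows (the peak list, tolerance, and nitrogen-count range are illustrative assumptions, not the published algorithm):

```python
# Pair each peptide mass with candidates heavier by an integer number
# of 15N mass shifts, within a ppm tolerance.
DELTA_15N = 0.997035  # Da; mass difference between 15N and 14N

def find_pairs(masses, n_range=(5, 40), tol_ppm=2.0):
    """Return (unlabeled mass, labeled mass, nitrogen count) triples."""
    masses = sorted(masses)
    pairs = []
    for m in masses:
        for n in range(*n_range):          # plausible nitrogen counts per peptide
            target = m + n * DELTA_15N     # expected labeled-partner mass
            tol = target * tol_ppm * 1e-6  # convert ppm tolerance to Da
            pairs += [(m, c, n) for c in masses if abs(c - target) <= tol]
    return pairs

unlabeled = 1024.5512                      # hypothetical mass, 12 nitrogen atoms
peaks = [unlabeled, unlabeled + 12 * DELTA_15N, 1530.7741]
print(find_pairs(peaks))                   # -> [(1024.5512, ~1036.5156, 12)]
```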
NASA Technical Reports Server (NTRS)
Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.
1992-01-01
Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.
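A conceptual sketch of that adjustment calculation, under the assumption (ours, not the paper's) that a linear influence matrix maps motor moves to surface displacement at the photogrammetry targets, is a linear least-squares problem:

```python
# Solve for motor moves x minimizing ||A x + e||, where A is the
# influence matrix and e the measured surface error vector. All
# dimensions and values here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_targets, n_motors = 60, 12
A = rng.standard_normal((n_targets, n_motors)) * 0.01  # influence matrix (assumed)
e = rng.standard_normal(n_targets) * 0.18              # measured errors, inches

x, *_ = np.linalg.lstsq(A, -e, rcond=None)             # least-squares motor moves
rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
print(f"rms error: {rms(e):.3f} in before, {rms(A @ x + e):.3f} in after")
```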
Robson, Barry
2016-12-01
The Q-UEL language of XML-like tags and the associated software applications are providing a valuable toolkit for Evidence Based Medicine (EBM). In this paper the already existing applications, data bases, and tags are brought together with new ones. The particular Q-UEL embodiment used here is the BioIngine. The main challenge is one of bringing together the methods of symbolic reasoning and calculative probabilistic inference that underlie EBM and medical decision making. Some space is taken to review this background. The unification is greatly facilitated by Q-UEL's roots in the notation and algebra of Dirac, and by extending Q-UEL into the Wolfram programming environment. Further, the overall problem of integration is also a relatively simple one because of the nature of Q-UEL as a language for interoperability in healthcare and biomedicine, while the notion of workflow is facilitated because of the EBM best practice known as PICO. What remains difficult is achieving a high degree of overall automation because of a well-known difficulty in capturing human expertise in computers: the Feigenbaum bottleneck. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kadji, Caroline; De Groof, Maxime; Camus, Margaux F; De Angelis, Riccardo; Fellas, Stéphanie; Klass, Magdalena; Cecotti, Vera; Dütemeyer, Vivien; Barakat, Elie; Cannie, Mieke M; Jani, Jacques C
2017-01-01
The aim of this study was to apply a semi-automated calculation method of fetal body volume and, thus, of magnetic resonance-estimated fetal weight (MR-EFW) prior to planned delivery and to evaluate whether the technique of measurement could be simplified while remaining accurate. MR-EFW was calculated using a semi-automated method at 38.6 weeks of gestation in 36 patients and compared to the picture archiving and communication system (PACS). Per patient, 8 sequences were acquired with a slice thickness of 4-8 mm and an intersection gap of 0, 4, 8, 12, 16, or 20 mm. The median absolute relative errors for MR-EFW and the time of planimetric measurements were calculated for all 8 sequences and for each method (assisted vs. PACS), and the difference between the methods was calculated. The median delivery weight was 3,280 g. The overall median relative error for all 288 MR-EFW calculations was 2.4% using the semi-automated method and 2.2% for the PACS method. Measurements did not differ between the 8 sequences using the assisted method (p = 0.313) or the PACS (p = 0.118), while the time of planimetric measurement decreased significantly with a larger gap (p < 0.001) and in the assisted method compared to the PACS method (p < 0.01). Our simplified MR-EFW measurement showed a dramatic decrease in time of planimetric measurement without a decrease in the accuracy of weight estimates. © 2017 S. Karger AG, Basel.
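The planimetric volume step implicit in this protocol is a summation of traced slice areas times the center-to-center slice spacing (thickness plus gap); the numbers below are illustrative, not study data, and the final volume-to-weight conversion uses a published regression not reproduced here:

```python
# Sum traced fetal body areas over consecutive slices to get the volume.
slice_areas_cm2 = [180.0, 210.0, 240.0, 250.0, 235.0, 205.0, 170.0]  # hypothetical
thickness_mm, gap_mm = 6.0, 12.0             # one of the acquired sequence geometries
spacing_cm = (thickness_mm + gap_mm) / 10.0  # center-to-center spacing, cm

volume_cm3 = sum(slice_areas_cm2) * spacing_cm
print(f"fetal body volume ~ {volume_cm3:.0f} cm^3")
```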
Advanced tow placement of composite fuselage structure
NASA Technical Reports Server (NTRS)
Anderson, Robert L.; Grant, Carroll G.
1992-01-01
The Hercules NASA ACT program was established to demonstrate and validate the low-cost potential of the automated tow placement process for fabrication of aircraft primary structures. The program is currently being conducted as a cooperative program in collaboration with the Boeing ATCAS program. The Hercules advanced tow placement process has been in development since 1982 and was developed specifically for composite aircraft structures. The second-generation machine, now in operation at Hercules, is a production-ready machine that uses a low-cost prepreg tow material form to produce structures with laminate properties equivalent to prepreg tape layup. Current program activities are focused on demonstration of the automated tow placement process for fabrication of subsonic transport aircraft fuselage crown quadrants. We are working with Boeing Commercial Aircraft and Douglas Aircraft during this phase of the program. The Douglas demonstration panel has co-cured skin/stringers, and the Boeing demonstration panel is an intricately bonded part with co-cured skin/stringers and co-bonded frames. Other aircraft structures that were evaluated for the automated tow placement process include engine nacelle components, fuselage pressure bulkheads, and fuselage tail cones. Because of the cylindrical shape of these structures, multiple parts can be fabricated on one tow placement tool, thus reducing the cost per pound of the finished part.
Automated Planning Enables Complex Protocols on Liquid-Handling Robots.
Whitehead, Ellis; Rudolf, Fabian; Kaltenbach, Hans-Michael; Stelling, Jörg
2018-03-16
Robotic automation in synthetic biology is especially relevant for liquid handling to facilitate complex experiments. However, research tasks that are not highly standardized are still rarely automated in practice. Two main reasons for this are the substantial investments required to translate molecular biological protocols into robot programs, and the fact that the resulting programs are often too specific to be easily reused and shared. Recent developments of standardized protocols and dedicated programming languages for liquid-handling operations addressed some aspects of ease-of-use and portability of protocols. However, either they focus on simplicity, at the expense of enabling complex protocols, or they entail detailed programming, with corresponding skills and efforts required from the users. To reconcile these trade-offs, we developed Roboliq, a software system that uses artificial intelligence (AI) methods to integrate (i) generic formal, yet intuitive, protocol descriptions, (ii) complete, but usually hidden, programming capabilities, and (iii) user-system interactions to automatically generate executable, optimized robot programs. Roboliq also enables high-level specifications of complex tasks with conditional execution. To demonstrate the system's benefits for experiments that are difficult to perform manually because of their complexity, duration, or time-critical nature, we present three proof-of-principle applications for the reproducible, quantitative characterization of GFP variants.
Biocoder: A programming language for standardizing and automating biology protocols
2010-01-01
Background Published descriptions of biology protocols are often ambiguous and incomplete, making them difficult to replicate in other laboratories. However, there is increasing benefit to formalizing the descriptions of protocols, as laboratory automation systems (such as microfluidic chips) are becoming increasingly capable of executing them. Our goal in this paper is to improve both the reproducibility and automation of biology experiments by using a programming language to express the precise series of steps taken. Results We have developed BioCoder, a C++ library that enables biologists to express the exact steps needed to execute a protocol. In addition to being suitable for automation, BioCoder converts the code into a readable, English-language description for use by biologists. We have implemented over 65 protocols in BioCoder; the most complex of these was successfully executed by a biologist in the laboratory using BioCoder as the only reference. We argue that BioCoder exposes and resolves ambiguities in existing protocols, and could provide the software foundations for future automation platforms. BioCoder is freely available for download at http://research.microsoft.com/en-us/um/india/projects/biocoder/. Conclusions BioCoder represents the first practical programming system for standardizing and automating biology protocols. Our vision is to change the way that experimental methods are communicated: rather than publishing a written account of the protocols used, researchers will simply publish the code. Our experience suggests that this practice is tractable and offers many benefits. We invite other researchers to leverage BioCoder to improve the precision and completeness of their protocols, and also to adapt and extend BioCoder to new domains. PMID:21059251
Gavino, V C; Milo, G E; Cornwell, D G
1982-03-01
Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
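A worked sketch of those calculations, using the reported regression to convert diameters to cell counts (the frequency/diameter bins and the micrometer unit are illustrative assumptions):

```python
# Convert colony diameters to cell counts via log10(N) = 1.98*log10(d) - 3.469,
# then compute average colony size N_A and population doublings PD = log2(N_A).
import math

def cells_per_colony(d_um: float) -> float:
    """Cell count from colony diameter via the reported power law."""
    return 10 ** (1.98 * math.log10(d_um) - 3.469)

# Hypothetical (frequency, mean diameter in micrometers) bins:
bins = [(120, 150.0), (60, 300.0), (20, 600.0)]
f_T = sum(f for f, _ in bins)                        # total colonies
N_T = sum(f * cells_per_colony(d) for f, d in bins)  # total cells
N_A = N_T / f_T                                      # average colony size
PD = math.log2(N_A)                                  # population doublings
print(f"N_A = {N_A:.1f} cells/colony, PD = {PD:.2f}")
```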
NASA Technical Reports Server (NTRS)
Dominick, Wayne D.
1987-01-01
This grant final report executive summary documents a major, long-term program addressing innovative educational issues associated with the development, administration, evaluation, and widespread distribution of transportable educational programs for scientists and engineers to increase their knowledge of, and facilitate their utilization of, automated scientific and technical information storage and retrieval systems. This educational program is of very broad scope, being targeted at Colleges of Engineering and Colleges of Physical Sciences at a large number of colleges and universities throughout the United States. The educational program is designed to incorporate extensive hands-on, interactive usage of the NASA RECON system and is supported by a number of microcomputer-based software systems to facilitate the delivery and usage of the educational course materials developed as part of the program.
Lead optimization mapper: automating free energy calculations for lead optimization.
Liu, Shuai; Wu, Yujie; Lin, Teng; Abel, Robert; Redmann, Jonathan P; Summa, Christopher M; Jaber, Vivian R; Lim, Nathan M; Mobley, David L
2013-09-01
Alchemical free energy calculations hold increasing promise as an aid to drug discovery efforts. However, applications of these techniques in discovery projects have been relatively few, partly because of the difficulty of planning and setting up calculations. Here, we introduce the lead optimization mapper, LOMAP, an automated algorithm to plan efficient relative free energy calculations between potential ligands within a substantial library of perhaps hundreds of compounds. In this approach, ligands are first grouped by structural similarity, primarily based on the size of a (loosely defined) maximal common substructure, and then calculations are planned within and between sets of structurally related compounds. An emphasis is placed on ensuring that relative free energies can be obtained between any pair of compounds without combining the results of too many different relative free energy calculations (to avoid accumulation of error), and on providing some redundancy to allow for error and consistency checking and to give some insight into when results can be expected to be unreliable. The algorithm is discussed in detail and a Python implementation, based on both Schrödinger's and OpenEye's APIs, has been made available freely under the BSD license.
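Conceptually, the planning step is a graph problem; the sketch below (our simplification, not LOMAP's code, with a toy character-overlap stand-in for the MCS-based score) builds a sparse similarity graph and checks its connectivity and path lengths:

```python
# Build a ligand similarity graph, keep only "easy" transformations,
# and verify that every pair is connected by few hops.
import itertools
import networkx as nx

def similarity(a: str, b: str) -> float:
    """Toy stand-in for an MCS-based similarity score in [0, 1]."""
    return len(set(a) & set(b)) / max(len(set(a)), len(set(b)))

ligands = ["benzene", "toluene", "phenol", "anisole", "aniline"]
G = nx.Graph()
G.add_nodes_from(ligands)
for a, b in itertools.combinations(ligands, 2):
    s = similarity(a, b)
    if s > 0.3:  # keep only sufficiently similar (easy) transformations
        G.add_edge(a, b, weight=s)

print("connected:", nx.is_connected(G))    # every compound reachable?
hops = max(nx.shortest_path_length(G, a, b)
           for a, b in itertools.combinations(ligands, 2))
print("max hops between any pair:", hops)  # fewer hops -> less error accumulation
```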
Automating Disk Forensic Processing with SleuthKit, XML and Python
2009-05-01
We have developed a program called fiwalk which ... files themselves. We show how it is relatively simple to create automated disk forensic applications using a Python module we have written that reads ... software that the portable device may contain. Keywords: Computer Forensics; XML; Sleuth Kit; Python
2011 Information Systems Summit 2
2011-04-06
... to automate. Some criteria that should be considered: Are the tests easy to automate? What makes a test easy to automate is the ability to script ... ANSI-748-B defines 32 criteria needed for a FAR/DFARS-compliant Earned Value Management System. These criteria address 5 areas of Earned Value ... are the basis of increasing the probability of success of any program. But there are 11 critical criteria that must be present no matter what.
Load-Differential Features for Automated Detection of Fatigue Cracks Using Guided Waves (Preprint)
2011-11-01
Tensile loads open fatigue cracks and thus enhance their detectability using ultrasonic methods. Here we introduce a class of load-differential methods ...
1982-01-27
[Report fragments; recoverable section headings: Earth Location, Colocation, and Normalization; Image Analysis (Interactive Capabilities, Examples); Automated Cloud ...] ... the Man computer Interactive Data Access System (McIDAS) before image analysis and algorithm development were done. Earth location is an automated procedure to ... the factor 1/s in (SSE) toward the gain settings given in Table 5. ... The development of automated ...
NASA Technical Reports Server (NTRS)
1983-01-01
An important concept of the Action Information Management System (AIMS) approach is to evaluate office automation technology in the context of hands-on use by technical program managers, including the human acceptance difficulties which may accompany the transition to a significantly changing work environment. The improved productivity and communications which result from application of office automation technology are already well established for general office environments, but benefits unique to NASA are anticipated, and these will be explored in detail.
DOE Program on Seismic Characterization for Regions of Interest to CTBT Monitoring,
1995-08-14
... processing of the monitoring network data. While developing and testing the corrections and other parameters needed by the automated processing systems ... the secondary network. Parameters tabulated in the knowledge base must be appropriate for routine automated processing of network data, and must also ... operation of the PNDC, as well as to results of investigations of "special events" (i.e., those events that fail to locate or discriminate during automated ...
Automated Decision Making and Problem Solving. Volume 1: Executive Summary
NASA Technical Reports Server (NTRS)
Heer, E.
1981-01-01
The May 1980 conference is summarized. Related topics in artificial intelligence, operations research, and control theory were explored. Existing techniques were assessed, trends of development were determined, and the potential for application in NASA automation technology programs was identified. Formal presentations were made by experts in the three disciplines, and a workshop was held in which current technology in automation and possible NASA interfaces with the academic community to advance this technology were discussed.