Cost Considerations in Nonlinear Finite-Element Computing
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.
1985-01-01
Conference paper discusses computational requirements for finite-element analysis using quasi-linear approach to nonlinear problems. Paper evaluates computational efficiency of different computer architectural types in terms of relative cost and computing time.
Low-cost data analysis systems for processing multispectral scanner data
NASA Technical Reports Server (NTRS)
Whitely, S. L.
1976-01-01
The basic hardware and software requirements are described for four low-cost analysis systems for computer-generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. Software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-and-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.
A computer program for analysis of fuelwood harvesting costs
George B. Harpole; Giuseppe Rensi
1985-01-01
The fuelwood harvesting computer program (FHP) is written in FORTRAN 60 and designed to select a collection of harvest units and systems from among alternatives to satisfy specified energy requirements at the lowest cost per million Btu's as recovered in a boiler, or per thousand pounds of H2O evaporative capacity in kiln drying. Computed energy costs are used as a...
PCS: a pallet costing system for wood pallet manufacturers (version 1.0 for Windows®)
A. Jefferson, Jr. Palmer; Cynthia D. West; Bruce G. Hansen; Marshall S. White; Hal L. Mitchell
2002-01-01
The Pallet Costing System (PCS) is a computer-based, Microsoft Windows® application that computes the total and per-unit cost of manufacturing an order of wood pallets. Information about the manufacturing facility, along with the pallet-order requirements provided by the customer, is used in determining production cost. The major cost factors addressed by PCS...
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
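The record above describes ordering jobs by predicted runtime to keep a rented cluster busy. As an illustration only (the paper's actual runtime model and scheduler are not reproduced here; predict_runtime and its constant are hypothetical), a longest-processing-time-first greedy over a fixed pool of nodes captures the idea of minimizing billed-but-idle node hours:

```python
import heapq

def predict_runtime(size_a, size_b, k=1e-16):
    # Hypothetical stand-in for the paper's runtime model: cost grows with the
    # product of the two genome sizes (pairwise sequence comparison).
    return k * size_a * size_b

def schedule(job_runtimes, n_nodes):
    # Longest-processing-time-first greedy: always hand the next-largest job to
    # the least-loaded node, keeping billed node hours close to used node hours.
    loads = [(0.0, i) for i in range(n_nodes)]
    heapq.heapify(loads)
    for rt in sorted(job_runtimes, reverse=True):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + rt, i))
    makespan = max(load for load, _ in loads)
    utilization = sum(load for load, _ in loads) / (makespan * n_nodes)
    return makespan, utilization

genome_sizes = [4.6e6, 1.2e7, 3.2e9, 2.7e9, 1.4e7, 1.0e8]  # base pairs (made up)
jobs = [predict_runtime(genome_sizes[i], genome_sizes[j])
        for i in range(len(genome_sizes))
        for j in range(len(genome_sizes)) if i != j]
makespan, util = schedule(jobs, n_nodes=4)
print(f"predicted makespan {makespan:.2f} h, cluster utilization {util:.0%}")
```

Submitting the largest predicted jobs first is what prevents one long-running comparison from arriving late and forcing the whole cluster to idle, the waste the paper's model is designed to avoid.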
Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is automated software development-estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
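The record does not give COSTMODL's internal equations. As a hedged sketch of the kind of size-to-effort-to-schedule computation such a tool performs, the classic basic-COCOMO organic-mode coefficients can be used:

```python
# Hedged sketch: effort = 2.4 * KLOC^1.05 person-months and schedule =
# 2.5 * effort^0.38 months are the published basic-COCOMO organic-mode values,
# used here only to illustrate the class of model; COSTMODL's own equations
# are not reproduced in the record.
def cocomo_basic(kloc: float) -> tuple[float, float, float]:
    effort = 2.4 * kloc ** 1.05        # person-months
    schedule = 2.5 * effort ** 0.38    # calendar months
    staffing = effort / schedule       # average full-time staff
    return effort, schedule, staffing

effort, schedule, staff = cocomo_basic(32.0)   # a 32 KLOC product
print(f"effort {effort:.0f} PM, schedule {schedule:.1f} months, avg staff {staff:.1f}")
```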
2 CFR 215.51 - Monitoring and reporting program performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., such quantitative data should be related to cost data for computation of unit costs. (2) Reasons why..., analysis and explanation of cost overruns or high unit costs. (e) Recipients shall not be required to... clearance requirements of 5 CFR part 1320 when requesting performance data from recipients. ...
Cloud computing for comparative genomics with windows azure platform.
Kim, Insik; Jung, Jae-Yoon; Deluca, Todd F; Nelson, Tristan H; Wall, Dennis P
2012-01-01
Cloud computing services have emerged as a cost-effective alternative to cluster systems as the number of genomes, and the computation power required to analyze them, has increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services.
Real-time interactive 3D computer stereography for recreational applications
NASA Astrophysics Data System (ADS)
Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi
2008-02-01
With the increasing calculation costs of 3D computer stereography, its low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of human 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.
NASA Technical Reports Server (NTRS)
Wolfe, M. G.
1978-01-01
Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.
DORCA 2 computer program. Volume 3: Program listing
NASA Technical Reports Server (NTRS)
Carey, J. B.
1972-01-01
A program listing for the Dynamic Operational Requirements and Cost Analysis Program is presented. Detailed instructions for the computer programming involved in space mission planning and project requirements are developed.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink® for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink® model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
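A minimal sketch of the ESM idea described above (not the paper's Simulink model; every equivalency factor below is an assumed placeholder): each non-mass resource is converted to kilograms through an equivalency factor, and sizing to the peak of a simulated requirement profile, rather than to its nominal value, is what the dynamic computation adds:

```python
# All equivalency factors are assumptions for illustration, not ALS-published values.
EQ = {"volume": 66.7,    # kg per m^3 of pressurized volume (assumed)
      "power": 237.0,    # kg per kW of generation capacity (assumed)
      "cooling": 145.0,  # kg per kW of heat rejection (assumed)
      "crewtime": 0.94}  # kg per crew-hour per week (assumed)

def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_hr_wk):
    return (mass_kg + EQ["volume"] * volume_m3 + EQ["power"] * power_kw
            + EQ["cooling"] * cooling_kw + EQ["crewtime"] * crewtime_hr_wk)

# Hourly power profile from a simulation: nominal (average) sizing misses the peak.
power_profile = [3.1, 3.0, 5.8, 3.2, 3.1]                      # kW over time
static = esm(1200, 40, sum(power_profile) / len(power_profile), 4.0, 20)
dynamic = esm(1200, 40, max(power_profile), 4.0, 20)           # size for the peak
print(f"static ESM {static:.0f} kg vs peak-sized ESM {dynamic:.0f} kg")
```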
Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Zubair, Mohammad
1993-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and exhibits less than ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and becomes a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analyzed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
DORCA computer program. Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1971-01-01
The Dynamic Operational Requirements and Cost Analysis Program (DORCA) was written to provide a top-level analysis tool for NASA. DORCA relies on man-machine interaction to optimize results based on external criteria. DORCA relies heavily on outside sources to provide cost information and vehicle performance parameters, as the program does not determine these quantities but rather uses them. Given data describing missions, vehicles, payloads, containers, space facilities, schedules, cost values and costing procedures, the program computes flight schedules, cargo manifests, vehicle fleet requirements, acquisition schedules and cost summaries. The program is designed to consider the Earth Orbit, Lunar, Interplanetary and Automated Satellite Programs. A general outline of the capabilities of the program is provided.
Satellite broadcasting system study
NASA Technical Reports Server (NTRS)
1972-01-01
The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.
Costs, needs must be balanced when buying computer systems.
Krantz, G M; Doyle, J J; Stone, S G
1989-06-01
A healthcare institution must carefully examine its internal needs and external requirements before selecting an information system. The system's costs must be carefully weighed because significant computer cost overruns can cripple overall hospital finances. A New Jersey hospital carefully studied these issues and determined that a contract with a regional data center was its best option.
DEP : a computer program for evaluating lumber drying costs and investments
Stewart Holmes; George B. Harpole; Edward Bilek
1983-01-01
The DEP computer program is a modified discounted cash flow computer program designed for analysis of problems involving economic analysis of wood drying processes. Wood drying processes are different from other processes because of the large amounts of working capital required to finance inventories, and because of relatively large shares of costs charged to inventory...
The multi-disciplinary design study: A life cycle cost algorithm
NASA Technical Reports Server (NTRS)
Harding, R. R.; Pichi, F. J.
1988-01-01
The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a) Required...
Algorithm For Optimal Control Of Large Structures
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Garba, John A..; Utku, Senol
1989-01-01
Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.
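A minimal sketch of the kind of iteration named above, not the article's algorithm: Picard (fixed-point) iteration for a scalar Volterra integral equation of the second kind, u(t) = g(t) + ∫₀ᵗ K(t,s) u(s) ds, with trapezoidal quadrature. The article iterates an analogous matrix-valued equation whose unknown is the optimal control vector:

```python
import numpy as np

def solve_volterra(g, K, t, n_iter=60, tol=1e-12):
    # Picard iteration: u_{k+1}(t) = g(t) + integral_0^t K(t,s) u_k(s) ds
    u = np.asarray(g(t), dtype=float)
    for _ in range(n_iter):
        u_new = np.empty_like(u)
        u_new[0] = g(t[0])
        for i in range(1, len(t)):
            w = K(t[i], t[:i + 1]) * u[:i + 1]      # integrand samples on [0, t_i]
            u_new[i] = g(t[i]) + np.sum((w[1:] + w[:-1]) * np.diff(t[:i + 1])) / 2.0
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return u

# Test problem: u(t) = 1 + integral_0^t u(s) ds has exact solution u(t) = exp(t).
t = np.linspace(0.0, 1.0, 201)
u = solve_volterra(lambda x: np.ones_like(x, dtype=float),
                   lambda ti, s: np.ones_like(s), t)
print(np.max(np.abs(u - np.exp(t))))                # small trapezoidal error
```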
Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources
NASA Astrophysics Data System (ADS)
Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.
2011-12-01
Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
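The paper's conclusion (own the baseline, rent the bursts) reduces to simple cost arithmetic. A back-of-envelope sketch, with all prices assumed rather than taken from the paper:

```python
# All figures hypothetical: the point is the structure of the comparison, not the values.
dedicated_node_year = 3000.0   # amortized purchase + power + admin, $/node-year (assumed)
cloud_node_hour = 0.68         # on-demand price, $/node-hour (assumed)

def yearly_cost(baseline_nodes, burst_nodes, burst_hours, strategy):
    if strategy == "all_dedicated":    # own enough nodes to cover the peak
        return (baseline_nodes + burst_nodes) * dedicated_node_year
    if strategy == "burst_to_cloud":   # own the baseline, rent the spikes
        return (baseline_nodes * dedicated_node_year
                + burst_nodes * burst_hours * cloud_node_hour)
    raise ValueError(strategy)

for s in ("all_dedicated", "burst_to_cloud"):
    print(s, yearly_cost(baseline_nodes=100, burst_nodes=200, burst_hours=300, strategy=s))
```

With these assumed numbers, renting 200 extra nodes for a 300-hour spike costs a fraction of owning them year-round, while steady baseline load still favors dedicated hardware, which is the paper's qualitative finding.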
14 CFR 1260.151 - Monitoring and reporting program performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... quantitative data should be related to cost data for computation of unit costs. (2) Reasons why established..., analysis and explanation of cost overruns or high unit costs. (e) Recipients shall not be required to... performance data from recipients. ...
DOT National Transportation Integrated Search
2000-06-01
The Highway Economic Requirements System (HERS) computer model estimates investment requirements for the nation's highways by adding together the costs of highway improvements that the model's benefit-cost analyses indicate are warranted. In making i...
Computer-assisted Behavioral Therapy and Contingency Management for Cannabis Use Disorder
Budney, Alan J.; Stanger, Catherine; Tilford, J. Mick; Scherer, Emily; Brown, Pamela C.; Li, Zhongze; Li, Zhigang; Walker, Denise
2015-01-01
Computer-assisted behavioral treatments hold promise for enhancing access to and reducing costs of treatments for substance use disorders. This study assessed the efficacy of a computer-assisted version of an efficacious, multicomponent treatment for cannabis use disorders (CUD), i.e., motivational enhancement therapy, cognitive-behavioral therapy, and abstinence-based contingency management (MET/CBT/CM). An initial cost comparison was also performed. Seventy-five adult participants, 59% African Americans, seeking treatment for CUD received either MET only (BRIEF), therapist-delivered MET/CBT/CM (THERAPIST), or computer-delivered MET/CBT/CM (COMPUTER). During treatment, the THERAPIST and COMPUTER conditions engendered longer durations of continuous cannabis abstinence than BRIEF (p < .05), but did not differ from each other. Abstinence rates and reduction in days of use over time were maintained in COMPUTER at least as well as in THERAPIST. COMPUTER averaged approximately $130 (p < .05) less per case than THERAPIST in therapist costs, which offset most of the costs of CM. Results add to promising findings that illustrate potential for computer-assisted delivery methods to enhance access to evidence-based care, reduce costs, and possibly improve outcomes. The observed maintenance effects and the cost findings require replication in larger clinical trials. PMID:25938629
How Elected Officials Can Control Computer Costs.
ERIC Educational Resources Information Center
Grady, Daniel B.
Elected officials have a special obligation to monitor and make informed decisions about computer expenditures. In doing so, officials should insist that a needs assessment be carried out; review all cost and configuration data; draw up a master plan specifying user needs as well as hardware, software, and personnel requirements; and subject…
Code of Federal Regulations, 2010 CFR
2010-04-01
... increases. (b) At the owner's option, the cost of the computer software may include service contracts to... requirements. (c) The source of funds for the purchase of hardware or software, or contracting for services for... formatted data, including either the purchase and maintenance of computer hardware or software, or both, the...
ERIC Educational Resources Information Center
Vanderheiden, Gregg C.; Lee, Charles C.
Many low-cost and no-cost modifications to computers would greatly increase the number of disabled individuals who could use standard computers without requiring custom modifications, and would increase the ability to attach special input and output systems. The purpose of the Guidelines is to provide an awareness of these access problems and a…
36 CFR 1210.51 - Monitoring and reporting program performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... quantitative data should be related to cost data for computation of unit costs. (2) Reasons why established..., analysis and explanation of cost overruns or high unit costs. (e) Recipients shall not be required to... requesting performance data from recipients. ...
Simulation Applications in Educational Leadership.
ERIC Educational Resources Information Center
Bozeman, William; Wright, Robert H.
1995-01-01
Explores the use of computer-based simulations using multimedia materials for a graduate course in school administration. Highlights include simulation applications in military and in business; educational simulations; the use of computers and other technology; production requirements and costs; and time required. (LRW)
Radiation Tolerant, FPGA-Based SmallSat Computer System
NASA Technical Reports Server (NTRS)
LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew
2015-01-01
The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success; lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
44 CFR 13.42 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-10-01
... AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... to all financial and programmatic records, supporting documents, statistical records, and other... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
44 CFR 13.42 - Retention and access requirements for records.
Code of Federal Regulations, 2011 CFR
2011-10-01
... AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... to all financial and programmatic records, supporting documents, statistical records, and other... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
44 CFR 13.42 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-10-01
... AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... to all financial and programmatic records, supporting documents, statistical records, and other... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
44 CFR 13.42 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... to all financial and programmatic records, supporting documents, statistical records, and other... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
44 CFR 13.42 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-10-01
... AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... to all financial and programmatic records, supporting documents, statistical records, and other... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
Feeney, James M; Montgomery, Stephanie C; Wolf, Laura; Jayaraman, Vijay; Twohig, Michael
2016-09-01
Among transferred trauma patients, challenges with the transfer of radiographic studies include problems loading or viewing the studies at the receiving hospitals, and problems manipulating, reconstructing, or evaluating the transferred images. Cloud-based image transfer systems may address some of these problems. We reviewed the charts of patients transferred during one year surrounding the adoption of a cloud computing data transfer system. We compared the rates of repeat imaging before (precloud) and after (postcloud) the adoption of the cloud-based data transfer system. During the precloud period, 28 out of 100 patients required 90 repeat studies. With the cloud computing transfer system in place, three out of 134 patients required seven repeat films. There was a statistically significant decrease in the proportion of patients requiring repeat films (28% to 2.2%, P < .0001). Based on an annualized volume of 200 trauma patient transfers, the cost savings estimated using three methods of cost analysis is between $30,272 and $192,453.
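The abstract's proportions can be checked directly from its reported counts (28 of 100 precloud vs. 3 of 134 postcloud). The abstract does not state which test was used; Fisher's exact test is one reasonable choice:

```python
from scipy.stats import fisher_exact

table = [[28, 100 - 28],   # precloud:  patients with repeat films, without
         [3, 134 - 3]]     # postcloud: patients with repeat films, without
odds_ratio, p_value = fisher_exact(table)
print(f"28/100 = {28/100:.1%} vs 3/134 = {3/134:.1%}, p = {p_value:.1e}")
```

The 3/134 rate rounds to the abstract's 2.2%, and the p-value is far below .0001, consistent with the reported significance.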
Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning
Kok, Kai Yit; Rajendran, Parvathy
2016-01-01
The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four random tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters are required, together with a user-specified weightage between path and computational cost. However, the optimum settings of these tuning parameters vary by application. Instead of trial and error, this paper presents a method for optimizing the differential evolution tuning parameters for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weightage between path and computational cost and to converge within the minimum number of generations required. In conclusion, the proposed optimization of tuning parameters in the differential evolution algorithm for UAV path planning expedites and improves the final output path and computational cost. PMID:26943630
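A minimal DE/rand/1/bin sketch showing where the four tuning parameters named above (population size NP, differential weight F, crossover rate CR, and generation count) enter the loop; the waypoint objective is a hypothetical stand-in for the paper's weighted path-plus-computation cost:

```python
import numpy as np

def differential_evolution(cost, bounds, NP=30, F=0.6, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(NP, len(lo)))
    fit = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(NP):
            a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # differential mutation
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True             # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f <= fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()

# Hypothetical objective: shortest path through two free waypoints from (0,0) to (10,10).
def path_cost(x):
    pts = np.vstack([[0, 0], x.reshape(2, 2), [10, 10]])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

best, f = differential_evolution(path_cost, np.array([[0, 10.0]] * 4))
print(best.round(2), f)   # collinear waypoints approach the straight-line length ~14.14
```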
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
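Not the authors' MapReduce code, but a stand-in showing the same decomposition with Python multiprocessing: the "map" step computes a partial back-projection for a chunk of view angles, and the "reduce" step sums the partial images, so each worker communicates one image, mirroring the shuffle cost the paper identifies as a bottleneck:

```python
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import rotate

N, N_VIEWS, N_WORKERS = 128, 180, 8
SINOGRAM = np.random.default_rng(0).random((N_VIEWS, N))   # placeholder projections

def backproject_chunk(angle_indices):
    # "Map": accumulate an unfiltered partial back-projection for a chunk of views.
    partial = np.zeros((N, N))
    for k in angle_indices:
        smear = np.tile(SINOGRAM[k], (N, 1))               # smear one projection row
        partial += rotate(smear, k * 180.0 / N_VIEWS, reshape=False, order=1)
    return partial

if __name__ == "__main__":
    chunks = np.array_split(np.arange(N_VIEWS), N_WORKERS)
    with Pool(N_WORKERS) as pool:
        image = sum(pool.map(backproject_chunk, chunks))   # "Reduce": sum partials
    print(image.shape)                                     # (128, 128) reconstruction
```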
NOSC Program Managers Handbook. Revision 1
1988-02-01
cost. The effects of application of life-cycle cost analysis through the planning and RDT&E phases of a program, and the "design to cost" concept on... is the plan for assuring the quality of the design, design documentation, and fabricated/assembled hardware and associated computer software. 13.5.3.2... listings and printouts, which document the requirements, design, or details of computer software; explain the capabilities and limitations of the
ERIC Educational Resources Information Center
Stone, Antonia
1982-01-01
Provides general information on currently available microcomputers, computer programs (software), hardware requirements, software sources, costs, computer games, and programing. Includes a list of popular microcomputers, providing price category, model, list price, software (cassette, tape, disk), monitor specifications, amount of random access…
15 CFR 14.51 - Monitoring and reporting program performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... readily quantified, such quantitative data should be related to cost data for computation of unit costs... including, when appropriate, analysis and explanation of cost overruns or high unit costs. (e) Recipients... comply with clearance requirements of 5 CFR part 1320 when requesting performance data from recipients. ...
NASA Technical Reports Server (NTRS)
Devito, D. M.
1981-01-01
A low-cost GPS civil-user mobile terminal whose purchase cost is roughly an order of magnitude less than estimates for its military counterpart is considered, with focus on ground station requirements for position monitoring of civil users requiring this capability, and on civil-user navigation and location-monitoring requirements. Existing survey literature was examined to ascertain the potential users of a low-cost NAVSTAR receiver and to estimate their number, function, and accuracy requirements. System concepts are defined for low-cost user equipment for in-situ navigation and for the retransmission of low-data-rate positioning data via a geostationary satellite to a central computing facility.
29 CFR 95.51 - Monitoring and reporting program performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... quantitative data should be related to cost data for computation of unit costs. (2) Reasons why established..., analysis and explanation of cost overruns or high unit costs. (e) Recipients shall not be required to... performance data from recipients. (Approved by the Office of Management and Budget, Approval Number 1225-0017) ...
Cost effectiveness of a computer-delivered intervention to improve HIV medication adherence.
Ownby, Raymond L; Waldrop-Valverde, Drenna; Jacobs, Robin J; Acevedo, Amarilis; Caballero, Joshua
2013-02-28
High levels of adherence to medications for HIV infection are essential for optimal clinical outcomes and to reduce viral transmission, but many patients do not achieve required levels. Clinician-delivered interventions can improve patients' adherence, but usually require substantial effort by trained individuals and may not be widely available. Computer-delivered interventions can address this problem by reducing required staff time for delivery and by making the interventions widely available via the Internet. We previously developed a computer-delivered intervention designed to improve patients' level of health literacy as a strategy to improve their HIV medication adherence. The intervention was shown to increase patients' adherence, but it was not clear that the benefits resulting from the increase in adherence could justify the costs of developing and deploying the intervention. The purpose of this study was to evaluate the relation of development and deployment costs to the effectiveness of the intervention. Costs of intervention development were drawn from accounting reports for the grant under which its development was supported, adjusted for costs primarily resulting from the project's research purpose. Effectiveness of the intervention was drawn from results of the parent study. The relation of the intervention's effects to changes in health status, expressed as utilities, was also evaluated in order to assess the net cost of the intervention in terms of quality-adjusted life years (QALYs). Sensitivity analyses evaluated ranges of possible intervention effectiveness and durations of its effects, and costs were evaluated over several deployment scenarios. The intervention's cost effectiveness depends largely on the number of persons using it and the duration of its effectiveness. Even with modest effects for a small number of patients, the intervention was associated with net cost savings in some scenarios, and for durations of three months and longer it was usually associated with a favorable cost per QALY. For intermediate and larger assumed effects and longer durations of intervention effectiveness, the intervention was associated with net cost savings. Computer-delivered adherence interventions may be a cost-effective strategy to improve adherence in persons treated for HIV. Clinicaltrials.gov identifier NCT01304186.
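A hedged illustration of the cost-per-QALY arithmetic such an analysis performs; every number below is hypothetical, not taken from the study:

```python
# All values are assumptions for illustration only.
dev_cost = 250_000.0          # one-time development cost (assumed)
per_user_cost = 15.0          # deployment cost per user (assumed)
avoided_care_cost = 40.0      # downstream savings per adherent user (assumed)
qaly_gain_per_user = 0.01     # QALY gain from improved adherence (assumed)
users = 5_000

net_cost = dev_cost + users * (per_user_cost - avoided_care_cost)
if net_cost <= 0:
    print(f"net saving ${-net_cost:,.0f}: the intervention dominates")
else:
    icer = net_cost / (users * qaly_gain_per_user)   # incremental cost per QALY
    print(f"net cost ${net_cost:,.0f}, ICER ${icer:,.0f} per QALY gained")
```

The study's sensitivity analyses amount to sweeping these inputs (users, effect size, duration) and observing when the net cost turns negative or the ICER falls below an acceptability threshold.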
29 CFR 548.200 - Requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Requirements for A Basic Rate § 548.200... instance, under certain conditions the Regulations permit the use of the daily average hourly earnings of... excluded from the computation of overtime pay. 5 It is the exclusion of the cost of the meals that is...
The minimal work cost of information processing
NASA Astrophysics Data System (ADS)
Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato
2015-07-01
Irreversible information processing cannot be carried out without some inevitable thermodynamical work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices constrains their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditioned on the output of the computation. Our formula takes precisely into account the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.
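Schematically, the result described above can be written as follows; note this is a simplified rendering (the paper's precise statement uses a smooth max-entropy rather than the plain conditional entropy H used here):

```latex
% W: minimal work; E: discarded information; O: output of the logical process
W_{\min} \;\approx\; k_B \, T \ln 2 \;\cdot\; H(E \mid O)
% Erasing one fully random bit (H(E|O) = 1) recovers Landauer's bound:
% k_B T ln 2 is roughly 2.9e-21 J at T = 300 K.
```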
Low-cost space-varying FIR filter architecture for computational imaging systems
NASA Astrophysics Data System (ADS)
Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.
2010-01-01
Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
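A sketch of the radial filter-bank idea (band edges and kernel strengths are made up): because the PSF of a rotationally symmetric lens varies mainly with distance from the optical axis, one small FIR sharpening kernel per radial band approximates a fully space-varying filter at a fraction of the cost:

```python
import numpy as np
from scipy.signal import convolve2d

def unsharp_kernel(amount):
    # 3x3 sharpening kernel; DC gain is 1 so flat regions are unchanged.
    k = -amount * np.ones((3, 3)) / 8.0
    k[1, 1] = 1.0 + amount
    return k

BAND_EDGES = [0.33, 0.66, 1.01]                          # normalized radii (assumed)
KERNELS = [unsharp_kernel(a) for a in (0.3, 0.8, 1.5)]   # stronger toward the edge

def sharpen(image):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    filtered = [convolve2d(image, k, mode="same", boundary="symm") for k in KERNELS]
    out = np.zeros_like(image, dtype=float)
    inner = 0.0
    for edge, f in zip(BAND_EDGES, filtered):            # one filter per radial band
        band = (r >= inner) & (r < edge)
        out[band] = f[band]
        inner = edge
    return out

print(sharpen(np.random.default_rng(1).random((64, 64))).shape)
```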
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... computer software applications. RUS will evaluate borrower load forecasts for readability, understanding..., distribution costs, other systems costs, average revenue per kWh, and inflation. Also, a borrower's engineering...
Furniture rough mill costs evaluated by computer simulation
R. Bruce Anderson
1983-01-01
A crosscut-first furniture rough mill was simulated to evaluate processing and raw material costs on an individual part basis. Distributions representing the real-world characteristics of lumber, equipment feed speeds, and processing requirements are programed into the simulation. Costs of parts from a specific cutting bill are given, and effects of lumber input costs...
28 CFR 70.51 - Monitoring and reporting program performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... quantitative data should be related to cost data for computation of unit costs. (2) Reasons why established..., analysis and explanation of cost overruns or high unit costs. (d) Recipients are required to submit the... when requesting performance data from recipients. [Order No. 1980-95, 60 FR 38242, July 26, 1995; Order...
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach addresses weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
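A sketch of the matrix-free ingredient (the forward model below is a hypothetical stand-in): the action of the Jacobian on a vector is obtained from finite-difference forward runs, so a Gauss-Newton step built on K principal directions needs K extra forward runs instead of an assembled m-column Jacobian:

```python
import numpy as np

def forward(s):
    # Hypothetical mildly nonlinear observation operator (n = 3 observations).
    return np.array([s.sum(), np.sin(s[0]) + s[1] ** 2, s[::2].sum()])

def jacobian_action(forward, s, v, base=None, delta=1e-6):
    # J(s) @ v from a finite difference: one extra forward run per direction
    # (the base run forward(s) is shared across directions).
    if base is None:
        base = forward(s)
    return (forward(s + delta * v) - base) / delta

m, K = 10_000, 8                 # many unknowns, few retained principal directions
s = np.linspace(0.0, 1.0, m)
base = forward(s)
directions = np.random.default_rng(1).standard_normal((K, m))
Jv = np.array([jacobian_action(forward, s, v, base) for v in directions])
print(Jv.shape)                  # (K, 3): no m-column Jacobian is ever formed
```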
An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Evans, Katherine
2018-03-01
Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost savings from the MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
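A minimal MLMC sketch (the level model is hypothetical): the expectation of a fine-level quantity P_L is telescoped as E[P_0] + Σ_l E[P_l − P_{l−1}], so cheap coarse levels absorb most of the samples and only a few expensive fine-level samples are needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_P(level, u):
    # Hypothetical level-l model: midpoint quadrature of exp(u*x) on 2**(level+1)
    # cells; finer levels are more accurate and proportionally more expensive.
    n = 2 ** (level + 1)
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(u * x))

def mlmc(samples_per_level):
    est = 0.0
    for level, N in enumerate(samples_per_level):
        u = rng.uniform(0.5, 1.5, N)               # random model input
        fine = np.array([sample_P(level, ui) for ui in u])
        coarse = (np.array([sample_P(level - 1, ui) for ui in u])
                  if level > 0 else np.zeros(N))
        est += np.mean(fine - coarse)              # telescoping correction term
    return est

# Many cheap coarse samples, few expensive fine ones, across five levels.
print(mlmc([4000, 1000, 250, 60, 15]))
# For this toy problem the target E[(e^u - 1)/u], u ~ U(0.5, 1.5), is roughly 1.75.
```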
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1980-01-01
The computational techniques utilized at Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. These computer codes are: NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight and cost for representative types of aircraft and missions.
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at Johnson Space Center, Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
Reducing the latency of the Fractal Iterative Method to half an iteration
NASA Astrophysics Data System (ADS)
Béchet, Clémentine; Tallon, Michel
2013-12-01
The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported the requirement of only 3 iterations of the algorithm in order to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of a "warm-start" strategy in adaptive optics. To our knowledge, this particular way to use the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and at least reduced to half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on ESO's Octopus simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed, and even modeling errors.
NASA Technical Reports Server (NTRS)
Campbell, B. H.
1974-01-01
A methodology developed for the balanced design of spacecraft subsystems, interrelating cost, performance, safety, and schedule considerations, was refined. The methodology consists of a two-step process: the first step selects all hardware designs that satisfy the given performance and safety requirements; the second step estimates the cost and schedule required to design, build, and operate each spacecraft design. Using this methodology to develop a systems cost/performance model allows the user of such a model to establish specific designs and the related costs and schedule. The user is able to determine the sensitivity of designs, costs, and schedules to changes in requirements. The resulting systems cost/performance model is described and implemented as a digital computer program.
Integrated Computer-Aided Drafting Instruction (ICADI).
ERIC Educational Resources Information Center
Chen, C. Y.; McCampbell, David H.
Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
Synfuel program analysis. Volume 2: VENVAL users manual
NASA Astrophysics Data System (ADS)
Muddiman, J. B.; Whelan, J. W.
1980-07-01
This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in evaluation of prospective private sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, debt, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished.
ERIC Educational Resources Information Center
Van Dusseldorp, Ralph
1984-01-01
Describes the successful, low-cost program for infusion of computer competencies into the curriculum of the School of Education at the University of Alaska, Anchorage, where all students are required to become computer competent prior to graduation. Computer competency goals for students in school's certification programs are outlined. (MBR)
A Faculty-Computer Nexus. Microcomputing Working Paper Series.
ERIC Educational Resources Information Center
McCord, Joan
The effects of the rapid introduction of computing in education on Drexel University faculty were studied. The university decided that incoming 1983 freshmen would be required to own microcomputers, which could be bought at reduced cost. A questionnaire was administered to determine faculty members' experience with computers, their values,…
Digital Data Transmission Via CATV.
ERIC Educational Resources Information Center
Stifle, Jack; And Others
A low cost communications network has been designed for use in the PLATO IV computer-assisted instruction system. Over 1,000 remote computer graphic terminals each requiring a 1200 bps channel are to be connected to one centrally located computer. Digital data are distributed to these terminals using standard commercial cable television (CATV)…
Modeling and simulation of ocean wave propagation using lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Nuraiman, Dian
2017-10-01
In this paper, we present the modeling and simulation of ocean wave propagation from the deep sea to the shoreline. Simulation over such a large domain incurs a high computational cost. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of the free surface close to the shoreline. The results show that the coupled model can reduce the computational cost significantly compared to the full NSE model.
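The abstract does not give the lattice or boundary treatment; the sketch below is a generic D2Q9 BGK collide-and-stream step, shown only to illustrate the kind of update the lattice Boltzmann method performs, not the paper's coupled SWE/NSE model.

```python
import numpy as np

# D2Q9 lattice: velocities c_i, weights w_i, relaxation time tau.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8

def equilibrium(rho, u):
    # Standard second-order BGK equilibrium distribution.
    cu = np.einsum('id,xyd->xyi', c, u)                 # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_step(f):
    # Macroscopic moments.
    rho = f.sum(axis=-1)
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]
    # Collision: relax toward equilibrium with rate 1/tau.
    f = f - (f - equilibrium(rho, u)) / tau
    # Streaming: shift each population along its lattice velocity
    # (periodic boundaries for this sketch).
    for i, (cx, cy) in enumerate(c):
        f[..., i] = np.roll(np.roll(f[..., i], cx, axis=0), cy, axis=1)
    return f

nx, ny = 64, 32
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(100):
    f = bgk_step(f)
```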
The Effect of Computers on School Air-Conditioning.
ERIC Educational Resources Information Center
Fickes, Michael
2000-01-01
Discusses the issue of increased air-conditioning demand when schools equip their classrooms with computers that require enhanced and costlier air-conditioning systems. Air-conditioning costs are analyzed in two elementary schools and a middle school. (GR)
Arias-Vimárlund, V.; Ljunggren, M.; Timpka, T.
1996-01-01
OBJECTIVE: Exploration of the societal health economic effects occurring during the first year after implementation of Computerised Patient Records (CPRs) at Primary Health Care (PHC) centres. DESIGN: Comparative case studies of practice processes and their consequences one year after CPR implementation, using the constant comparison method. Application of transaction-cost analyses at a societal level on the results. SETTING: Two urban PHC centres under a managed care contract in Ostergötland county, Sweden. MAIN OUTCOME MEASURES: Central implementation issues. First-year societal direct normal costs, direct unexpected costs, and indirect costs. Societal benefits. RESULTS: The total societal effect of the CPR implementation was a cost of nearly 250,000 SEK (USD 37,000) per GP team. About 20% of the effect consisted of direct unexpected costs, accrued from the reduction of practitioners' leisure time. The main issues in the implementation process were medical informatics knowledge and computer skills, adaptation of the human-computer interaction design to practice routines, and information access through the CPR. CONCLUSIONS: The societal costs exceed the benefits during the first year after CPR implementation at the observed PHC centres. Early investments in requirements engineering and staff training may increase the efficiency. Exploitation of the CPR for disease prevention and clinical quality improvement is necessary to defend the investment in societal terms. The exact calculation of societal costs requires further analysis of the affected groups' willingness to pay. PMID:8947717
The Pilot Training Study: A Cost-Estimating Model for Advanced Pilot Training (APT).
ERIC Educational Resources Information Center
Knollmeyer, L. E.
The Advanced Pilot Training Cost Model is a statement of relationships that may be used, given the necessary inputs, for estimating the resources required and the costs to train pilots in the Air Force formal flying training schools. Resources and costs are computed by weapon system on an annual basis for use in long-range planning or sensitivity…
Non-steady state modelling of wheel-rail contact problem
NASA Astrophysics Data System (ADS)
Guiral, A.; Alonso, A.; Baeza, L.; Giménez, J. G.
2013-01-01
Among all the algorithms to solve the wheel-rail contact problem, Kalker's FastSim has become the most useful computation tool since it combines a low computational cost and enough precision for most of the typical railway dynamics problems. However, some types of dynamic problems require the use of a non-steady state analysis. Alonso and Giménez developed a non-stationary method based on FastSim, which provides both sufficiently accurate results and a low computational cost. However, it presents some limitations: the method was developed for a single time-dependent creepage, and its accuracy for varying normal forces has not been checked. This article presents the changes required to deal with both problems and compares its results with those given by Kalker's Variational Method for rolling contact.
Computer software to estimate timber harvesting system production, cost, and revenue
Dr. John E. Baumgras; Dr. Chris B. LeDoux
1992-01-01
Large variations in timber harvesting cost and revenue can result from the differences between harvesting systems, the variable attributes of harvesting sites and timber stands, or changing product markets. Consequently, system and site specific estimates of production rates and costs are required to improve estimates of harvesting revenue. This paper describes...
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
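SCOPE's internal model is not described in the abstract; the sketch below illustrates the general approach of fitting once-per-machine calibration data and then predicting CPU time from problem-size parameters. The feature set (number of equations n and mean bandwidth b), the linear form, and all numbers are assumptions made for illustration.

```python
import numpy as np

def fit_calibration(sizes, cpu_seconds):
    # Assumed model: CPU time ~ a + c1*n + c2*n*b^2 (roughly the cost of a
    # banded solve); fitted once per (machine, FE code) pair.
    X = np.array([[1.0, n, n * b * b] for n, b in sizes])
    coef, *_ = np.linalg.lstsq(X, np.array(cpu_seconds), rcond=None)
    return coef

def predict_cpu(coef, n, b):
    return coef @ np.array([1.0, n, n * b * b])

# Calibration runs (problem size, bandwidth) and measured times -- made up.
calib_sizes = [(100, 10), (400, 20), (1600, 40), (3200, 60)]
calib_times = [0.05, 0.9, 14.0, 66.0]
coef = fit_calibration(calib_sizes, calib_times)
print(f"predicted CPU time for n=6400, b=80: {predict_cpu(coef, 6400, 80):.1f} s")
```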
Gøthesen, Øystein; Slover, James; Havelin, Leif; Askildsen, Jan Erik; Malchau, Henrik; Furnes, Ove
2013-07-06
The use of Computer Assisted Surgery (CAS) for knee replacements is intended to improve the alignment of knee prostheses in order to reduce the number of revision operations. Is the cost effectiveness of computer assisted surgery influenced by patient volume and age? By employing a Markov model, we analysed the cost effectiveness of computer assisted surgery versus conventional arthroplasty with respect to implant survival and operation volume in two theoretical Norwegian age cohorts. We obtained mortality and hospital cost data over a 20-year period from Norwegian registers. We presumed that the cost of an intervention would need to be below NOK 500,000 per QALY (Quality Adjusted Life Year) gained to be considered cost effective. The added cost of computer assisted surgery, provided this has no impact on implant survival, is NOK 1037 and NOK 1414 respectively for 60 and 75-year-olds per quality-adjusted life year at a volume of 25 prostheses per year, and NOK 128 and NOK 175 respectively at a volume of 250 prostheses per year. Sensitivity analyses showed that the 10-year implant survival in cohort 1 needs to rise from 89.8% to 90.6% at 25 prostheses per year, and from 89.8% to 89.9% at 250 prostheses per year for computer assisted surgery to be considered cost effective. In cohort 2, the required improvement is a rise from 95.1% to 95.4% at 25 prostheses per year, and from 95.10% to 95.14% at 250 prostheses per year. The cost of using computer navigation for total knee replacements may be acceptable for 60-year-old as well as 75-year-old patients if the technique increases the implant survival rate just marginally, and the department has a high operation volume. A low volume department might not achieve cost-effectiveness unless computer navigation has a more significant impact on implant survival, and may therefore defer the investment until such data are available.
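The abstract reports incremental costs per QALY against a NOK 500,000 willingness-to-pay threshold. The sketch below shows the kind of incremental cost-effectiveness arithmetic involved; it collapses the study's Markov structure into a single ratio and uses placeholder numbers, not the study's inputs.

```python
# Hypothetical incremental cost-effectiveness check for CAS vs conventional
# knee arthroplasty. All numbers are placeholders, not the study's inputs.

def icer(added_cost_nok, qalys_gained):
    """Incremental cost-effectiveness ratio: extra NOK per QALY gained."""
    return added_cost_nok / qalys_gained

THRESHOLD_NOK_PER_QALY = 500_000   # willingness-to-pay threshold used in the abstract

# Added per-patient cost of navigation falls as the fixed equipment cost is
# spread over more operations per year (illustrative figures only).
equipment_cost_per_year = 250_000
extra_time_cost_per_op = 1_000
for volume in (25, 250):
    added_cost = equipment_cost_per_year / volume + extra_time_cost_per_op
    qalys = 0.05                   # assumed QALY gain from avoided revisions
    ratio = icer(added_cost, qalys)
    print(f"{volume:>3} ops/year: ICER = {ratio:,.0f} NOK/QALY, "
          f"cost-effective: {ratio <= THRESHOLD_NOK_PER_QALY}")
```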
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof as well as the existence of intermediate scales are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
GPUs: An Emerging Platform for General-Purpose Computation
2007-08-01
ERIC Educational Resources Information Center
Hart, Jeffrey A.
1985-01-01
Presents a discussion of how computer simulations are used in two undergraduate social science courses and a faculty computer literacy course on simulations and artificial intelligence. Includes a list of 60 simulations for use on mainframes and microcomputers. Entries include type of hardware required, publisher's address, and cost. Sample…
ERIC Educational Resources Information Center
Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.
2006-01-01
Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
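The brief does not give the cost function; the following is a compact simulated-annealing sketch that assigns elements to processors while minimizing a surrogate cost (load imbalance plus cut adjacencies), with both terms chosen here only for illustration.

```python
import math
import random

def anneal_mapping(n_elems, n_procs, edges, steps=20000, t0=1.0, cooling=0.9995):
    """Assign finite elements to processors with simulated annealing.

    Surrogate cost = max per-processor load (compute time) + number of
    element adjacencies split across processors (communication time).
    Both are stand-ins for the timing model a real mapper would use.
    """
    assign = [random.randrange(n_procs) for _ in range(n_elems)]

    def cost(a):
        load = [0] * n_procs
        for p in a:
            load[p] += 1
        cut = sum(1 for i, j in edges if a[i] != a[j])
        return max(load) + cut

    current = cost(assign)
    t = t0
    for _ in range(steps):
        e = random.randrange(n_elems)
        old = assign[e]
        assign[e] = random.randrange(n_procs)
        new = cost(assign)
        # Accept improvements always, degradations with Boltzmann probability.
        if new > current and random.random() >= math.exp((current - new) / t):
            assign[e] = old            # reject the move
        else:
            current = new
        t *= cooling
    return assign, current

# Toy mesh: a 1-D chain of 32 elements mapped onto 4 processors.
edges = [(i, i + 1) for i in range(31)]
mapping, final_cost = anneal_mapping(32, 4, edges)
```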
Angiuoli, Samuel V; White, James R; Matalka, Malcolm; White, Owen; Fricke, W Florian
2011-01-01
The widespread popularity of genomic applications is threatened by the "bioinformatics bottleneck" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers.
Angiuoli, Samuel V.; White, James R.; Matalka, Malcolm; White, Owen; Fricke, W. Florian
2011-01-01
Background The widespread popularity of genomic applications is threatened by the “bioinformatics bottleneck” resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. Results We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. Conclusions Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers. PMID:22028928
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Hop, Skip and Jump: Animation Software.
ERIC Educational Resources Information Center
Eiser, Leslie
1986-01-01
Discusses the features of animation software packages, reviewing eight commercially available programs. Information provided for each program includes name, publisher, current computer(s) required, cost, documentation, input device, import/export capabilities, printing possibilities, what users can originate, types of image manipulation possible,…
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.
Wide field imaging problems in radio astronomy
NASA Astrophysics Data System (ADS)
Cornwell, T. J.; Golap, K.; Bhatnagar, S.
2005-03-01
The new generation of synthesis radio telescopes now being proposed, designed, and constructed face substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.
20 CFR 645.230 - What general fiscal and administrative rules apply to the use of Federal funds?
Code of Federal Regulations, 2011 CFR
2011-04-01
... Acquisition Regulation (FAR) at 48 CFR Part 31. (d) Information technology costs. In addition to the allowable cost provisions identified in § 645.235 of this subpart, the costs of information technology—computer... compliant.” To meet this requirement, information technology must be able to accurately process date/time...
20 CFR 645.230 - What general fiscal and administrative rules apply to the use of Federal funds?
Code of Federal Regulations, 2014 CFR
2014-04-01
... Acquisition Regulation (FAR) at 48 CFR Part 31. (d) Information technology costs. In addition to the allowable cost provisions identified in § 645.235 of this subpart, the costs of information technology—computer... compliant.” To meet this requirement, information technology must be able to accurately process date/time...
20 CFR 645.230 - What general fiscal and administrative rules apply to the use of Federal funds?
Code of Federal Regulations, 2012 CFR
2012-04-01
... Acquisition Regulation (FAR) at 48 CFR Part 31. (d) Information technology costs. In addition to the allowable cost provisions identified in § 645.235 of this subpart, the costs of information technology—computer... compliant.” To meet this requirement, information technology must be able to accurately process date/time...
20 CFR 645.230 - What general fiscal and administrative rules apply to the use of Federal funds?
Code of Federal Regulations, 2013 CFR
2013-04-01
... Acquisition Regulation (FAR) at 48 CFR Part 31. (d) Information technology costs. In addition to the allowable cost provisions identified in § 645.235 of this subpart, the costs of information technology—computer... compliant.” To meet this requirement, information technology must be able to accurately process date/time...
20 CFR 645.230 - What general fiscal and administrative rules apply to the use of Federal funds?
Code of Federal Regulations, 2010 CFR
2010-04-01
... Acquisition Regulation (FAR) at 48 CFR Part 31. (d) Information technology costs. In addition to the allowable cost provisions identified in § 645.235 of this subpart, the costs of information technology—computer... compliant.” To meet this requirement, information technology must be able to accurately process date/time...
Produce yellow-poplar furniture dimension at minimum cost by using YELLOPOP
David G. Marten
1986-01-01
Describes a computer program called YELLOPOP that determines the least-cost combination of lumber grades required to produce a given cutting order of furniture dimension parts. If the least-cost mix is not available, YELLOPOP can be used to determine the next best alternative. The steps involved in using the program are also described.
Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive and time consuming. One of the main contributors to the high cost and lengthy time is the need to perform many large-scale hardware tests and the inability to integrate all appropriate subsystems early in the design process. The NASA Glenn Research Center is developing the technologies required to enable simulations of full aerospace propulsion systems in sufficient detail to resolve critical design issues early in the design process before hardware is built. This concept, called the Numerical Propulsion System Simulation (NPSS), is focused on the integration of multiple disciplines such as aerodynamics, structures and heat transfer with computing and communication technologies to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS, as illustrated, is to be a "numerical test cell" that enables full engine simulation overnight on cost-effective computing platforms. There are several key elements within NPSS that are required to achieve this capability: 1) clear data interfaces through the development and/or use of data exchange standards, 2) modular and flexible program construction through the use of object-oriented programming, 3) integrated multiple fidelity analysis (zooming) techniques that capture the appropriate physics at the appropriate fidelity for the engine systems, 4) multidisciplinary coupling techniques and finally 5) high performance parallel and distributed computing. The current state of development in these five areas focuses on air breathing gas turbine engines and is reported in this paper. However, many of the technologies are generic and can be readily applied to rocket based systems and combined cycles currently being considered for low-cost access-to-space applications. Recent accomplishments include: (1) the development of an industry-standard engine cycle analysis program and plug 'n play architecture, called NPSS Version 1, (2) a full engine simulation that combines a 3D low-pressure subsystem with a 0D high pressure core simulation. This demonstrates the ability to integrate analyses at different levels of detail and to aerodynamically couple components, the fan/booster and low-pressure turbine, through a 3D computational fluid dynamics simulation. (3) Simulation of all of the turbomachinery in a modern turbofan engine on a parallel computing platform for rapid and cost-effective execution. This capability can also be used to generate a full compressor map, requiring both design and off-design simulation. (4) Three levels of coupling characterize the multidisciplinary analysis under NPSS: loosely coupled, process coupled and tightly coupled. The loosely coupled and process coupled approaches require a common geometry definition to link CAD to analysis tools. The tightly coupled approach is currently validating the use of an arbitrary Lagrangian/Eulerian formulation for rotating turbomachinery. The validation includes both centrifugal and axial compression systems. The results of the validation will be reported in the paper. (5) The demonstration of significant computing cost/performance reduction for turbine engine applications using PC clusters. The NPSS Project is supported under the NASA High Performance Computing and Communications Program.
NASA Astrophysics Data System (ADS)
Miret, Josep M.; Sebé, Francesc
Low-cost devices are the key component of several applications: RFID tags permit automated supply chain management, while smart cards are a secure means of storing cryptographic keys required for remote and secure authentication in e-commerce and e-government applications. These devices must be cheap in order to permit their cost-effective massive manufacturing and deployment. Unfortunately, their low cost limits their computational power. Other devices such as nodes of sensor networks suffer from an additional constraint, namely, their limited battery life. Secure applications designed for these devices cannot make use of classical cryptographic primitives designed for full-fledged computers.
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware as a proof of principle.
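The scaling behaviour quoted in the abstract follows from per-hour billing of n parallel machines; below is a small sketch of that relationship, assuming the work splits evenly and each machine is billed per started hour.

```python
import math

def cloud_cost(total_cpu_hours, n_machines, price_per_machine_hour=1.0):
    """Wall-clock time and billed cost when a simulation needing
    `total_cpu_hours` of work is split evenly over n machines and each
    machine is billed per started hour (the usual cloud pricing model)."""
    wall_time = total_cpu_hours / n_machines              # ~ 1/n speedup
    billed = n_machines * math.ceil(wall_time) * price_per_machine_hour
    return wall_time, billed

total = 24  # total simulation time in hours on one machine (illustrative)
for n in (1, 3, 5, 8, 12, 24):
    t, cost = cloud_cost(total, n)
    print(f"n={n:2d}: wall time {t:5.2f} h, billed cost {cost:5.1f} "
          f"({'optimal' if total % n == 0 else 'rounding waste'})")
```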
Framework for Computer Assisted Instruction Courseware: A Case Study.
ERIC Educational Resources Information Center
Betlach, Judith A.
1987-01-01
Systematically investigates, defines, and organizes variables related to production of internally designed and implemented computer assisted instruction (CAI) courseware: special needs of users; costs; identification and definition of realistic training needs; CAI definition and design methodology; hardware and software requirements; and general…
One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has necessitated the need for high performance machines. Computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers can have the ability to multiply the performance of a single computer with the proper software. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount situation, gaining the space saving of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing computation time of advanced simulations. This study indicates that small-scale cluster can be built from off-the-shelf components which multiplies the performance of a single desktop machine, while minimizing occupied space and still remaining cost effective.
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
Software beamforming: comparison between a phased array and synthetic transmit aperture.
Li, Yen-Feng; Li, Pai-Chi
2011-04-01
The data-transfer and computation requirements are compared between software-based beamforming using a phased array (PA) and a synthetic transmit aperture (STA). The advantages of a software-based architecture are reduced system complexity and lower hardware cost. Although this architecture can be implemented using commercial CPUs or GPUs, the high computation and data-transfer requirements limit its real-time beamforming performance. In particular, transferring the raw RF data from the front-end subsystem to the software back-end remains challenging with current state-of-the-art electronics technologies, which offsets the cost advantage of the software back end. This study investigated the tradeoff between the data-transfer and computation requirements. Two beamforming methods based on a PA and STA, respectively, were used: the former requires a higher data transfer rate and the latter requires more memory operations. The beamformers were implemented on an NVIDIA GeForce GTX 260 GPU and an Intel Core i7 920 CPU. The frame rate of PA beamforming was 42 fps with a 128-element array transducer, with 2048 samples per firing and 189 beams per image (with a 95 MB/frame data-transfer requirement). The frame rate of STA beamforming was 40 fps with 16 firings per image (with an 8 MB/frame data-transfer requirement). Both approaches achieved real-time beamforming performance but each had its own bottleneck. On the one hand, the required data-transfer speed was considerably reduced in STA beamforming, whereas this required more memory operations, which limited the overall computation time. The advantages of the GPU approach over the CPU approach were clearly demonstrated.
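The per-frame transfer figures quoted above follow from simple channel-count arithmetic; the sketch below reproduces them under the assumption of 16-bit samples (the sample width is not stated in the abstract).

```python
def raw_frame_bytes(n_channels, n_samples, n_firings, bytes_per_sample=2):
    """Raw RF data that must cross the front-end/back-end link per frame."""
    return n_channels * n_samples * n_firings * bytes_per_sample

MB = 1024 ** 2
pa = raw_frame_bytes(n_channels=128, n_samples=2048, n_firings=189)   # phased array
sta = raw_frame_bytes(n_channels=128, n_samples=2048, n_firings=16)   # synthetic aperture
print(f"PA:  {pa / MB:5.1f} MB/frame")   # ~94.5 MB, consistent with the ~95 MB/frame figure
print(f"STA: {sta / MB:5.1f} MB/frame")  # 8.0 MB, consistent with the 8 MB/frame figure
```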
ERIC Educational Resources Information Center
Lippert, Henry T.; Harris, Edward V.
The diverse requirements for computing facilities in education place heavy demands upon available resources. Although multiple or very large computers can supply such diverse needs, their cost makes them impractical for many institutions. Small computers which serve a few specific needs may be an economical answer. However, to serve operationally…
Integrated analysis of engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1981-01-01
The need for light, durable, fuel efficient, cost effective aircraft requires the development of engine structures which are flexible, made from advanced materials (including composites), resist higher temperatures, maintain tighter clearances and have lower maintenance costs. The formal quantification of any or several of these requires integrated computer programs (multilevel and/or interdisciplinary analysis programs interconnected) for engine structural analysis/design. Several integrated analysis computer programs are under development at Lewis Research Center. These programs include: (1) COBSTRAN-Composite Blade Structural Analysis, (2) CODSTRAN-Composite Durability Structural Analysis, (3) CISTRAN-Composite Impact Structural Analysis, (4) STAEBL-Structural Tailoring of Engine Blades, and (5) ESMOSS-Engine Structures Modeling Software System. Three other related programs, developed under Lewis sponsorship, are described.
Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application
NASA Astrophysics Data System (ADS)
Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.
2013-12-01
The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on the Cloud and local system respectively, and the processing times for the analysis were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of: computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on the hourly rate, and the storage cost is calculated based on the rate of Gigabytes per month. Cost for incoming data transfer is free, and for data transfer out, the cost is based on the rate in Gigabytes. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating cost. The results showed that the Cloud platform had a 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi
2008-01-01
Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from respective heritage cost model predictions. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments recently changed rapidly by orders of magnitude. This is most obvious in the complexity of representative spaceflight instrument electronics' data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, which are traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments is developed for single-digit watt power consumption, small size and that it is light-weight, and delivers super-computing capabilities. The conflict between the actual development cost of newer complex instruments and its electronics components' heritage cost model predictions seems to be irreconcilable. This conflict and an approach to its resolution are addressed in this paper by determining the complexity parameters, complexity index, and their use in an enhanced cost model.
Low cost computer subsystem for the Solar Electric Propulsion Stage (SEPS)
NASA Technical Reports Server (NTRS)
1975-01-01
The Solar Electric Propulsion Stage (SEPS) subsystem, which consists of the computer, custom input/output (I/O) unit, and tape recorder for mass storage of telemetry data, was studied. Computer software and interface requirements were developed along with computer and I/O unit design parameters. Redundancy implementation was emphasized. Reliability analysis was performed for the complete command computer subsystem. A SEPS fault tolerant memory breadboard was constructed and its operation demonstrated.
Using a Cray Y-MP as an array processor for a RISC Workstation
NASA Technical Reports Server (NTRS)
Lamaster, Hugh; Rogallo, Sarah J.
1992-01-01
As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system to speed execution beyond that experienced on a workstation.
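Below is a back-of-envelope model of when shipping a matrix multiplication over RPC pays off; all rates are illustrative rather than measured on the systems described.

```python
def local_time(n, local_gflops):
    # Dense matmul needs roughly 2*n^3 floating-point operations.
    return 2 * n**3 / (local_gflops * 1e9)

def remote_time(n, remote_gflops, link_mbytes_per_s, rpc_latency_s=0.01):
    flops = 2 * n**3 / (remote_gflops * 1e9)
    # Two n x n input matrices out, one result matrix back (8-byte floats).
    transfer = 3 * n * n * 8 / (link_mbytes_per_s * 1e6)
    return rpc_latency_s + transfer + flops

# Illustrative rates: a workstation vs. a fast remote machine over a slow link.
for n in (128, 512, 2048):
    t_loc = local_time(n, local_gflops=0.1)
    t_rem = remote_time(n, remote_gflops=2.0, link_mbytes_per_s=1.0)
    print(f"n={n:4d}: local {t_loc:7.3f} s, remote {t_rem:7.3f} s, "
          f"offload pays off: {t_rem < t_loc}")
```

With these assumed rates the transfer cost dominates for small matrices and the offload only wins at large n, which is the amortization argument made in the abstract.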
NASA Astrophysics Data System (ADS)
Mack, Ian W.; Potts, Stephen; McMenemy, Karen R.; Ferguson, R. S.
2006-02-01
The laparoscopic technique for performing abdominal surgery requires a very high degree of skill in the medical practitioner. Much interest has been focused on using computer graphics to provide simulators for training surgeons. Unfortunately, these tend to be complex and have a very high cost, which limits availability and restricts the length of time over which individuals can practice their skills. With computer game technology able to provide the graphics required for a surgical simulator, the cost does not have to be high. However, graphics alone cannot serve as a training simulator. Human interface hardware, the equivalent of the force feedback joystick for a flight simulator game, is required to complete the system. This paper presents a design for a very low cost device to address this vital issue. The design encompasses: the mechanical construction, the electronic interfaces and the software protocols to mimic a laparoscopic surgical set-up. Thus the surgeon has the capability of practicing two-handed procedures with the possibility of force feedback. The force feedback and collision detection algorithms allow surgeons to practice realistic operating theatre procedures with a good degree of authenticity.
The Requirements Generation System: A tool for managing mission requirements
NASA Technical Reports Server (NTRS)
Sheppard, Sylvia B.
1994-01-01
Historically, NASA's cost for developing mission requirements has been a significant part of a mission's budget. Large amounts of time have been allocated in mission schedules for the development and review of requirements by the many groups who are associated with a mission. Additionally, tracing requirements from a current document to a parent document has been time-consuming and costly. The Requirements Generation System (RGS) is a computer-supported cooperative-work tool that assists mission developers in the online creation, review, editing, tracing, and approval of mission requirements as well as in the production of requirements documents. This paper describes the RGS and discusses some lessons learned during its development.
Artificial neural networks using complex numbers and phase encoded weights.
Michel, Howard E; Awwal, Abdul Ahad S
2010-04-01
The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
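The paper derives its own aggregation function, activation function, and learning rule; the sketch below is a generic phase-encoded complex perceptron assembled for illustration, not the authors' CVN formulation, and it is trained on a linearly separable function only to show the mechanics.

```python
import numpy as np

def phase_encode(x):
    """Map binary inputs {0, 1} to points on the unit circle (0 -> +1, 1 -> -1)."""
    return np.exp(1j * np.pi * np.asarray(x, dtype=float))

class ComplexPerceptron:
    """Toy phase-encoded perceptron with complex weights (illustrative only)."""
    def __init__(self, n_inputs, lr=0.2, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal(n_inputs) + 1j * rng.standard_normal(n_inputs)
        self.b = 0.0 + 0.0j
        self.lr = lr

    def forward(self, x):
        z = self.w @ phase_encode(x) + self.b   # complex-valued aggregation
        return 1 if z.real > 0 else 0           # activation: sign of the real part

    def train(self, X, y, epochs=100):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - self.forward(xi)     # perceptron-style error
                # Moving w along conj(x) raises Re(w . x) when err > 0.
                self.w += self.lr * err * np.conj(phase_encode(xi))
                self.b += self.lr * err

# Train on AND, a linearly separable function, purely to show the mechanics.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
p = ComplexPerceptron(2)
p.train(X, y)
print([p.forward(xi) for xi in X])   # typically converges to [0, 0, 0, 1]
```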
NASA Technical Reports Server (NTRS)
1975-01-01
The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
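The abstract describes offline page detection restricted to the code segment; the toy sketch below shows the underlying idea of grouping identical pages by hash so duplicates can be shared, with the page size and hash choice assumed for illustration.

```python
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096

def offline_dedup(code_segment: bytes):
    """Group identical pages of a (read-only) code segment offline.

    Returns a mapping {page_hash: [page indices]} plus the number of pages
    that could be shared instead of stored separately.
    """
    groups = defaultdict(list)
    for idx in range(0, len(code_segment), PAGE_SIZE):
        page = code_segment[idx:idx + PAGE_SIZE]
        groups[hashlib.sha256(page).hexdigest()].append(idx // PAGE_SIZE)
    total_pages = -(-len(code_segment) // PAGE_SIZE)       # ceiling division
    shareable = total_pages - len(groups)                  # duplicates collapse to one copy
    return groups, shareable

# Toy "code segment": 8 pages, of which only 3 are distinct.
segment = (b"A" * PAGE_SIZE * 3) + (b"B" * PAGE_SIZE * 2) + (b"C" * PAGE_SIZE * 3)
groups, saved = offline_dedup(segment)
print(f"{saved} of 8 pages can be deduplicated")   # 5 pages collapse onto 3 unique copies
```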
Static Memory Deduplication for Performance Optimization in Cloud Computing
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-01-01
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible. PMID:28448434
Permittivity and conductivity parameter estimations using full waveform inversion
NASA Astrophysics Data System (ADS)
Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.
2018-04-01
Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses Full Waveform Inversion (FWI) in the time domain of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on the computation of the full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
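The abstract names the three ingredients of FWI (a misfit cost function, a forward propagator, and an iteratively updated model); the sketch below shows the standard least-squares misfit and a generic descent loop, with a toy linear forward model standing in for the electromagnetic propagator. None of this is the authors' specific formulation.

```python
import numpy as np

def l2_misfit(d_obs, d_mod):
    """Standard least-squares FWI cost: 0.5 * ||d_mod - d_obs||^2."""
    r = d_mod - d_obs
    return 0.5 * np.dot(r, r)

def fwi_loop(d_obs, forward, grad, m0, step=1e-2, n_iter=50, tol=1e-8):
    """Generic descent loop: update the model until the misfit stops decreasing.
    `forward` stands in for the (expensive) full-wave propagator and `grad`
    for its adjoint-based gradient; both are placeholders here."""
    m = m0.copy()
    cost = l2_misfit(d_obs, forward(m))
    for _ in range(n_iter):
        m_new = m - step * grad(m, d_obs)
        cost_new = l2_misfit(d_obs, forward(m_new))
        if cost - cost_new < tol:
            break
        m, cost = m_new, cost_new
    return m, cost

# Toy linear 'propagator' G m so the sketch runs end to end.
rng = np.random.default_rng(1)
G = rng.standard_normal((40, 10))
m_true = rng.standard_normal(10)
d_obs = G @ m_true
forward = lambda m: G @ m
grad = lambda m, d: G.T @ (G @ m - d)      # gradient of the L2 misfit for a linear model
m_est, final_cost = fwi_loop(d_obs, forward, grad, np.zeros(10))
```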
Teaching Multimedia Data Protection through an International Online Competition
ERIC Educational Resources Information Center
Battisti, F.; Boato, G.; Carli, M.; Neri, A.
2011-01-01
Low-cost personal computers, wireless access technologies, the Internet, and computer-equipped classrooms allow the design of novel and complementary methodologies for teaching digital information security in electrical engineering curricula. The challenges of the current digital information era require experts who are effectively able to…
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations (a "bundle") replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
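The paper itself ships a sample data set and an R script; purely to restate the idea in code form, a hypothetical Python sketch of a bundled likelihood estimate follows (the Gaussian error kernel and all names are assumptions, not the authors' implementation).

import numpy as np

def bundled_likelihood(fm_outputs, measurements, sigma):
    """Estimate the likelihood of one bundle: the probability that its
    set of similar parametrizations replicates the field measurements.

    fm_outputs: (n_members, n_obs) forward-model results for the bundle.
    measurements: (n_obs,) field data."""
    resid = fm_outputs - measurements              # broadcast over members
    log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    # One pooled estimate per bundle replaces one estimate per
    # parametrization, which is where the FM-simulation savings arise.
    return float(np.mean(np.exp(log_like)))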
Massive Photons: An Infrared Regularization Scheme for Lattice QCD+QED.
Endres, Michael G; Shindler, Andrea; Tiburzi, Brian C; Walker-Loud, André
2016-08-12
Standard methods for including electromagnetic interactions in lattice quantum chromodynamics calculations result in power-law finite-volume corrections to physical quantities. Removing these by extrapolation requires costly computations at multiple volumes. We introduce a photon mass to alternatively regulate the infrared, and rely on effective field theory to remove its unphysical effects. Electromagnetic modifications to the hadron spectrum are reliably estimated with a precision and cost comparable to conventional approaches that utilize multiple larger volumes. A significant overall cost advantage emerges when accounting for ensemble generation. The proposed method may benefit lattice calculations involving multiple charged hadrons, as well as quantum many-body computations with long-range Coulomb interactions.
Space shuttle solid rocket booster cost-per-flight analysis technique
NASA Technical Reports Server (NTRS)
Forney, J. A.
1979-01-01
A cost per flight computer model is described which considers: traffic model, component attrition, hardware useful life, turnaround time for refurbishment, manufacturing rates, learning curves on the time to perform tasks, cost improvement curves on quantity hardware buys, inflation, spares philosophy, long-lead hardware funding requirements, and other logistics and scheduling constraints. Additional uses of the model include assessing the cost per flight impact of changing major space shuttle program parameters and searching for opportunities to make cost-effective management decisions.
Multi-tasking computer control of video related equipment
NASA Technical Reports Server (NTRS)
Molina, Rod; Gilbert, Bob
1989-01-01
The flexibility, cost-effectiveness, and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore-Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements, and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text, and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer at the heart of the system.
Pinkerton, Steven D.; Pearson, Cynthia R.; Eachus, Susan R.; Berg, Karina M.; Grimes, Richard M.
2008-01-01
Maximizing our economic investment in HIV prevention requires balancing the costs of candidate interventions against their effects and selecting the most cost-effective interventions for implementation. However, many HIV prevention intervention trials do not collect cost information, and those that do use a variety of cost data collection methods and analysis techniques. Standardized cost data collection procedures, instrumentation, and analysis techniques are needed to facilitate the task of assessing intervention costs and to ensure comparability across intervention trials. This article describes the basic elements of a standardized cost data collection and analysis protocol and outlines a computer-based approach to implementing this protocol. Ultimately, the development of such a protocol would require contributions and “buy-in” from a diverse range of stakeholders, including HIV prevention researchers, cost-effectiveness analysts, community collaborators, public health decision makers, and funding agencies.
Reduce dimension costs by using WALNUT
David G. Martens; David G. Martens
1986-01-01
A computer program called WALNUT is described that determines the least-cost combination of lumber grades required to produce a given cutting order of furniture dimension parts. If the least-cost mix is not available, WALNUT can be used to determine the next best alternative. The steps involved in using the program are described.
29 CFR 97.42 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: indirect cost rate computations or proposals, cost allocation plans, and any similar accounting... supporting records starts from end of the fiscal year (or other accounting period) covered by the proposal..., papers, or other records of grantees and subgrantees which are pertinent to the grant, in order to make...
A computer-oriented system for assembling and displaying land management information
Elliot L. Amidon
1964-01-01
Maps contain information basic to land management planning. By transforming conventional map symbols into numbers which are punched into cards, the land manager can have a computer assemble and display information required for a specific job. He can let a computer select information from several maps, combine it with such nonmap data as treatment cost or benefit per...
ERIC Educational Resources Information Center
Ekufu, ThankGod K.
2012-01-01
Organizations are finding it difficult in today's economy to implement the vast information technology infrastructure required to effectively conduct their business operations. Despite the fact that some of these organizations are leveraging the computational power and the cost-saving benefits of computing on the Internet cloud, others…
The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.
Ene, Florentina; Delassus, Patrick; Morris, Liam
2014-08-01
The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon.
Chimera grids in the simulation of three-dimensional flowfields in turbine-blade-coolant passages
NASA Technical Reports Server (NTRS)
Stephens, M. A.; Rimlinger, M. J.; Shih, T. I.-P.; Civinskas, K. C.
1993-01-01
When computing flows inside geometrically complex turbine-blade coolant passages, the structure of the grid system used can significantly affect the overall time and cost required to obtain solutions. This paper addresses this issue while evaluating and developing computational tools for the design and analysis of coolant passages, and is divided into two parts. In the first part, the various types of structured and unstructured grids are compared in relation to their ability to provide solutions in a timely and cost-effective manner. This comparison shows that the overlapping structured grids, known as Chimera grids, can rival and in some instances exceed the cost-effectiveness of unstructured grids in terms of both the man-hours needed to generate grids and the amount of computer memory and CPU time needed to obtain solutions. In the second part, a computational tool utilizing Chimera grids was used to compute the flow and heat transfer in two different turbine-blade coolant passages that contain baffles and numerous pin fins. These computations showed the versatility and flexibility offered by Chimera grids.
Spacelab Mission Implementation Cost Assessment (SMICA)
NASA Technical Reports Server (NTRS)
Guynes, B. V.
1984-01-01
A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center are revised; and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operation at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendations for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.
Repository Planning, Design, and Engineering: Part II-Equipment and Costing.
Baird, Phillip M; Gunter, Elaine W
2016-08-01
Part II of this article discusses and provides guidance on the equipment and systems necessary to operate a repository. The various types of storage equipment and monitoring and support systems are presented in detail. While the material focuses on the large repository, the requirements for a small-scale startup are also presented. Cost estimates and a cost model for establishing a repository are presented. The cost model presents an expected range of acquisition costs for the large capital items in developing a repository. A range of 5,000-7,000 ft² constructed has been assumed, with 50 frozen storage units, to reflect a successful operation with growth potential. No design or engineering costs, permit or regulatory costs, or smaller items such as the computers, software, furniture, phones, and barcode readers required for operations have been included.
Methods for evaluating and ranking transportation energy conservation programs
NASA Astrophysics Data System (ADS)
Santone, L. C.
1981-04-01
The energy conservation programs are assessed in terms of petroleum savings, incremental costs to consumers, probability of technical and market success, and external impacts due to environmental, economic, and social factors. Three ranking functions and a policy matrix are used to evaluate the programs. The net present value measure, which computes the present worth of petroleum savings less the present worth of costs, is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar. The comprehensive ranking function takes external impacts into account. Procedures are described for making computations of the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide program.
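As a worked illustration of the modified net-present-value ranking function described above (all cash flows and the discount rate are invented for the example, not taken from the report):

def present_value(cash_flows, rate):
    """Discount annual cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.10
pv_savings = present_value([0, 40e6, 55e6, 70e6], rate)  # petroleum savings, $
pv_costs = present_value([25e6, 10e6, 5e6, 0], rate)     # incremental consumer costs, $
pv_funding = present_value([15e6, 8e6, 0, 0], rate)      # DOE program funding, $

npv = pv_savings - pv_costs        # net present value measure
npv_per_dollar = npv / pv_funding  # ranking: NPV per program dollar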
Looking At Display Technologies
ERIC Educational Resources Information Center
Bull, Glen; Bull, Gina
2005-01-01
A projection system in a classroom with an Internet connection provides a window on the world. Until recently, projectors were expensive and difficult to maintain. Technological advances have resulted in solid-state projectors that require little maintenance and cost no more than a computer. Adding a second or third computer to a classroom…
ERIC Educational Resources Information Center
Miller, Fredrick
2007-01-01
This article discusses 10 "IT myths": (1) Standardizing on a single computing platform reduces costs and enables better support; (2) Requiring students to own a computer (or laptop) will improve the quality of education at an institution; (3) Having a campus subscription to a music download service will reduce the incidence of online copyright…
34 CFR 80.42 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., and any similar accounting computations of the rate at which a particular group of costs is chargeable... plan, or computation and its supporting records starts from end of the fiscal year (or other accounting... to any pertinent books, documents, papers, or other records of grantees and subgrantees which are...
20 CFR 435.53 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-04-01
.... 435.53 Section 435.53 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE..., statistical records, and all other records pertinent to an award must be retained for a period of three years... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
20 CFR 435.53 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-04-01
.... 435.53 Section 435.53 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE..., statistical records, and all other records pertinent to an award must be retained for a period of three years... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
20 CFR 435.53 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-04-01
.... 435.53 Section 435.53 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE..., statistical records, and all other records pertinent to an award must be retained for a period of three years... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
Advanced On-Board Processor (AOP). [for future spacecraft applications
NASA Technical Reports Server (NTRS)
1973-01-01
The Advanced On-board Processor (AOP) uses large scale integration throughout and is the most advanced space-qualified computer of its class in existence today. It was designed to satisfy most spacecraft requirements anticipated over the next several years. The AOP design utilizes custom metallized multigate arrays (CMMA) which have been designed specifically for this computer. This approach provides the most efficient use of circuits; reduces volume, weight, and assembly costs; and provides a significant increase in reliability through the large reduction in conventional circuit interconnections. The required 69 CMMA packages are assembled on a single multilayer printed circuit board which, together with associated connectors, constitutes the complete AOP. This approach also reduces conventional interconnections, thus further reducing weight, volume, and assembly costs.
Ding, Yan; Fei, Yang; Xu, Biao; Yang, Jun; Yan, Weirong; Diwan, Vinod K; Sauerborn, Rainer; Dong, Hengjin
2015-07-25
Studies into the costs of syndromic surveillance systems are rare, especially for estimating the direct costs involved in implementing and maintaining these systems. An Integrated Surveillance System in rural China (ISSC project), with the aim of providing an early warning system for outbreaks, was implemented; village clinics were the main surveillance units. Village doctors expressed their willingness to join in the surveillance if a proper subsidy was provided. This study aims to measure the costs of data collection by village clinics to provide a reference regarding the subsidy level required for village clinics to participate in data collection. We conducted a cross-sectional survey with a village clinic questionnaire and a staff questionnaire using a purposive sampling strategy. We tracked reported events using the ISSC internal database. Cost data included staff time, and the annual depreciation and opportunity costs of computers. We measured the village doctors' time costs for data collection by multiplying the number of full time employment equivalents devoted to the surveillance by the village doctors' annual salaries and benefits, which equaled their net incomes. We estimated the depreciation and opportunity costs of computers by calculating the equivalent annual computer cost and then allocating this to the surveillance based on the percentage usage. The estimated total annual cost of collecting data was 1,423 Chinese Renminbi (RMB) in 2012 (P25 = 857, P75 = 3284), including 1,250 RMB (P25 = 656, P75 = 3000) staff time costs and 134 RMB (P25 = 101, P75 = 335) depreciation and opportunity costs of computers. The total costs of collecting data from the village clinics for the syndromic surveillance system was calculated to be low compared with the individual net income in County A.
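The cost arithmetic described above (FTE share times annual income, plus a usage share of the computer's equivalent annual cost) can be restated as a short sketch; the inputs below are invented for illustration and only roughly reproduce the paper's reported magnitudes.

def annual_collection_cost(fte_share, annual_income,
                           computer_price, lifetime_years, rate, usage_share):
    """Annual data-collection cost for one village clinic, in RMB."""
    # Equivalent annual cost (annuity) of the computer purchase.
    eac = computer_price * rate / (1 - (1 + rate) ** -lifetime_years)
    return fte_share * annual_income + usage_share * eac

# Hypothetical inputs: 5% FTE, 25,000 RMB income, 4,000 RMB computer,
# 5-year life, 3% discount rate, 15% of computer time used for surveillance.
print(annual_collection_cost(0.05, 25_000, 4_000, 5, 0.03, 0.15))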
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, after which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do that, a QR decomposition algorithm is used, which can itself be executed at low cost; the computational complexity is therefore reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
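For orientation only, the conventional MV weight computation expressed through a QR factorization is sketched below; this avoids forming the matrix inverse explicitly but does not reproduce the paper's σI transformation or its O(L²) complexity, and the diagonal-loading constant is an assumption.

import numpy as np

def mv_weights(R, a, load=1e-2):
    """MV weights w = R^{-1} a / (a^H R^{-1} a) without an explicit inverse."""
    L = len(a)
    Rl = R + load * np.trace(R) / L * np.eye(L)  # diagonal loading
    Q, Rq = np.linalg.qr(Rl)                     # Rl = Q Rq, Rq upper triangular
    # Rq x = Q^H a is a triangular system; a dedicated triangular solver
    # would make this step O(L^2), the general solver is used for brevity.
    x = np.linalg.solve(Rq, Q.conj().T @ a)
    return x / (a.conj() @ x)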
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large scale distributed systems consist of workstations, mainframe computers, supercomputers and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.
Optimize Resources and Help Reduce Cost of Ownership with Dell[TM] Systems Management
ERIC Educational Resources Information Center
Technology & Learning, 2008
2008-01-01
Maintaining secure, convenient administration of the PC system environment can be a significant drain on resources. Deskside visits can greatly increase the cost of supporting a large number of computers. Even simple tasks, such as tracking inventory or updating software, quickly become expensive when they require physically visiting every…
Using Excel's Solver Function to Facilitate Reciprocal Service Department Cost Allocations
ERIC Educational Resources Information Center
Leese, Wallace R.
2013-01-01
The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated and theoretically incorrect direct or step-down methods. This article illustrates how Excel's Solver…
Using Excel's Matrix Operations to Facilitate Reciprocal Cost Allocations
ERIC Educational Resources Information Center
Leese, Wallace R.; Kizirian, Tim
2009-01-01
The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated direct or step-down methods. Here is a short example demonstrating how Excel's sometimes unknown matrix…
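Outside a spreadsheet, the same simultaneous equations can be solved in a few lines; the department costs and mutual service percentages below are invented for the example.

import numpy as np

# Two service departments whose full costs satisfy x = c + A @ x,
# i.e., each department's cost includes a share of the other's.
c = np.array([100_000.0, 60_000.0])    # direct costs of S1 and S2
A = np.array([[0.0, 0.2],              # S1 consumes 20% of S2's services
              [0.1, 0.0]])             # S2 consumes 10% of S1's services
x = np.linalg.solve(np.eye(2) - A, c)  # reciprocal (fully allocated) costs
# x is then allocated from the service to the producing departments.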
McLaren, D G; Buchanan, D S; Williams, J E
1987-10-01
A static, deterministic computer model, programmed in Microsoft Basic for IBM PC and Apple Macintosh computers, was developed to calculate production efficiency (cost per kg of product) for nine alternative types of crossbreeding system involving four breeds of swine. The model simulates efficiencies for four purebred and 60 alternative two-, three- and four-breed rotation, rotaterminal, backcross and static cross systems. Crossbreeding systems were defined as including all purebred, crossbred and commercial matings necessary to maintain a total of 10,000 farrowings. Driving variables for the model are mean conception rate at first service and for an 8-wk breeding season, litter size born, preweaning survival rate, postweaning average daily gain, feed-to-gain ratio and carcass backfat. Predictions are computed using breed direct genetic and maternal effects for the four breeds, plus individual, maternal and paternal specific heterosis values, input by the user. Inputs required to calculate the number of females farrowing in each sub-system include the proportion of males and females replaced each breeding cycle in purebred and crossbred populations, the proportion of male and female offspring in seedstock herds that become breeding animals, and the number of females per boar. Inputs required to calculate the efficiency of terminal production (cost-to-product ratio) for each sub-system include breeding herd feed intake, gilt development costs, feed costs and labor and overhead costs. Crossbreeding system efficiency is calculated as the weighted average of sub-system cost-to-product ratio values, weighting by the number of females farrowing in each sub-system.
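The final aggregation step lends itself to a one-line formula; the sketch below uses invented sub-system values, not outputs of the published model.

def system_efficiency(cost_product_ratios, females_farrowing):
    """Weighted average of sub-system cost-to-product ratios, weighted
    by the number of females farrowing in each sub-system."""
    total = sum(females_farrowing)
    return sum(r * f for r, f in zip(cost_product_ratios, females_farrowing)) / total

# Hypothetical three-sub-system crossbreeding system:
print(system_efficiency([1.42, 1.35, 1.30], [600, 1400, 8000]))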
Cost and schedule analytical techniques development
NASA Technical Reports Server (NTRS)
1994-01-01
This contract provided technical services and products to the Marshall Space Flight Center's Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) for the period of 3 Aug. 1991 - 30 Nov. 1994. Accomplishments summarized cover the REDSTAR data base, NASCOM hard copy data base, NASCOM automated data base, NASCOM cost model, complexity generators, program planning, schedules, NASA computer connectivity, other analytical techniques, and special project support.
Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.
O/S analysis of conceptual space vehicles. Part 1
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1995-01-01
The application of recently developed computer models in determining operational capabilities and support requirements during the conceptual design of proposed space systems is discussed. The models used are the reliability and maintainability (R&M) model, the maintenance simulation model, and the operations and support (O&S) cost model. In the process of applying these models, the R&M and O&S cost models were updated. The more significant enhancements include (1) improved R&M equations for the tank subsystems, (2) the ability to allocate schedule maintenance by subsystem, (3) redefined spares calculations, (4) computing a weighted average of the working days and mission days per month, (5) the use of a position manning factor, and (6) the incorporation into the O&S model of new formulas for computing depot and organizational recurring and nonrecurring training costs and documentation costs, and depot support equipment costs. The case study used is based upon a winged, single-stage, vertical-takeoff vehicle (SSV) designed to deliver to the Space Station Freedom (SSF) a 25,000 lb payload including passengers without a crew.
Microcomputer software development facilities
NASA Technical Reports Server (NTRS)
Gorman, J. S.; Mathiasen, C.
1980-01-01
A more efficient and cost effective method for developing microcomputer software is to utilize a host computer with high-speed peripheral support. Application programs such as cross assemblers, loaders, and simulators are implemented in the host computer for each of the microcomputers for which software development is a requirement. The host computer is configured to operate in a time share mode for multiusers. The remote terminals, printers, and down loading capabilities provided are based on user requirements. With this configuration a user, either local or remote, can use the host computer for microcomputer software development. Once the software is developed (through the code and modular debug stage) it can be downloaded to the development system or emulator in a test area where hardware/software integration functions can proceed. The microcomputer software program sources reside in the host computer and can be edited, assembled, loaded, and then downloaded as required until the software development project has been completed.
Thermodynamics of quasideterministic digital computers
NASA Astrophysics Data System (ADS)
Chu, Dominique
2018-02-01
A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1 and, additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls with a power of the system size, whereas the cost only increases linearly with the system size.
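In symbols, and as a hedged restatement only (the abstract does not state the exponent, so $k$ is a placeholder): the reported scalings are

$$ P_{\mathrm{err}}(V) \propto V^{-k}, \qquad \Sigma(V) \propto V, $$

where $V$ is the system size and $\Sigma$ the entropy production, so pushing the error below a tolerance $\epsilon$ costs $\Sigma \propto \epsilon^{-1/k}$, which remains finite for every $\epsilon > 0$, i.e., quasideterministic behavior at finite cost.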
The objective of this report was to determine whether transferring pregnant women from ships costs the Navy more permanent change of station (PCS) funds than transferring men and nonpregnant women. Information was extracted from the enlisted master record concerning gender ... from gender-integrated afloat units. The direct costs of transfer prior to PRD were compared for men and women, and an estimate of PCS costs, if the ships were not gender-integrated, was also calculated.
Study objectives: Will commercial avionics do the job? Improvements needed?
NASA Technical Reports Server (NTRS)
Nasr, Hatem
1992-01-01
Improvements in commercial avionics are covered in a viewgraph format. Topics include the following: computer architecture, user requirements, the Boeing 777 aircraft, cost effectiveness, and implementation.
The impact of supercomputers on experimentation: A view from a national laboratory
NASA Technical Reports Server (NTRS)
Peterson, V. L.; Arnold, J. O.
1985-01-01
The relative roles of large scale scientific computers and physical experiments in several science and engineering disciplines are discussed. Increasing dependence on computers is shown to be motivated both by the rapid growth in computer speed and memory, which permits accurate numerical simulation of complex physical phenomena, and by the rapid reduction in the cost of performing a calculation, which makes computation an increasingly attractive complement to experimentation. Computer speed and memory requirements are presented for selected areas of such disciplines as fluid dynamics, aerodynamics, aerothermodynamics, chemistry, atmospheric sciences, astronomy, and astrophysics, together with some examples of the complementary nature of computation and experiment. Finally, the impact of the emerging role of computers in the technical disciplines is discussed in terms of both the requirements for experimentation and the attainment of previously inaccessible information on physical processes.
Optimum Policy Regions for Computer-Directed Teaching Systems.
ERIC Educational Resources Information Center
Smallwood, Richard D.
The development of computer-directed instruction in which the learning protocol is tailored to each student on the basis of his learning history requires a means by which the many different trajectories open to a student can be resolved. Such an optimization procedure can be constructed to reduce the long and costly calculations associated with…
Management Information System for ESD Program Offices.
1978-03-01
Management Information System (MIS) functional requirements for the ESD Program Office are defined in terms of the Computer-Aided Design and Specification Tool. The development of the computer data base and a description of the MIS structure is included in the report. This report addresses management areas such as cost/budgeting, scheduling, tracking capabilities, and ECP
41 CFR 105-72.603 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... representatives, have the right of timely and unrestricted access to any books, documents, papers, or other... accounting computations of the rate at which a particular group of costs is chargeable (such as computer... records starts at the end of the fiscal year (or other accounting period) covered by the proposal, plan...
29 CFR 95.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., have the right of timely and unrestricted access to any books, documents, papers, or other records of... allocation plans, and any similar accounting computations of the rate at which a particular group of costs is... of the fiscal year (or other accounting period) covered by the proposal, plan, or other computation...
2000 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2001-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.
2001 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2002-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for the computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves the state-of-the-art accuracies, while with significantly lower computation cost and memory requirement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, U.A.; Baumle, B.; Kohler, P.
1992-10-01
Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation.
User participation in the development of the human/computer interface for control centers
NASA Technical Reports Server (NTRS)
Broome, Richard; Quick-Campbell, Marlene; Creegan, James; Dutilly, Robert
1996-01-01
Technological advances coupled with the requirements to reduce operations staffing costs led to the demand for efficient, technologically-sophisticated mission operations control centers. The control center under development for the earth observing system (EOS) is considered. The users are involved in the development of a control center in order to ensure that it is cost-efficient and flexible. A number of measures were implemented in the EOS program in order to encourage user involvement in the area of human-computer interface development. The following user participation exercises carried out in relation to the system analysis and design are described: the shadow participation of the programmers during a day of operations; the flight operations personnel interviews; and the analysis of the flight operations team tasks. The user participation in the interface prototype development, the prototype evaluation, and the system implementation are reported on. The involvement of the users early in the development process enables the requirements to be better understood and the cost to be reduced.
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Evans, K. J.
2017-12-01
Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a great deal of expectation estimations, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the estimation obtained from the standard MC. But compared to the standard MC, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with up to 600 days of cost savings when one processor is used.
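A bare-bones restatement of the MLMC expectation estimator that underlies such an analysis is given below; it is a textbook telescoping-sum sketch under assumed interfaces, not the authors' code.

import numpy as np

def mlmc_mean(samplers, n_samples):
    """Multilevel Monte Carlo estimate of E[P_L] via the telescoping sum
    E[P_0] + sum_l E[P_l - P_{l-1}].

    samplers[l](n) must return n coupled draws of P_l - P_{l-1}
    (with P_{-1} taken as 0); cheap coarse levels get most samples."""
    return sum(float(np.mean(samplers[l](n))) for l, n in enumerate(n_samples))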
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature in order to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
NASA Astrophysics Data System (ADS)
Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev
2002-04-01
We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M⊙, for LIGO-I noise, and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops, and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.
Two-step simulation of velocity and passive scalar mixing at high Schmidt number in turbulent jets
NASA Astrophysics Data System (ADS)
Rah, K. Jeff; Blanquart, Guillaume
2016-11-01
Simulation of a passive scalar in the high Schmidt number turbulent mixing process requires a higher computational cost than that of the velocity fields, because the scalar is associated with smaller length scales than the velocity. Thus, full simulation of both velocity and passive scalar at high Sc for a practical configuration is difficult to perform. In this work, a new approach to simulate velocity and passive scalar mixing at high Sc is suggested to reduce the computational cost. First, the velocity fields are resolved by Large Eddy Simulation (LES). Then, by extracting the velocity information from the LES, the scalar inside a moving fluid blob is simulated by Direct Numerical Simulation (DNS). This two-step simulation method is applied to a turbulent jet and provides a new way to examine a scalar mixing process in a practical application at smaller computational cost.
Low cost Ku-band earth terminals for voice/data/facsimile
NASA Technical Reports Server (NTRS)
Kelley, R. L.
1977-01-01
A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal from the antenna to microphones, speakers, and facsimile equipment is considered. Component cost versus performance has been projected as a function of the size of the procurement and predicted hardware innovations and production techniques through 1985. The lowest cost combinations of components have been determined in a computer optimization algorithm. The system requirements including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDEs. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program structure, often makes code re-use difficult, and increases software complexity.
49 CFR 7.44 - Services performed without charge or at a reduced charge.
Code of Federal Regulations, 2013 CFR
2013-10-01
... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...
49 CFR 7.44 - Services performed without charge or at a reduced charge.
Code of Federal Regulations, 2011 CFR
2011-10-01
... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...
49 CFR 7.44 - Services performed without charge or at a reduced charge.
Code of Federal Regulations, 2012 CFR
2012-10-01
... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...
NASA Technical Reports Server (NTRS)
Stricker, L. T.
1973-01-01
The DORCA Applications study has been directed at the development of a data bank management computer program identified as DORMAN. Because of the size of the DORCA data files and the manipulations required on that data to support analyses with the DORCA program, automated data techniques to replace time-consuming manual input generation are required. The Dynamic Operations Requirements and Cost Analysis (DORCA) program was developed for use by NASA in planning future space programs. Both programs are designed for implementation on the UNIVAC 1108 computing system. The purpose of this Executive Summary Report is to define for NASA management the basic functions of the DORMAN program and its capabilities.
Virtual aluminum castings: An industrial application of ICME
NASA Astrophysics Data System (ADS)
Allison, John; Li, Mei; Wolverton, C.; Su, Xuming
2006-11-01
The automotive product design and manufacturing community is continually besieged by Herculean engineering, timing, and cost challenges. Nowhere is this more evident than in the development of designs and manufacturing processes for cast aluminum engine blocks and cylinder heads. Increasing engine performance requirements coupled with stringent weight and packaging constraints are pushing aluminum alloys to the limits of their capabilities. To provide high-quality blocks and heads at the lowest possible cost, manufacturing process engineers are required to find increasingly innovative ways to cast and heat treat components. Additionally, to remain competitive, products and manufacturing methods must be developed and implemented in record time. To bridge the gaps between program needs and engineering reality, the use of robust computational models in up-front analysis will take on an increasingly important role. This article describes just such a computational approach, the Virtual Aluminum Castings methodology, which was developed and implemented at Ford Motor Company and demonstrates the feasibility and benefits of integrated computational materials engineering.
NASA Astrophysics Data System (ADS)
Hesford, Andrew J.; Waag, Robert C.
2010-10-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
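To illustrate only the FFT-convolution ingredient mentioned above (not the full FMM), a self-contained sketch follows; the grid size, spacing, and wavenumber are arbitrary, and the self-interaction term is simply excluded.

import numpy as np
from scipy.signal import fftconvolve

n, h = 32, 1e-3                      # grid points per side, spacing (assumed)
k = 2 * np.pi / 1.5e-3               # wavenumber (assumed)
src = np.random.default_rng(0).standard_normal((n, n, n))  # source amplitudes
x = h * np.arange(-n + 1, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
# Sampled 3-D Helmholtz Green's function; the r = 0 self term is set to zero.
g = np.where(r > 0, np.exp(1j * k * r) / (4 * np.pi * np.maximum(r, h)), 0)
field = fftconvolve(src, g, mode="valid")  # all pairwise sums in one FFT pass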
Variable gravity research facility
NASA Technical Reports Server (NTRS)
Allan, Sean; Ancheta, Stan; Beine, Donna; Cink, Brian; Eagon, Mark; Eckstein, Brett; Luhman, Dan; Mccowan, Daniel; Nations, James; Nordtvedt, Todd
1988-01-01
Spin and despin requirements; sequence of activities required to assemble the Variable Gravity Research Facility (VGRF); power systems technology; life support; thermal control systems; emergencies; communication systems; space station applications; experimental activities; computer modeling and simulation of tether vibration; cost analysis; configuration of the crew compartments; and tether lengths and rotation speeds are discussed.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Hariharan, A.; Pandurangan, B.
2012-06-01
The introduction of newer joining technologies like so-called friction-stir welding (FSW) into automotive engineering entails knowledge of the joint-material microstructure and properties. Since the development of vehicles (including military vehicles capable of surviving blast and ballistic impacts) nowadays involves extensive use of computational engineering analyses (CEA), robust high-fidelity material models are needed for the FSW joints. A two-level material-homogenization procedure is proposed and utilized in this study to help manage computational cost and computer storage requirements for such CEAs. The method utilizes experimental (microstructure, microhardness, tensile testing, and x-ray diffraction) data to construct: (a) the material model for each weld zone and (b) the material model for the entire weld. The procedure is validated by comparing its predictions with the predictions of more detailed but more costly computational analyses.
Assessing the use of computers in industrial occupational health departments.
Owen, J P
1995-04-01
Computers are widely used in business and industry and the benefits of computerizing occupational health (OH) departments have been advocated by several authors. The requirements for successful computerization of an OH department are reviewed. Having identified the theoretical benefits, the real picture in industry is assessed by surveying 52 firms with over 1000 employees in a large urban area. Only 15 (29%) of the companies reported having any OH service, of which six used computers in the OH department, reflecting the business priorities of most of the companies. The types of software systems used and their main use are examined, along with perceived benefits or disadvantages. With the decreasing costs of computers and increasingly 'user-friendly' software, there is a real cost benefit to be gained from using computers in OH departments, although the concept may have to be 'sold' to management.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.
2010-11-01
A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness is defined using the cost of energy (CoE), i.e., the ratio of the three-blade HAWT rotor development/fabrication cost to the associated annual energy production. To assess the annual energy production as a function of blade design and operating conditions, an aerodynamics-based computational analysis was employed. The turbine-blade cost, in turn, is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, a structural finite-element analysis and a post-processing life-cycle assessment analysis were employed to determine the minimal blade mass that satisfies the functional requirements pertaining to quasi-static strength, fatigue-controlled durability, and stiffness of the blade. To determine the turbine-blade production cost (for the currently prevailing fabrication process, wet lay-up), available data regarding industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability, and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a low production cost.
Use of medical information by computer networks raises major concerns about privacy.
O'Reilly, M
1995-01-01
The development of computer databases and long-distance computer networks is leading to improvements in Canada's health care system. However, these developments come at a cost and require a balancing act between access and confidentiality. Columnist Michael O'Reilly, who in this article explores the security of computer networks, notes that respect for patients' privacy must be given as high a priority as the ability to see their records in the first place. PMID:7600474
Using a cloud to replenish parched groundwater modeling efforts.
Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B
2010-01-01
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
Visualization of unsteady computational fluid dynamics
NASA Astrophysics Data System (ADS)
Haimes, Robert
1994-11-01
A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) can also provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computer (RISC) workstations are a recent development driven by the low cost and high performance that workstation vendors provide; with the proper software, such a cluster can act as a multiple-instruction/multiple-data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.
Penny S. Lawson; R. Edward Thomas; Elizabeth S Walker
1996-01-01
OPTIGRAMI V2 is a computer program available for IBM personal computers with 80286 and higher processors. OPTIGRAMI V2 determines the least-cost lumber grade mix required to produce a given cutting order for clear parts from rough lumber of known grades in a crosscut-first rough mill operation. It is a user-friendly integrated application that includes optimization...
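The least-cost grade-mix computation is, at its core, a small linear program. Below is a hedged sketch of that formulation using scipy; the grades, prices, yield coefficients, and order quantities are invented for illustration and are not OPTIGRAMI's data or method.

```python
# Hypothetical least-cost grade-mix LP: choose board-feet x_g of each lumber
# grade to minimize purchase cost while expected part yields meet the order.
from scipy.optimize import linprog

cost = [0.95, 0.70, 0.50]            # $ per board-foot: FAS, 1 Common, 2 Common
# yields[i][g]: board-feet of part size i recovered per board-foot of grade g
yields = [[0.55, 0.40, 0.22],        # long clear cuttings
          [0.25, 0.30, 0.33]]        # short clear cuttings
order = [400.0, 300.0]               # board-feet of each part size required

# linprog minimizes c @ x subject to A_ub @ x <= b_ub; flip the signs to
# express "yield must meet or exceed the order".
res = linprog(c=cost,
              A_ub=[[-y for y in row] for row in yields],
              b_ub=[-r for r in order],
              bounds=[(0, None)] * 3)
print(res.x, res.fun)                # grade mix (board-feet) and total cost
```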
Use of cloud computing in biomedicine.
Sobeslav, Vladimir; Maresova, Petra; Krejcar, Ondrej; Franca, Tanos C C; Kuca, Kamil
2016-12-01
Nowadays, biomedicine is characterised by a growing need for processing large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings and the elasticity and scalability of ICT use. The aim of this paper is to explore the concept of cloud computing and its use in biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform, and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, aspects of the high-end computing paradigm of cloud computing are analysed. Finally, the potential and current use of this technology in biomedical scientific research is discussed.
The UCLA MEDLARS Computer System *
Garvis, Francis J.
1966-01-01
Under a subcontract with UCLA the Planning Research Corporation has changed the MEDLARS system to make it possible to use the IBM 7094/7040 direct-couple computer instead of the Honeywell 800 for demand searches. The major tasks were the rewriting of the programs in COBOL and copying of the stored information on the narrower tapes that IBM computers require. (In the future NLM will copy the tapes for IBM computer users.) The differences in the software required by the two computers are noted. Major and costly revisions would be needed to adapt the large MEDLARS system to the smaller IBM 1401 and 1410 computers. In general, MEDLARS is transferable to other computers of the IBM 7000 class, the new IBM 360, and those of like size, such as the CDC 1604 or UNIVAC 1108, although additional changes are necessary. Potential future improvements are suggested. PMID:5901355
Interesting viewpoints to those who will put Ada into practice
NASA Technical Reports Server (NTRS)
Carlsson, Arne
1986-01-01
Ada will most probably be used as the programming language for computers in the NASA Space Station. It is reasonable to suppose that Ada will be used at least for embedded computers, because the high software costs for these embedded computers were the reason why Ada activities were initiated about ten years ago. The on-board computers are designed for use in space applications, where maintenance by man is impossible. All manipulation of such computers has to be performed in an autonomous way or remotely with commands from the ground. In a manned Space Station some maintenance work can be performed by service people on board, but there are still many applications which require autonomous computers, for example, vital Space Station functions and unmanned orbital transfer vehicles. Those aspects which have come out of the analysis of Ada characteristics together with the experience of requirements for embedded on-board computers in space applications are examined.
Password Cracking Using Sony Playstations
NASA Astrophysics Data System (ADS)
Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet
Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
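A minimal sketch of the work-partitioning idea behind such a distributed framework: an indexable candidate space split into contiguous chunks that independent nodes search in parallel. The alphabet, candidate length, and chunk() helper below are hypothetical, and the actual key-testing kernel is omitted.

```python
# Partition a fixed-length candidate space across parallel worker nodes.
import string

ALPHABET = string.ascii_lowercase
LENGTH = 5
TOTAL = len(ALPHABET) ** LENGTH

def candidate(index):
    """Map an integer index to a fixed-length candidate string."""
    chars = []
    for _ in range(LENGTH):
        index, r = divmod(index, len(ALPHABET))
        chars.append(ALPHABET[r])
    return ''.join(chars)

def chunk(node_id, num_nodes):
    """Contiguous index range assigned to one node."""
    per_node = -(-TOTAL // num_nodes)          # ceiling division
    start = node_id * per_node
    return range(start, min(start + per_node, TOTAL))

# node 3 of 16 would scan this slice of the space:
r = chunk(3, 16)
print(candidate(r.start), candidate(r.stop - 1), len(r))
```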
Real-time control system for adaptive resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flath, L; An, J; Brase, J
2000-07-24
Sustained operation of high average power solid-state lasers currently requires an adaptive resonator to produce optimal beam quality. We describe the architecture of a real-time adaptive control system for correcting intra-cavity aberrations in a heat capacity laser. Image data collected from a wavefront sensor are processed and used to control phase with a high-spatial-resolution deformable mirror. Our controller takes advantage of recent developments in low-cost, high-performance processor technology: a desktop-based computational engine and an object-oriented software architecture replace the high-cost rack-mount embedded computers of previous systems.
Evolution of the INMARSAT aeronautical system: Service, system, and business considerations
NASA Technical Reports Server (NTRS)
Sengupta, Jay R.
1995-01-01
A market-driven approach was adopted to develop enhancements to the Inmarsat-Aeronautical system to address the requirements of potential new market segments. An evolutionary approach and a well differentiated product/service portfolio were required to minimize system upgrade costs and to maximize market penetration, respectively. The evolved system definition serves to minimize equipment cost/size/mass for short/medium range aircraft by reducing the antenna gain requirement and relaxing the performance requirements for non safety-related communications. A validation program involving simulation, laboratory tests, over-satellite tests, and flight trials is being conducted to confirm the system definition. Extensive market research has been conducted to determine user requirements and to quantify market demand for the future Inmarsat Aero-1 AES, using sophisticated computer-assisted survey techniques.
Addressing the computational cost of large EIT solutions.
Boyle, Alistair; Borsic, Andrea; Adler, Andy
2012-05-01
Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
Transportation Planning for Your Community
DOT National Transportation Integrated Search
2000-12-01
The Highway Economic Requirements System (HERS) is a computer model designed to simulate improvement selection decisions based on the relative benefit-cost merits of alternative improvement options. HERS is intended to estimate national level investm...
ERIC Educational Resources Information Center
Classroom Computer Learning, 1990
1990-01-01
Reviewed are three computer software packages including "Martin Luther King, Jr.: Instant Replay of History,""Weeds to Trees," and "The New Print Shop, School Edition." Discussed are hardware requirements, costs, grade levels, availability, emphasis, strengths, and weaknesses. (CW)
Implementation of Cloud based next generation sequencing data analysis in a clinical laboratory.
Onsongo, Getiria; Erdmann, Jesse; Spears, Michael D; Chilton, John; Beckman, Kenneth B; Hauge, Adam; Yohe, Sophia; Schomaker, Matthew; Bower, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat
2014-05-23
The introduction of next generation sequencing (NGS) has revolutionized molecular diagnostics, though several challenges remain that limit the widespread adoption of NGS testing in clinical practice. One such difficulty is the development of a robust bioinformatics pipeline that can handle the volume of data generated by high-throughput sequencing in a cost-effective manner. Analysis of sequencing data typically requires a substantial level of computing power that is often cost-prohibitive for most clinical diagnostics laboratories. To address this challenge, our institution has developed a Galaxy-based data analysis pipeline which relies on a web-based, cloud-computing infrastructure to process NGS data and identify genetic variants. It provides the additional flexibility needed to control storage costs, resulting in a pipeline that is cost-effective on a per-sample basis; for example, it does not require an EBS disk to run a sample. We demonstrate the validation and feasibility of implementing this bioinformatics pipeline in a molecular diagnostics laboratory. Four samples were analyzed in duplicate pairs and showed 100% concordance in the mutations identified. This pipeline is currently being used in the clinic, and all identified pathogenic variants were confirmed using Sanger sequencing, further validating the software.
Comparative analysis of economic models in selected solar energy computer programs
NASA Astrophysics Data System (ADS)
Powell, J. W.; Barnes, K. A.
1982-01-01
The economic evaluation models in five computer programs widely used for analyzing solar energy systems (F-CHART 3.0, F-CHART 4.0, SOLCOST, BLAST, and DOE-2) are compared. Differences in analysis techniques and assumptions among the programs are assessed from the point of view of consistency with the Federal requirements for life cycle costing (10 CFR Part 436), effect on predicted economic performance and optimal system size, ease of use, and general applicability to diverse system types and building types. The FEDSOL program, developed by the National Bureau of Standards specifically to meet the Federal life cycle cost requirements, serves as a basis for the comparison. Results of the study are illustrated in test cases of two different types of Federally owned buildings: a single family residence and a low rise office building.
Automation for Air Traffic Control: The Rise of a New Discipline
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Tobias, Leonard (Technical Monitor)
1997-01-01
The current debate over the concept of Free Flight has renewed interest in automated conflict detection and resolution in the enroute airspace. An essential requirement for effective conflict detection is accurate prediction of trajectories. Trajectory prediction is, however, an inexact process which accumulates errors that grow in proportion to the length of the prediction time interval. Using a model of prediction errors for the trajectory predictor incorporated in the Center-TRACON Automation System (CTAS), a computationally fast algorithm for computing conflict probability has been derived. Furthermore, a method of conflict resolution has been formulated that minimizes the average cost of resolution, where cost is defined as the increment in airline operating costs incurred in flying the resolution maneuver. The method optimizes the trade-off between early resolution, with lower maneuver costs but higher prediction error, and late resolution, with higher maneuver costs but lower prediction errors. The method determines both the time to initiate the resolution maneuver and the characteristics of the resolution trajectory so as to minimize the cost of the resolution. Several computational examples relevant to the design of a conflict probe that can support user-preferred trajectories in the enroute airspace are presented.
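The early-versus-late trade-off can be illustrated with a toy cost model: a maneuver-cost term that grows as resolution is delayed and an error-cost term that grows with prediction horizon. All functional forms and constants below are invented; this is not the CTAS algorithm.

```python
# Toy model of resolution timing: minimize maneuver cost + expected error cost.
import numpy as np

t = np.linspace(1.0, 20.0, 400)          # minutes before predicted conflict
maneuver_cost = 120.0 / t                # late turns are sharper and costlier
sigma = 0.4 * t                          # prediction error grows with horizon
error_cost = 500.0 * (1 - np.exp(-sigma**2 / 8.0))  # cost of acting on bad data

total = maneuver_cost + error_cost
t_star = t[np.argmin(total)]
print(f"minimum expected cost at ~{t_star:.1f} min before conflict")
```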
2014-01-01
Background: Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results: We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions: An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
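For reference, the screened Coulomb (Debye-Hückel) form underlying such a long-range correction is phi(r) = q exp(-kappa r) / (4 pi eps0 eps_r r). A small sketch with rounded constants and an assumed net charge follows; this is an illustration of the functional form, not the SDA implementation.

```python
import numpy as np

EPS0 = 8.854e-12            # vacuum permittivity, F/m
E_CHARGE = 1.602e-19        # elementary charge, C

def debye_huckel_potential(q_e, r, kappa, eps_rel=78.5):
    """Screened Coulomb potential (volts) of a net charge q_e (units of e)
    at distance r (m), with inverse Debye length kappa (1/m)."""
    q = q_e * E_CHARGE
    return q * np.exp(-kappa * r) / (4 * np.pi * EPS0 * eps_rel * r)

kappa = 1.0 / 0.96e-9       # Debye length ~0.96 nm near 100 mM ionic strength
r = np.array([2e-9, 4e-9, 8e-9])
print(debye_huckel_potential(-6, r, kappa))   # assumed BSA-like net charge
```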
Parallel conjugate gradient algorithms for manipulator dynamic simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheld, Robert E.
1989-01-01
Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. Conjugate gradient algorithms are presented that provide greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
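For concreteness, here is a minimal serial Jacobi-preconditioned conjugate gradient, the textbook analogue of the diagonal-preconditioner PCG variant described above; the parallel O(log2 n) formulation and the RMP-specific variants are not reproduced. The test matrix is an invented SPD stand-in for a mass matrix.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner."""
    n = len(b)
    max_iter = max_iter or 10 * n
    M_inv = 1.0 / np.diag(A)           # preconditioner: diagonal of A
    x = np.zeros(n)
    r = b.copy()
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((7, 7))
A = Q @ Q.T + 7 * np.eye(7)            # SPD stand-in for a mass matrix
b = rng.standard_normal(7)
print(np.allclose(pcg_jacobi(A, b), np.linalg.solve(A, b)))  # True
```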
NASA Technical Reports Server (NTRS)
Stricker, L. T.
1975-01-01
The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. The results indicate that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.
2016-12-01
The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are being used in NRT to improve forecast models and aircraft icing warnings and to support aircraft field campaigns. Next generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution: the volume of data is expected to increase approximately tenfold. This increase in data volume will require additional IT resources to keep up with the processing demands and satisfy the NRT requirements, and these resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Clouds and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data is then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF Concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three, time-phased, generic WBS's. Input consists of a list of similar components for which cost data exist, number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous data (travel, off-site assignments). Output is program cost in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
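The diminishing-returns behavior described above can be reproduced with a toy experiment: fit a model on progressively larger calibration samples and watch the error curve flatten. The sketch below substitutes a simple kernel-regression estimator for the PNN and uses synthetic data; only the shape of the cost-benefit curve is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=2000)
y = np.sin(X) + 0.1 * rng.standard_normal(2000)

def fit_and_score(n_train, bandwidth=0.3):
    """Calibrate on n_train sampled points; return RMS error on a test grid."""
    idx = rng.choice(len(X), size=n_train, replace=False)
    Xt, yt = X[idx], y[idx]
    grid = np.linspace(-3, 3, 200)
    # Nadaraya-Watson kernel regression as a stand-in for the PNN
    w = np.exp(-0.5 * ((grid[:, None] - Xt[None, :]) / bandwidth) ** 2)
    pred = (w @ yt) / w.sum(axis=1)
    return np.sqrt(np.mean((pred - np.sin(grid)) ** 2))

for n in (25, 100, 400, 1600):
    print(n, round(fit_and_score(n), 4))   # error shrinks, then flattens
```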
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present day missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.
Design of transonic airfoil sections using a similarity theory
NASA Technical Reports Server (NTRS)
Nixon, D.
1978-01-01
A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.
Spacelab user implementation assessment study. Volume 4: SUIAS appendixes
NASA Technical Reports Server (NTRS)
1975-01-01
The capital investment for the integration and checkout of Spacelab payloads is assessed. Detailed data pertaining to this assessment and a computer cost model utilized in the compilation of programmatic resource requirements are delineated.
Model implementation for dynamic computation of system cost for advanced life support
NASA Technical Reports Server (NTRS)
Levri, J. A.; Vaccari, D. A.
2004-01-01
Life support system designs for long-duration space missions have a multitude of requirements drivers, such as mission objectives, political considerations, cost, crew wellness, inherent mission attributes, as well as many other influences. Evaluation of requirements satisfaction can be difficult, particularly at an early stage of mission design. Because launch cost is a critical factor and relatively easy to quantify, it is a point of focus in early mission design. The method used to determine launch cost influences the accuracy of the estimate. This paper discusses the appropriateness of dynamic mission simulation in estimating the launch cost of a life support system. This paper also provides an abbreviated example of a dynamic simulation life support model and possible ways in which such a model might be utilized for design improvement.
Dynamic VM Provisioning for TORQUE in a Cloud Environment
NASA Astrophysics Data System (ADS)
Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.
2014-06-01
Cloud computing, in the form of Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
Technologies for space station autonomy
NASA Technical Reports Server (NTRS)
Staehle, R. L.
1984-01-01
This report presents an informal survey of experts in the field of spacecraft automation, with recommendations for which technologies should be given the greatest development attention for implementation on the initial 1990's NASA Space Station. The recommendations implemented an autonomy philosophy that was developed by the Concept Development Group's Autonomy Working Group during 1983. They were based on assessments of the technologies' likely maturity by 1987, and of their impact on recurring costs, non-recurring costs, and productivity. The three technology areas recommended for programmatic emphasis were: (1) artificial intelligence expert (knowledge based) systems and processors; (2) fault tolerant computing; and (3) high order (procedure oriented) computer languages. This report also describes other elements required for Station autonomy, including technologies for later implementation, system evolvability, and management attitudes and goals. The cost impact of various technologies is treated qualitatively, and some cases in which both the recurring and nonrecurring costs might be reduced while the crew productivity is increased, are also considered. Strong programmatic emphasis on life cycle cost and productivity is recommended.
A validated methodology for determination of laboratory instrument computer interface efficacy
NASA Astrophysics Data System (ADS)
1984-12-01
This report is intended to provide a methodology for determining when, and for which instruments, direct interfacing of laboratory instruments and laboratory computers is beneficial. This methodology has been developed to assist the Tri-Service Medical Information Systems Program Office in making future decisions regarding laboratory instrument interfaces. We have calculated the time savings required to reach a break-even point for a range of instrument interface prices and corresponding average annual costs. The break-even analyses used empirical data to estimate the number of data points run per day that are required to meet the break-even point. The results indicate, for example, that at a purchase price of $3,000, an instrument interface will be cost-effective if the instrument is utilized for at least 154 data points per day if operated in the continuous mode, or 216 points per day if operated in the discrete mode. Although this model can help to ensure that instrument interfaces are cost-effective, additional information should be considered in making interface decisions. A reduction in results transcription errors may be a major benefit of instrument interfacing.
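A hedged reconstruction of the break-even arithmetic: annualize the interface price and divide by the daily labor savings per data point. The amortization period and per-point savings below are invented, chosen so the result reproduces the 154 points-per-day continuous-mode figure quoted above; the report's actual cost parameters are not given here.

```python
# Break-even data points per day for a laboratory instrument interface.
PRICE = 3000.00                 # interface purchase price, $
YEARS = 5                       # assumed amortization period
DAYS_PER_YEAR = 250             # assumed working days per year
SAVINGS_PER_POINT = 0.0156      # assumed $ of labor saved per data point

annual_cost = PRICE / YEARS
breakeven_points_per_day = annual_cost / (SAVINGS_PER_POINT * DAYS_PER_YEAR)
print(round(breakeven_points_per_day))   # ~154 points/day under these inputs
```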
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The paper describes the computational techniques employed to determine the optimal propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into the process of selecting the optimum combinations of airplanes and engines are examined. Attention is given to the description of the computer codes, including NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated as to fuel consumption, engine weight, cost, and installation effects. Examples are shown of the benefits of variable geometry and of the tradeoff between fuel burned and engine weight. Future plans for further improvements in the analytical modeling of engine systems are also described.
Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis
NASA Technical Reports Server (NTRS)
Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.
1997-01-01
While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such a reduction in cost is particularly necessary for future generations of multi-dimensional models of supernovae.
Computer program to perform cost and weight analysis of transport aircraft. Volume 1: Summary
NASA Technical Reports Server (NTRS)
1973-01-01
A digital computer program for evaluating the weight and costs of advanced transport designs was developed. The resultant program, intended for use at the preliminary design level, incorporates both batch mode and interactive graphics run capability. The basis of the weight and cost estimation method developed is a unique way of predicting the physical design of each detail part of a vehicle structure at a time when only configuration concept drawings are available. In addition, the technique relies on methods to predict the precise manufacturing processes and the associated material required to produce each detail part. Weight data are generated in four areas of the program. Overall vehicle system weights are derived on a statistical basis as part of the vehicle sizing process. Theoretical weights, actual weights, and the weight of the raw material to be purchased are derived as part of the structural synthesis and part definition processes based on the computed part geometry.
A Mass Computation Model for Lightweight Brayton Cycle Regenerator Heat Exchangers
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2010-01-01
Based on a theoretical analysis of convective heat transfer across large internal surface areas, this paper discusses the design implications of packaging such areas into compact three-dimensional shapes to generate lightweight gas-gas heat exchanger designs. Allowances are made for hot and cold inlet and outlet headers for assembly of completed regenerator (or recuperator) heat exchanger units into closed cycle gas turbine flow ducting. Surface area and the resulting volume and mass requirements are computed for a range of heat exchanger effectiveness values and internal heat transfer coefficients. Benefit-cost curves show the effect of increasing heat exchanger effectiveness on Brayton cycle thermodynamic efficiency on the plus side, while also illustrating the cost in required heat exchanger surface area, volume, and mass as effectiveness is increased. The equations derived for counterflow and crossflow configurations show that as effectiveness approaches unity, or 100 percent, the required surface area, and hence heat exchanger volume and mass, tend toward infinity, since the implication is that heat is transferred at a zero temperature difference. To verify the dimensional accuracy of the regenerator mass computational procedure, calculation of a regenerator specific mass, that is, heat exchanger weight per unit working fluid mass flow, is performed in both English and SI units. Identical numerical values for the specific mass parameter, whether expressed in lb/(lb/sec) or kg/(kg/sec), show the dimensional consistency of the overall results.
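The divergence of required area as effectiveness approaches unity follows from the standard balanced-counterflow relation eps = NTU/(1 + NTU), i.e. NTU = eps/(1 - eps), with area A = NTU * C_min / U. A small sketch with illustrative (assumed) values of U and C_min:

```python
# Required regenerator area versus effectiveness (balanced counterflow).
U = 60.0       # overall heat-transfer coefficient, W/(m^2 K), assumed
C_MIN = 5.0e4  # minimum capacity rate, W/K, assumed

for eps in (0.80, 0.90, 0.95, 0.99):
    ntu = eps / (1.0 - eps)            # NTU diverges as eps -> 1
    area = ntu * C_MIN / U
    print(f"eps={eps:.2f}  NTU={ntu:6.1f}  A={area:10.0f} m^2")
```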
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... discretion. MSHA is required to perform mathematical computations based on published cost-of-living data and... altering the budgetary impact of entitlements or the rights of entitlement recipients, or raising novel...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
... computer technology that permitted more efficient, and increasingly paperless, distribution of proxy... requires ongoing technology support, services and maintenance, and is a significant part of the total cost...
Power monitoring and control for large scale projects: SKA, a case study
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis
2016-07-01
Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing highly demanding computation, storage, management and, above all, power demands. The geographically wide distribution of the SKA and its associated processing requirements in the form of tailored High Performance Computing (HPC) facilities require a greener approach to the Information and Communications Technologies (ICT) adopted for the data processing, to enable operational compliance with potentially strict power budgets. Addressing the reduction of electricity costs and improving system-level power monitoring, generation, and management is paramount to avoid future inefficiencies and higher costs and to enable fulfillment of the Key Science Cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on Green ICT for science and smart power monitoring and control.
Near-Field Source Localization by Using Focusing Technique
NASA Astrophysics Data System (ADS)
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using a focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound as well. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.
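The far-field bearing-estimation step that the focusing transformation reduces the problem to can be sketched with a bare-bones MUSIC pseudospectrum for a uniform linear array. Array geometry, source angles, noise level, and scan grid are all illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.signal import find_peaks

M, d = 8, 0.5                          # sensors, spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0])
T = 200                                # snapshots
rng = np.random.default_rng(2)

A = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(true_doas)[None, :])
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + N                          # simulated far-field snapshots

R = X @ X.conj().T / T                 # sample correlation matrix
_, eigvec = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvec[:, :M - 2]                 # noise subspace (2 sources assumed)

theta = np.deg2rad(np.linspace(-90, 90, 721))
a = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta)[None, :])
p_music = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

idx, _ = find_peaks(p_music)
top = idx[np.argsort(p_music[idx])[-2:]]
print(np.sort(np.rad2deg(theta[top])))  # peaks near -20 and 35 degrees
```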
Arduino: a low-cost multipurpose lab equipment.
D'Ausilio, Alessandro
2012-06-01
Typical experiments in psychological and neurophysiological settings often require the accurate control of multiple input and output signals. These signals are often generated or recorded via computer software and/or external dedicated hardware. Dedicated hardware is usually very expensive and requires additional software to control its behavior. In the present article, I present some accuracy tests on a low-cost and open-source I/O board (Arduino family) that may be useful in many lab environments. One of the strengths of Arduinos is the possibility they afford to load the experimental script on the board's memory and let it run without interfacing with computers or external software, thus granting complete independence, portability, and accuracy. Furthermore, a large community has arisen around the Arduino idea and offers many hardware add-ons and hundreds of free scripts for different projects. Accuracy tests show that Arduino boards may be an inexpensive tool for many psychological and neurophysiological labs.
Reduced cost mission design using surrogate models
NASA Astrophysics Data System (ADS)
Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad
2016-01-01
This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10^2 are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
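The surrogate construction itself, sampling an expensive model and regressing the response onto orthogonal bases by least squares, is generic and can be sketched in one dimension. The Legendre basis and the stand-in "expensive model" below are assumptions for illustration; the paper's surrogates span multiple mission design inputs.

```python
import numpy as np
from numpy.polynomial import legendre

def expensive_model(x):                 # placeholder for a full propagation
    return np.exp(0.4 * x) * np.sin(3.0 * x)

rng = np.random.default_rng(3)
x_train = rng.uniform(-1, 1, 120)       # ~10^2 samples, as in the paper
y_train = expensive_model(x_train)

degree = 12
coeffs = legendre.legfit(x_train, y_train, degree)   # least-squares fit

x_test = np.linspace(-1, 1, 500)
rms = np.sqrt(np.mean((legendre.legval(x_test, coeffs)
                       - expensive_model(x_test)) ** 2))
print(f"surrogate RMS error: {rms:.2e}")   # cheap to evaluate thereafter
```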
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. When using MUSIC, the eigenvectors of the correlation matrix must be computed for the estimation, which often incurs a high computational cost. In the case of a moving source this becomes a crucial drawback, because the estimation must be conducted at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. PAST does not require an eigen-decomposition, and it is therefore possible to reduce the computational cost. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
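A minimal sketch of the PAST recursion that replaces the explicit eigen-decomposition; the array size, forgetting factor, and streaming data below are illustrative assumptions.

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One PAST step: track the r-dimensional signal subspace spanned by the columns of W."""
    y = W.conj().T @ x              # project the snapshot onto the current subspace
    h = P @ y
    g = h / (beta + np.vdot(y, h))  # gain vector
    P = (P - np.outer(g, h.conj())) / beta
    e = x - W @ y                   # projection error
    W = W + np.outer(e, g.conj())
    return W, P

# Track a 2-dimensional subspace of an 8-sensor array over streaming snapshots.
rng = np.random.default_rng(0)
n_sensors, r = 8, 2
W = np.linalg.qr(rng.standard_normal((n_sensors, r)) + 0j)[0]
P = np.eye(r, dtype=complex)
A = rng.standard_normal((n_sensors, r)) + 1j * rng.standard_normal((n_sensors, r))
for _ in range(500):
    s = rng.standard_normal(r) + 1j * rng.standard_normal(r)
    noise = 0.05 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))
    W, P = past_update(W, P, A @ s + noise)
# The columns of W now approximately span the same subspace as the columns of A.
```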
Goodwin, C S
1976-01-01
A manual system of microbiology reporting with a National Cash Register (NCR) form with printed names of bacteria and antibiotics required less time to compose reports than a previous manual system that involved rubber stamps and handwriting on plain report sheets. The NCR report cost 10-28 pence and, compared with a computer system, it had the advantages of simplicity and familiarity, and reports were not delayed by machine breakdown, operator error, or data being incorrectly submitted. A computer reporting system for microbiology resulted in more accurate reports costing 17-97 pence each, faster and more accurate filing and recall of reports, and a greater range of analyses of reports that was valued particularly by the control-of-infection staff. Composition of computer-readable reports by technicians on Port-a-punch cards took longer than composing NCR reports. Enquiries for past results were more quickly answered from computer printouts of reports and a day book in alphabetical order. PMID:939810
NASA Astrophysics Data System (ADS)
Pieper, Steven D.; McKenna, Michael; Chen, David; McDowall, Ian E.
1994-04-01
We are interested in the application of computer animation to surgery. Our current project, a navigation and visualization tool for knee arthroscopy, relies on real-time computer graphics and the human interface technologies associated with virtual reality. We believe that this new combination of techniques will lead to improved surgical outcomes and decreased health care costs. To meet these expectations in the medical field, the system must be safe, usable, and cost-effective. In this paper, we outline some of the most important hardware and software specifications in the areas of video input and output, spatial tracking, stereoscopic displays, computer graphics models and libraries, mass storage and network interfaces, and operating systems. Since this is a fairly new combination of technologies and a new application, the justification for our specifications is drawn from the current generation of surgical technology and by analogy to other fields where virtual reality technology has been more extensively applied and studied.
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced-order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced-order model. Reverse time migration by reduced-order modeling lends itself to highly parallel computation and strongly reduces the memory requirements of reverse time migration. Results on synthetic models showed that the suggested method can decrease the computational cost of reverse time migration by several orders of magnitude compared with reverse time migration by the finite-element method.
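A minimal sketch of extracting a few Krylov-subspace (Lanczos) eigenmodes of a discretized operator and using them as an orthogonal reduced-order basis; the 1-D Laplacian stand-in, grid size, and number of modes are illustrative assumptions, not the paper's wave-equation operator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400                                                    # grid points of a toy 1-D model
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
# 20 lowest modes via shift-invert Lanczos (ARPACK); these form an orthonormal basis.
vals, modes = spla.eigsh(-L, k=20, sigma=0, which="LM")

u_full = np.sin(np.linspace(0.0, np.pi, n))                # a full-order field
u_reduced = modes.T @ u_full                               # 20 coefficients instead of 400
u_approx = modes @ u_reduced                               # reconstruction from the reduced basis
```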
NASA Technical Reports Server (NTRS)
Butera, M. K.
1979-01-01
The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.
GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.
Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun
2018-01-19
Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the large data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing Amazon Web Services (AWS). Our system won first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance conference (GCTA) committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluate the performance of GT-WGS with a 55× WGS dataset (400GB fastq) provided by the GCTA 2017 competition. In the best case, it only took 18.4 min to finish the analysis and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, which consists of 5× whole-genome data for 500 samples, and on average GT-WGS managed to finish one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by the time cost and computing cost. GT-WGS excelled as an efficient and affordable WGS analysis tool to address this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .
NASA Astrophysics Data System (ADS)
McFall, Steve
1994-03-01
With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
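The sparse-recovery idea can be illustrated with a minimal orthogonal matching pursuit sketch; the Gaussian measurement matrix and synthetic sparse vector below are illustrative stand-ins for the sparse transition-probability mass described above, not the paper's generating-function machinery.

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedy recovery of a `sparsity`-sparse x such that y is approximately A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on the support
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 2000, 200, 10                        # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressed-sensing measurement matrix
x_hat = omp(A, A @ x_true, k)                  # recovers x_true from far fewer measurements
```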
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Spangenberg, D.; Ayers, J. K.; Palikonda, R.; Vakhnin, A.; Dubois, R.; Murphy, P. R.
2014-12-01
The processing, storage and dissemination of satellite cloud and radiation products produced at NASA Langley Research Center are key activities for the Climate Science Branch. A constellation of systems operates in sync to accomplish these goals. Because of the complexity involved with operating such intricate systems, there are both high failure rates and high costs for hardware and system maintenance. Cloud computing has the potential to ameliorate cost and complexity issues. Over time, the cloud computing model has evolved and hybrid systems comprising off-site as well as on-site resources are now common. Towards our mission of providing the highest quality research products to the widest audience, we have explored the use of the Amazon Web Services (AWS) Cloud and Storage and present a case study of our results and efforts. This project builds upon NASA Langley Cloud and Radiation Group's experience with operating large and complex computing infrastructures in a reliable and cost effective manner to explore novel ways to leverage cloud computing resources in the atmospheric science environment. Our case study presents the project requirements and then examines the fit of AWS with the LaRC computing model. We also discuss the evaluation metrics, feasibility, and outcomes and close the case study with the lessons we learned that would apply to others interested in exploring the implementation of the AWS system in their own atmospheric science computing environments.
Adventures in Private Cloud: Balancing Cost and Capability at the CloudSat Data Processing Center
NASA Astrophysics Data System (ADS)
Partain, P.; Finley, S.; Fluke, J.; Haynes, J. M.; Cronk, H. Q.; Miller, S. D.
2016-12-01
Since the beginning of the CloudSat Mission in 2006, The CloudSat Data Processing Center (DPC) at the Cooperative Institute for Research in the Atmosphere (CIRA) has been ingesting data from the satellite and other A-Train sensors, producing data products, and distributing them to researchers around the world. The computing infrastructure was specifically designed to fulfill the requirements as specified at the beginning of what nominally was a two-year mission. The environment consisted of servers dedicated to specific processing tasks in a rigid workflow to generate the required products. To the benefit of science and with credit to the mission engineers, CloudSat has lasted well beyond its planned lifetime and is still collecting data ten years later. Over that period requirements of the data processing system have greatly expanded and opportunities for providing value-added services have presented themselves. But while demands on the system have increased, the initial design allowed for very little expansion in terms of scalability and flexibility. The design did change to include virtual machine processing nodes and distributed workflows but infrastructure management was still a time consuming task when system modification was required to run new tests or implement new processes. To address the scalability, flexibility, and manageability of the system Cloud computing methods and technologies are now being employed. The use of a public cloud like Amazon Elastic Compute Cloud or Google Compute Engine was considered but, among other issues, data transfer and storage cost becomes a problem especially when demand fluctuates as a result of reprocessing and the introduction of new products and services. Instead, the existing system was converted to an on premises private Cloud using the OpenStack computing platform and Ceph software defined storage to reap the benefits of the Cloud computing paradigm. This work details the decisions that were made, the benefits that have been realized, the difficulties that were encountered and issues that still exist.
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet, such models' use of optimization means the models have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost is to be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet, this method requires one simulation run per node and per time step, which is demanding computationally for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
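A minimal sketch of the double backward induction on a toy river network; the topology, marginal benefits, and carry-over rule are illustrative assumptions, not the London system studied in the paper.

```python
import numpy as np

# Toy network: each node maps to its downstream neighbor (None at the outlet).
downstream = {"headwater_A": "confluence", "headwater_B": "confluence",
              "confluence": "outlet", "outlet": None}
T = 4                                                  # weekly time steps
rng = np.random.default_rng(0)
marginal_benefit = {n: rng.uniform(0.0, 2.0, T) for n in downstream}  # $/unit at each node and week

def spatial_value(t):
    """Backward induction from outlet to headwaters at a fixed time step:
    an extra unit at a node can be used there or anywhere downstream."""
    value = {}
    for node in ["outlet", "confluence", "headwater_B", "headwater_A"]:
        down = downstream[node]
        best_downstream = value[down] if down is not None else -np.inf
        value[node] = max(marginal_benefit[node][t], best_downstream)
    return value

# Backward induction in time, assuming water can be carried over in storage with a small loss.
carryover = 0.98
opportunity_cost = {n: np.zeros(T) for n in downstream}
future = {n: 0.0 for n in downstream}
for t in reversed(range(T)):
    now = spatial_value(t)
    for n in downstream:
        opportunity_cost[n][t] = max(now[n], carryover * future[n])
    future = {n: opportunity_cost[n][t] for n in downstream}
```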
Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)
NASA Astrophysics Data System (ADS)
Nebert, D. D.; Huang, Q.; Yang, C.
2013-12-01
Geoscience in the 21st century faces the challenges of Big Data, spikes in computing requirements (e.g., when a natural disaster happens), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and to make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate scientific research and discovery. This presentation uses GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing and integration of geospatial data, information, and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. In order to achieve this objective, multiple projects are nominated each year by federal agencies as existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform as a service (PaaS) packages. Based on these developed common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This paper presents the background, architectural design, and activities of GeoCloud in support of the Geospatial Platform Initiative. System security strategies and approval processes for migrating federal geospatial data, information, and applications into the cloud, and cost estimation for cloud operations, are covered. Finally, some lessons learned from the GeoCloud project are discussed as reference for geoscientists to consider in the adoption of cloud computing.
ERIC Educational Resources Information Center
Wulfson, Stephen, Ed.
1990-01-01
Reviewed are six software packages for Apple and/or IBM computers. Included are "Autograph," "The New Game Show," "Science Probe-Earth Science," "Pollution Patrol," "Investigating Plant Growth," and "AIDS: The Investigation." Discussed are the grade level, function, availability, cost, and hardware requirements of each. (CW)
NASA Technical Reports Server (NTRS)
Obler, H. D.
1980-01-01
Air conditioning system, for environmentally controlled areas containing sensitive equipment, regulates temperature and humidity without wasteful and costly reheating. System blends outside air with return air as dictated by various sensors to ensure required humidity in cooled spaces (such as computer room).
Templet Web: the use of volunteer computing approach in PaaS-style cloud
NASA Astrophysics Data System (ADS)
Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil
2018-03-01
This article presents the Templet Web cloud service. The service is designed for high-performance scientific computing automation. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) high-performance computing programs development automation. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.
Provider-Independent Use of the Cloud
NASA Astrophysics Data System (ADS)
Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron
Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been a significant growth in the number of cloud computing resource providers and each has a different resource usage model, application process and application programming interface (API); developing generic multi-resource provider applications is thus difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
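A minimal sketch of what such a provider-neutral provisioning interface might look like; the class and method names below are illustrative assumptions, not the authors' actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ResourceSpec:
    cpus: int
    memory_gb: int
    hours: float

class ComputeProvider(ABC):
    """Single usage/authentication model shared by all provider backends."""
    @abstractmethod
    def authenticate(self, credentials: dict) -> None: ...
    @abstractmethod
    def provision(self, spec: ResourceSpec) -> str:
        """Return an opaque resource identifier."""
    @abstractmethod
    def release(self, resource_id: str) -> None: ...
    @abstractmethod
    def estimate_cost(self, spec: ResourceSpec) -> float: ...

class ExampleProvider(ComputeProvider):
    """Stand-in backend; a real backend would wrap one specific provider's API."""
    def authenticate(self, credentials): self._token = credentials.get("key")
    def provision(self, spec): return f"vm-{spec.cpus}x{spec.memory_gb}"
    def release(self, resource_id): pass
    def estimate_cost(self, spec): return 0.05 * spec.cpus * spec.hours

def run_job(provider: ComputeProvider, spec: ResourceSpec):
    # Provider-neutral application code: only the abstract interface is used.
    provider.authenticate({"key": "..."})
    resource_id = provider.provision(spec)
    provider.release(resource_id)
```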
Analysis of multigrid methods on massively parallel computers: Architectural implications
NASA Technical Reports Server (NTRS)
Matheson, Lesley R.; Tarjan, Robert E.
1993-01-01
We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
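The trade-off described above can be illustrated with a simple per-level cost model; the constants below (start-up cost alpha, per-word cost beta, flops per point) are illustrative assumptions rather than the paper's machine-derived parameters.

```python
def v_cycle_cost(n_points, n_procs, levels, flops_per_point=10.0,
                 alpha=1.0e4, beta=1.0, t_flop=1.0):
    """Toy cost of one V-cycle: per-level computation plus halo communication."""
    total = 0.0
    for lvl in range(levels):
        pts = n_points / (4 ** lvl)               # 2-D coarsening by a factor of 2 per level
        local = max(pts / n_procs, 1.0)           # points per processor on this level
        compute = flops_per_point * local * t_flop
        halo_words = 4.0 * local ** 0.5           # perimeter of the local block
        communicate = alpha + beta * halo_words   # one aggregated (long) message
        total += compute + communicate
    return total

# On coarse levels the fixed start-up cost alpha dominates, which is why aggregated
# long messages or a cheaper initiation cost matter so much:
print(v_cycle_cost(n_points=1e6, n_procs=4096, levels=10))
```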
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
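A minimal sketch of the approximate Bayesian computation (rejection) idea that replaces the optimization step; the forward model, summary statistics, and tolerance below are illustrative stand-ins, not the turbulence closure itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Cheap forward-model stand-in: data generated from a parameter theta."""
    return rng.normal(theta, 1.0, n)

observed = simulate(1.5)                          # "measured" data
summary = lambda x: np.array([x.mean(), x.std()]) # low-dimensional summary statistics
s_obs = summary(observed)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)                # draw a candidate from the prior
    s_sim = summary(simulate(theta))
    if np.linalg.norm(s_sim - s_obs) < 0.2:       # keep candidates that reproduce
        accepted.append(theta)                    # the observed summaries
posterior_mean = np.mean(accepted)                # ABC estimate of theta, near 1.5
```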
NASA Astrophysics Data System (ADS)
Abbatiello, L. A.; Nephew, E. A.; Ballou, M. L.
1981-03-01
The efficiency and life cycle costs of the brine chiller minimal annual cycle energy system (ACES) for residential space heating, air conditioning, and water heating requirements are compared with three conventional systems. The conventional systems evaluated are a high performance air-to-air heat pump with an electric resistance water heater, an electric furnace with a central air conditioner and an electric resistance water heater, and a high performance air-to-air heat pump with a superheater unit for hot water production. Monthly energy requirements for a reference single family house are calculated, and the initial cost and annual energy consumption of the systems, providing identical energy services, are computed and compared. The ACES consumes one third to one half of the electrical energy required by the conventional systems and delivers the same annual loads at comparable costs.
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
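A minimal sketch of a remapping decision rule driven by the W(n) statistic described above; the linear degradation model and the remapping cost are illustrative assumptions.

```python
def decide_remap(degradation_history, remap_cost):
    """Return True when the running average W(n) has just passed its minimum.

    W(n) = (accumulated degradation over the n steps since the last remap
            + the fixed cost of remapping) / n
    """
    n = len(degradation_history)
    W = [(sum(degradation_history[: k + 1]) + remap_cost) / (k + 1) for k in range(n)]
    return n >= 2 and W[-1] > W[-2]          # estimated minimum was reached last step

# Toy run: per-step degradation grows as load imbalance accumulates between remaps.
remap_cost, steps_since_remap, history = 50.0, 0, []
for t in range(200):
    history.append(1.0 + 0.5 * steps_since_remap)   # degradation for this step
    steps_since_remap += 1
    if decide_remap(history, remap_cost):
        steps_since_remap, history = 0, []          # remap and reset the statistic
```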
Computational Study of Scenarios Regarding Explosion Risk Mitigation
NASA Astrophysics Data System (ADS)
Vlasin, Nicolae-Ioan; Mihai Pasculescu, Vlad; Florea, Gheorghe-Daniel; Cornel Suvar, Marius
2016-10-01
Exploration to discover new deposits of natural gas, upgraded techniques to exploit these resources, and new ways to convert the heat capacity of these gases into industrially usable energy are research areas of great interest around the globe. However, all activities involving the handling of natural gas (exploitation, transport, combustion) are subject to the same type of risk: the risk of explosion. Experiments on physical scenarios carried out to determine ways to reduce this risk can be extremely costly, requiring suitable premises, equipment and apparatus, manpower, and time, and, not least, presenting a risk of personnel injury. Taking the above into account, the present paper deals with the possibility of studying gas explosion scenarios in the virtual domain, exemplified by a computer simulation of a stoichiometric air-methane explosion (methane is the main component of natural gas). The advantages of computer-assisted simulation include the possibility of using complex virtual geometries of any form as the area in which the phenomenon unfolds, the use of the same geometry for an unlimited number of initial-parameter settings as input, total elimination of the risk of personnel injury, decreased execution time, and so on. Although computer simulations consume considerable hardware resources and require specialized personnel to use CFD (Computational Fluid Dynamics) techniques, the costs and risks associated with these methods are greatly diminished while, at the same time, presenting a major benefit in terms of execution time.
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach on another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
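A minimal sketch contrasting the two preference-handling schemes on toy candidate pathways scored on two objectives; the candidate values and weights are illustrative assumptions, not data from the study.

```python
import numpy as np

# Each row is one candidate pathway scored on two objectives (lower is better for both),
# for example distance through tissue and proximity to critical structures.
candidates = np.array([[3.0, 7.0],
                       [5.0, 4.0],
                       [8.0, 2.0],
                       [6.0, 6.0]])

# Weighted-sum approach: the surgeon's preferences are entered as weights up front.
weights = np.array([0.7, 0.3])
best_weighted = int(np.argmin(candidates @ weights))

# Pareto approach: compute the non-dominated set first, then let the surgeon choose.
def pareto_front(points):
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(candidates)   # indices of Pareto-optimal pathways
```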
Productivity associated with visual status of computer users.
Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W
2004-01-01
The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% upwards to 28.7% with 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
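As a rough check on the figure quoted above, the ratio follows directly from the stated numbers; the sketch below assumes the 2.5% productivity gain applies to the full annual salary.

```python
salary = 25_000.0          # dollars per year
productivity_gain = 0.025  # conservative 2.5% estimate from the study
eyewear_cost = 268.0       # total cost of the visual correction
ratio = salary * productivity_gain / eyewear_cost   # 625 / 268, roughly 2.3
```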
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
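For reference, the complex-variable (complex-step) technique named in the title can be sketched as follows: unlike central differences it has no subtractive cancellation, so a tiny step can be used; the test function and step sizes below are illustrative.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Derivative via a complex perturbation: f'(x) ~ Im(f(x + i*h)) / h."""
    return np.imag(f(x + 1j * h)) / h

def central_difference(f, x, h=1e-6):
    """Classical central difference, limited by the choice of step size."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)  # a classic test function
d_complex = complex_step_derivative(f, 1.5)   # accurate to near machine precision
d_central = central_difference(f, 1.5)        # accuracy depends on the step size
```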
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences by popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
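The pairwise distance computation that the systolic array accelerates can be sketched in software as plain dynamic programming; the edit-distance formulation and toy sequences below are illustrative, not the authors' exact scoring scheme.

```python
def edit_distance(a: str, b: str) -> int:
    """Dynamic-programming edit distance with a rolling row (O(len(b)) memory)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution or match
        prev = curr
    return prev[-1]

# Pairwise distance matrix of the kind that feeds the guide tree of a progressive MSA.
seqs = ["GATTACA", "GCATGCT", "GATTGCA"]
D = [[edit_distance(s, t) for t in seqs] for s in seqs]
```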
NASA Technical Reports Server (NTRS)
Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.
1992-01-01
How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based optimized multidisciplinary design (MdD) procedures is briefly outlined. Implications of these MdD requirements upon advanced CFD codes are somewhat different than those imposed by a single discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.
Avila, Gustavo; Carrington, Tucker
2017-08-14
In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, T.; Atac, R.; Cook, A.
1989-03-06
The ACPMAPS multiprocessor is a highly cost effective, local memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating point intensive, grid like problems, particularly those with extreme computing requirements. The processing nodes of the system are single board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.
NASA Technical Reports Server (NTRS)
Miller, R. E., Jr.; Southall, J. W.; Kawaguchi, A. S.; Redhed, D. D.
1973-01-01
Reports on the design process, support of the design process, the IPAD System design, a catalog of IPAD technical program elements, IPAD System development and operation, and IPAD benefits and impact are concisely reviewed. The approach used to define the design is described. Major activities performed during the product development cycle are identified. The computer system requirements necessary to support the design process are given as computational requirements of the host system, technical program elements, and system features. The IPAD computer system design is presented as concepts, a functional description, and an organizational diagram of its major components. The costs and schedules and a three-phase plan for IPAD implementation are presented. The benefits and impact of IPAD technology are discussed.
Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J
2014-10-01
It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure.
Study 2.5 final report. DORCA computer program. Volume 4: Executive summary report
NASA Technical Reports Server (NTRS)
1972-01-01
The functions and capabilities of the Dynamic Operational Requirements and Cost Analysis Program are explained. The existence and purpose of the program are presented to provide an evaluation of program applicability to areas of responsibility for potential users. The implementation of the program on the Univac 1108 computer is discussed. The application of the program for mission planning and project management is described.
Blanks: a computer program for analyzing furniture rough-part needs in standard-size blanks
Philip A. Araman
1983-01-01
A computer program is described that allows a company to determine the number of edge-glued, standard-size blanks required to satisfy its rough-part needs for a given production period. Yield and cost information also is determined by the program. A list of the program inputs, outputs, and uses of outputs is described, and an example analysis with sample output is...
Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities
NASA Technical Reports Server (NTRS)
Volkman, Gregrey T.; Dunn, Steven C.
2013-01-01
The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty plus years, computational methods have replaced some physical experiments and this trend is continuing. The result is decreased utilization of ground test capabilities and, along with market forces, industry consolidation, and other factors, has resulted in the stand down and oftentimes closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and to maintain, so once constructed and operational it may be desirable to retain access to those capabilities even if not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand down the facility into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies to accomplish the mothball of a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
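A minimal sketch of a Taylor-expansion-informed adaptive sampling step in the spirit of TEAD; the score below (distance to existing samples multiplied by the local Taylor residual of an RBF surrogate) is an illustrative assumption, since the authors' exact hybrid score is not reproduced here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def adaptive_next_sample(X, y, candidates, eps=1e-3):
    """Pick the candidate balancing exploration (distance) and nonlinearity (Taylor residual)."""
    surrogate = RBFInterpolator(X, y)
    scores = []
    for c in candidates:
        nearest = X[np.argmin(np.linalg.norm(X - c, axis=1))]
        # Gradient of the surrogate at the nearest training point, by central differences.
        grad = np.array([(surrogate([nearest + eps * e]) - surrogate([nearest - eps * e]))[0]
                         / (2 * eps) for e in np.eye(X.shape[1])])
        taylor = surrogate([nearest])[0] + grad @ (c - nearest)     # first-order prediction
        nonlinearity = abs(surrogate([c])[0] - taylor)               # Taylor residual
        distance = np.min(np.linalg.norm(X - c, axis=1))             # exploration term
        scores.append(distance * nonlinearity)
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
candidates = rng.uniform(-1, 1, size=(200, 2))
x_next = adaptive_next_sample(X, y, candidates)   # point at which to run the real model next
```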
Graphics Flutter Analysis Methods, an interactive computing system at Lockheed-California Company
NASA Technical Reports Server (NTRS)
Radovcich, N. A.
1975-01-01
An interactive computer graphics system, Graphics Flutter Analysis Methods (GFAM), was developed to complement FAMAS, a matrix-oriented batch computing system, and other computer programs in performing complex numerical calculations using a fully integrated data management system. GFAM has many of the matrix operation capabilities found in FAMAS, but on a smaller scale, and is utilized when the analysis requires a high degree of interaction between the engineer and computer, and schedule constraints exclude the use of batch entry programs. Applications of GFAM to a variety of preliminary design, development design, and project modification programs suggest that interactive flutter analysis using matrix representations is a feasible and cost effective computing tool.
Fault tolerant computer control for a Maglev transportation system
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George
1994-01-01
Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.
Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit
NASA Astrophysics Data System (ADS)
Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying
2018-03-01
With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to act on adjacent quantum bits. However, many existing nearest-neighbor quantum circuits have a large quantum cost. Therefore, how to effectively reduce quantum cost is becoming a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of the circuit; that is, we reduce quantum cost through MCT gate decomposition, nearest-neighbor transformation, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, and the maximum optimization rate is 30.61% compared to the corresponding results.
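A minimal sketch of the nearest-neighbor step: SWAP gates are inserted so that each CNOT acts on adjacent qubits on a line, and a simple quantum cost is tallied; the cost weights (1 per CNOT, 3 per SWAP) are common conventions used here as assumptions.

```python
def make_nearest_neighbor(gates):
    """gates: list of (control, target) CNOTs on a linear qubit array."""
    mapping = list(range(1 + max(max(g) for g in gates)))  # physical position -> logical qubit
    out = []
    for c, t in gates:
        pc, pt = mapping.index(c), mapping.index(t)
        while abs(pc - pt) > 1:                     # move the control next to the target
            step = 1 if pt > pc else -1
            out.append(("SWAP", pc, pc + step))
            mapping[pc], mapping[pc + step] = mapping[pc + step], mapping[pc]
            pc += step
        out.append(("CNOT", pc, pt))
    return out

def quantum_cost(circuit):
    """Assumed weights: a SWAP costs 3 (three CNOTs), a CNOT costs 1."""
    return sum(3 if gate[0] == "SWAP" else 1 for gate in circuit)

nn_circuit = make_nearest_neighbor([(0, 3), (2, 0), (1, 3)])
print(quantum_cost(nn_circuit))
```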
NASA Astrophysics Data System (ADS)
Mohammadi, Hadi
Use of the Patch Vulnerability Management (PVM) process should be seriously considered for any networked computing system. The PVM process prevents the operating system (OS) and software applications from being attacked due to security vulnerabilities, which lead to system failures and critical data leakage. The purpose of this research is to create and design a Security and Critical Patch Management Process (SCPMP) framework based on Systems Engineering (SE) principles. This framework will assist Information Technology Department Staff (ITDS) to reduce IT operating time and costs and mitigate the risk of security and vulnerability attacks. Further, this study evaluates implementation of the SCPMP in the networked computing systems of an academic environment in order to: 1. Meet patch management requirements by applying SE principles. 2. Reduce the cost of IT operations and PVM cycles. 3. Improve the current PVM methodologies to prevent networked computing systems from becoming the targets of security vulnerability attacks. 4. Embed a Maintenance Optimization Tool (MOT) in the proposed framework. The MOT allows IT managers to make the most practicable choice of methods for deploying and installing released patches and vulnerability remediation. In recent years, there has been a variety of frameworks for security practices in every networked computing system to protect computer workstations from becoming compromised or vulnerable to security attacks, which can expose important information and critical data. I have developed a new mechanism for implementing PVM for maximizing security-vulnerability maintenance, protecting OS and software packages, and minimizing SCPMP cost. To increase computing system security in any diverse environment, particularly in academia, one must apply SCPMP. I propose an optimal maintenance policy that will allow ITDS to measure and estimate the variation of PVM cycles based on their department's requirements. My results demonstrate that MOT optimizes the process of implementing SCPMP in academic workstations.
Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad
1995-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers are documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass-expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements on the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
Integrated risk/cost planning models for the US Air Traffic system
NASA Technical Reports Server (NTRS)
Mulvey, J. M.; Zenios, S. A.
1985-01-01
A prototype network planning model for the U.S. Air Traffic control system is described. The model encompasses the dual objectives of managing collision risks and transportation costs where traffic flows can be related to these objectives. The underlying structure is a network graph with nonseparable convex costs; the model is solved efficiently by capitalizing on its intrinsic characteristics. Two specialized algorithms for solving the resulting problems are described: (1) truncated Newton, and (2) simplicial decomposition. The feasibility of the approach is demonstrated using data collected from a control center in the Midwest. Computational results with different computer systems are presented, including a vector supercomputer (CRAY-XMP). The risk/cost model has two primary uses: (1) as a strategic planning tool using aggregate flight information, and (2) as an integrated operational system for forecasting congestion and monitoring (controlling) flow throughout the U.S. In the latter case, access to a supercomputer is required due to the model's enormous size.
Materials Genome Initiative Element
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
NASA is committed to developing new materials and manufacturing methods that can enable new missions with ever increasing mission demands. Typically, the development and certification of new materials and manufacturing methods in the aerospace industry has required more than 20 years of development time with a costly testing and certification program. To reduce the cost and time to mature these emerging technologies, NASA is developing computational materials tools to improve understanding of the material and guide the certification process.
Information transfer satellite concept study. Volume 4: computer manual
NASA Technical Reports Server (NTRS)
Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.
1971-01-01
The Satellite Telecommunications Analysis and Modeling Program (STAMP) provides the user with a flexible and comprehensive tool for the analysis of ITS system requirements. While obtaining minimum cost design points, the program enables the user to perform studies over a wide range of user requirements and parametric demands. The program utilizes a total system approach wherein the ground uplink and downlink, the spacecraft, and the launch vehicle are simultaneously synthesized. A steepest descent algorithm is employed to determine the minimum total system cost design subject to the fixed user requirements and imposed constraints. In the process of converging to the solution, the pertinent subsystem tradeoffs are resolved. This report documents STAMP through a technical analysis and a description of the principal techniques employed in the program.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.
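For small circuits, the configuration search described above can be illustrated with a brute-force sketch. The following toy model is ours, not the paper's algorithm: all names are hypothetical, each nonlocal gate is assigned an execution side, and a qubit is assumed to stay wherever it was last teleported.

```python
from itertools import product

def min_teleportations(global_gates, home):
    """Brute-force search over execution locations for nonlocal gates.

    global_gates: list of (q1, q2) pairs whose qubits belong to
    different partitions; home[q] in {'A', 'B'} is each qubit's
    starting partition.
    """
    best = None
    for sides in product('AB', repeat=len(global_gates)):
        loc = dict(home)                   # current location of each qubit
        cost = 0
        for (q1, q2), side in zip(global_gates, sides):
            for q in (q1, q2):
                if loc[q] != side:         # remote qubit must be sent over
                    cost += 1
                    loc[q] = side
        best = cost if best is None else min(best, cost)
    return best

# Two gates both coupling q2 (home B) to qubits on A: executing both
# gates on side A needs only a single teleportation of q2.
print(min_teleportations([("q0", "q2"), ("q1", "q2")],
                         {"q0": "A", "q1": "A", "q2": "B"}))  # -> 1
```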
Characterizing quantum supremacy in near-term devices
NASA Astrophysics Data System (ADS)
Boixo, Sergio; Isakov, Sergei V.; Smelyanskiy, Vadim N.; Babbush, Ryan; Ding, Nan; Jiang, Zhang; Bremner, Michael J.; Martinis, John M.; Neven, Hartmut
2018-06-01
A critical question for quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of supercomputers. Such a demonstration of what is referred to as quantum supremacy requires a reliable evaluation of the resources required to solve tasks with classical approaches. Here, we propose the task of sampling from the output distribution of random quantum circuits as a demonstration of quantum supremacy. We extend previous results in computational complexity to argue that this sampling task must take exponential time in a classical computer. We introduce cross-entropy benchmarking to obtain the experimental fidelity of complex multiqubit dynamics. This can be estimated and extrapolated to give a success metric for a quantum supremacy demonstration. We study the computational cost of relevant classical algorithms and conclude that quantum supremacy can be achieved with circuits in a two-dimensional lattice of 7 × 7 qubits and around 40 clock cycles. This requires an error rate of around 0.5% for two-qubit gates (0.05% for one-qubit gates), and it would demonstrate the basic building blocks for a fault-tolerant quantum computer.
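The cross-entropy benchmarking estimator lends itself to a compact numerical sketch. Assuming access to the ideal circuit's output probabilities (classically simulable only below the supremacy regime), one common log-cross-entropy form is:

```python
import numpy as np

def xeb_fidelity(samples, ideal_probs, n_qubits):
    """Log cross-entropy benchmarking fidelity (illustrative sketch).

    samples: indices of measured bitstrings; ideal_probs: output
    distribution of the ideal circuit. For Porter-Thomas statistics,
    H0 = n*log(2) + Euler's gamma is the cross entropy scored by an
    uncorrelated sampler, and the ideal sampler scores one unit lower,
    so H0 - CE runs from ~0 (no fidelity) to ~1 (perfect sampling).
    """
    gamma = 0.5772156649
    h0 = n_qubits * np.log(2.0) + gamma
    ce = -np.mean(np.log(ideal_probs[samples]))
    return h0 - ce
```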
Johnson, T L; Keith, D W
2001-10-01
The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.
NASA Technical Reports Server (NTRS)
1976-01-01
The modifications required to permit the use of Nuclear Instrumentation Module (NIM) and Computer Automated Measurement and Control (CAMAC) equipment, designed for ground-based laboratory use, in the Spacelab environment were determined. The cost of these modifications was estimated, and the most cost-effective approach to implementing them was identified. A shared-equipment implementation, in which the various Spacelab users draw their required complement of standard NIM and CAMAC equipment for a given flight from a common equipment pool, was considered. The alternative approach studied was a dedicated-equipment implementation in which each user is responsible for procuring either their own NIM/CAMAC equipment or its custom-built equivalent.
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions such as OCO-2 may also require fast turn-around for processing different science scenarios, needs that on-premise and even traditional HPC computing environments may not meet. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of its Level-2 full physics data products. We explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We present how we enabled fault-tolerant computing in order to achieve large-scale processing as well as operational cost savings.
Multi-server blind quantum computation over collective-noise channels
NASA Astrophysics Data System (ADS)
Xiao, Min; Liu, Lin; Song, Xiuli
2018-03-01
Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation tasks to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also keep clients as classical as possible and tolerate faults from nonideal quantum channels. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely or almost classical client, meet the correctness and blindness requirements of a BQC protocol, and are practical BQC protocols.
Inverse kinematics of a dual linear actuator pitch/roll heliostat
NASA Astrophysics Data System (ADS)
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
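The orientation part of a pitch/roll solution can be sketched compactly. The geometry below is an illustrative assumption rather than the paper's actual mount: the mirror normal starts along +z, the mount applies a pitch rotation about y after a roll rotation about x, and the actuator lengths would then follow from each axis's linkage geometry.

```python
import numpy as np

def pitch_roll_from_normal(n):
    """Recover pitch and roll angles that aim the mirror normal at n.

    With n = Ry(pitch) @ Rx(roll) @ [0, 0, 1], the components are
    n = (cos(roll)sin(pitch), -sin(roll), cos(roll)cos(pitch)),
    so both angles fall out in closed form.
    """
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)            # work with a unit normal
    roll = -np.arcsin(n[1])
    pitch = np.arctan2(n[0], n[2])
    return pitch, roll

# Aim halfway between straight up and due "east" (+x):
print(pitch_roll_from_normal([1.0, 0.0, 1.0]))  # pitch ~ 45 deg, roll = 0
```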
A Microcomputer Interface for External Circuit Control.
ERIC Educational Resources Information Center
Gorham, D. A.
1983-01-01
Describes an interface designed to meet the requirements of an instrumentation teaching laboratory, particularly to develop computer-controlled digital circuitry while exploiting electrical drive properties of common transistor-transistor logic (TTL) devices, minimizing cost/number of components. Discusses decoding for Pet, switches, lights, and…
ERIC Educational Resources Information Center
McGrath, Diane, Ed.
1989-01-01
Reviewed is a computer software package entitled "Audubon Wildlife Adventures: Grizzly Bears" for Apple II and IBM microcomputers. Included are availability, hardware requirements, cost, and a description of the program. The murder-mystery flavor of the program is stressed in this program that focuses on illegal hunting and game…
DOT National Transportation Integrated Search
1996-11-01
The Highway Economic Requirements System (HERS) is a computer model designed to simulate improvement selection decisions based on the relative benefit-cost merits of alternative improvement options. HERS is intended to estimate national level investm...
Study of curved glass photovoltaic module and module electrical isolation design requirements
NASA Technical Reports Server (NTRS)
1980-01-01
The design of a 1.2 by 2.4 m curved glass superstrate and support clip assembly is presented, along with the results of finite element computer analysis and a glass industry survey conducted to assess the technical and economic feasibility of the concept. Installed costs for four curved glass module array configurations are estimated and compared with cost previously reported for comparable flat glass module configurations. Electrical properties of candidate module encapsulation systems are evaluated along with present industry practice for the design and testing of electrical insulation systems. Electric design requirements for module encapsulation systems are also discussed.
Study of curved glass photovoltaic module and module electrical isolation design requirements
NASA Astrophysics Data System (ADS)
1980-06-01
The design of a 1.2 by 2.4 m curved glass superstrate and support clip assembly is presented, along with the results of finite element computer analysis and a glass industry survey conducted to assess the technical and economic feasibility of the concept. Installed costs for four curved glass module array configurations are estimated and compared with cost previously reported for comparable flat glass module configurations. Electrical properties of candidate module encapsulation systems are evaluated along with present industry practice for the design and testing of electrical insulation systems. Electric design requirements for module encapsulation systems are also discussed.
Multi-objective group scheduling optimization integrated with preventive maintenance
NASA Astrophysics Data System (ADS)
Liao, Wenzhu; Zhang, Xiufang; Jiang, Min
2017-11-01
This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production for identical/similar and different jobs, this integrated model considers the learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in this integrated model. The multi-objective of minimizing total completion time and maintenance cost is taken to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve this optimization model, and the computation results demonstrate that this integrated model is effective and reliable.
Automatic vehicle monitoring systems study. Report of phase O. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1977-01-01
A set of planning guidelines is presented to help law enforcement agencies and vehicle fleet operators decide which automatic vehicle monitoring (AVM) system could best meet their performance requirements. Improvements in emergency response times and the resultant cost benefits obtainable with various operational and planned AVM systems may be synthesized and simulated by means of special computer programs for model city parameters applicable to small, medium, and large urban areas. Design characteristics of various AVM systems and their implementation requirements are illustrated, and costs are estimated for the vehicles, the fixed sites, and the base equipment. Vehicle location accuracies for different RF links and polling intervals are analyzed.
Technical assistance for law-enforcement communications: Case study report two
NASA Technical Reports Server (NTRS)
Reilly, N. B.; Mustain, J. A.
1979-01-01
Two case histories are presented. In one study the feasibility of consolidating dispatch center operations for small agencies is considered. System load measurements were taken and queueing analysis applied to determine numbers of personnel required for each separate agency and for a consolidated dispatch center. Functional requirements were developed and a cost model was designed to compare relative costs of various alternatives including continuation of the present system, consolidation of a manual system, and consolidated computer-aided dispatching. The second case history deals with the consideration of a multi-regional, intrastate radio frequency for improved interregional communications. Sample standards and specifications for radio equipment are provided.
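The staffing half of such a queueing analysis is classically done with the Erlang C formula. The report does not specify its queueing model, so the M/M/c sketch below is an assumption; it sizes a dispatch center so that the probability of a call having to wait stays below a threshold:

```python
import math

def erlang_c(servers, offered_load):
    """Probability an arriving call must wait (Erlang C, M/M/c queue).

    offered_load = arrival_rate * mean_service_time, in Erlangs;
    requires offered_load < servers for a stable queue.
    """
    a, c = offered_load, servers
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = (a**c / math.factorial(c)) * (c / (c - a))
    return top / (summation + top)

def dispatchers_needed(arrival_rate, service_time, max_wait_prob):
    """Smallest staffing level keeping the wait probability acceptable."""
    load = arrival_rate * service_time
    c = math.ceil(load) + 1           # start just above the offered load
    while erlang_c(c, load) > max_wait_prob:
        c += 1
    return c

# e.g. 30 calls/hour, 4-minute handling time, <20% chance of waiting:
print(dispatchers_needed(30, 4 / 60, 0.20))
```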
Design and operations technologies - Integrating the pieces. [for future space systems design
NASA Technical Reports Server (NTRS)
Eldred, C. H.
1979-01-01
As major elements of life-cycle costs (LCC) having critical impacts on the initiation and utilization of future space programs, the areas of vehicle design and operations are reviewed in order to identify technology requirements. Common to both areas is the requirement for efficient integration of broad, complex systems. Operations technologies focus on the extension of space-based capabilities and cost reduction through the combination of innovative design, low-maintenance hardware, and increased manpower productivity. Design technologies focus on computer-aided techniques which increase productivity while maintaining a high degree of flexibility which enhances creativity and permits graceful design changes.
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-scale flows. However, its computational cost is high, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to address this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational costs remain affordable as Kn → 0. Recently, Ren et al. [2] realized AP schemes with Monte Carlo methods (AP-DSMC), which perform better than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study its efficiency and its capability of capturing complicated flow characteristics.
Main control computer security model of closed network systems protection against cyber attacks
NASA Astrophysics Data System (ADS)
Seymen, Bilal
2014-06-01
The proposed model brings data input/output in closed network systems under control, maintains system security, and routes all information flow through a Main Control Computer that also polices network traffic against cyber-attacks. Because the system is designed so that network users enter data into, and extract data from, the system securely, the network can be controlled from a single point, minimizing security gaps. Moreover, data input/output records can be kept under the account assigned to each user, and retroactive tracking is possible if requested. Because the cyber-security measures that would otherwise need to be taken for each computer on the network are costly, the model is intended to provide a cost-effective working environment in which only the Main Control Computer requires updated hardware.
Volumetric calculation using low cost unmanned aerial vehicle (UAV) approach
NASA Astrophysics Data System (ADS)
Rahman, A. A. Ab; Maulud, K. N. Abdul; Mohd, F. A.; Jaafar, O.; Tahar, K. N.
2017-12-01
Unmanned Aerial Vehicle (UAV) technology has evolved dramatically in the 21st century. It is used by both the military and the general public for recreational purposes and mapping work. The operating cost of a UAV is much cheaper than that of a normal aircraft, and it does not require a large workspace. UAV systems offer functions similar to LIDAR and satellite imaging technologies, which require huge cost, labour, and time to produce elevation and dimension data. Measurement of difficult objects, such as a water tank, can also be done using a UAV. The purpose of this paper is to show the capability of a UAV to compute the volume of a water tank based on different numbers of images and control points. The results were compared with the actual volume of the tank to validate the measurement. In this study, image acquisition was done using a Phantom 3 Professional, which is a low-cost UAV. The analysis is based on different volume computations using two and four control points with various sets of UAV images. The results show that more images provide a better-quality measurement. With 95 images and four GCPs, the error relative to the actual volume is about 5%. Four control points are enough to obtain good results, but more images are needed, estimated at about 115 to 220. All in all, it can be concluded that a low-cost UAV has the potential to be used for volume and dimension measurement.
Dea, Nicolas; Fisher, Charles G; Batke, Juliet; Strelzow, Jason; Mendelsohn, Daniel; Paquette, Scott J; Kwon, Brian K; Boyd, Michael D; Dvorak, Marcel F S; Street, John T
2016-01-01
Pedicle screws are routinely used in contemporary spinal surgery. Screw misplacement may be asymptomatic but is also correlated with potential adverse events. Computer-assisted surgery (CAS) has been associated with improved screw placement accuracy rates. However, this technology has substantial acquisition and maintenance costs. Despite its increasing usage, no rigorous full economic evaluation comparing this technology to current standard of care has been reported. Medical costs are exploding in an unsustainable way. Health economic theory requires that medical equipment costs be compared with expected benefits. To answer this question for computer-assisted spinal surgery, we present an economic evaluation looking specifically at symptomatic misplaced screws leading to reoperation secondary to neurologic deficits or biomechanical concerns. The study design was an observational case-control study from prospectively collected data of consecutive patients treated with the aid of CAS (treatment group) compared with a matched historical cohort of patients treated with conventional fluoroscopy (control group). The patient sample consisted of consecutive patients treated surgically at a quaternary academic center. The primary effectiveness measure studied was the number of reoperations for misplaced screws within 1 year of the index surgery. Secondary outcome measures included were total adverse event rate and postoperative computed tomography usage for pedicle screw examination. A patient-level data cost-effectiveness analysis from the hospital perspective was conducted to determine the value of a navigation system coupled with intraoperative 3-D imaging (O-arm Imaging and the StealthStation S7 Navigation Systems, Medtronic, Louisville, CO, USA) in adult spinal surgery. The capital costs for both alternatives were reported as equivalent annual costs based on the annuitization of capital expenditures method using a 3% discount rate and a 7-year amortization period. Annual maintenance costs were also added. Finally, reoperation costs using a micro-costing approach were calculated for both groups. An incremental cost-effectiveness ratio was calculated and reported as cost per reoperation avoided. Based on reoperation costs in Canada and in the United States, a minimal caseload was calculated for the more expensive alternative to be cost saving. Sensitivity analyses were also conducted. A total of 5,132 pedicle screws were inserted in 502 patients during the study period: 2,682 screws in 253 patients in the treatment group and 2,450 screws in 249 patients in the control group. Overall accuracy rates were 95.2% for the treatment group and 86.9% for the control group. Within 1 year post treatment, two patients (0.8%) required a revision surgery in the treatment group compared with 15 patients (6%) in the control group. An incremental cost-effectiveness ratio of $15,961 per reoperation avoided was calculated for the CAS group. Based on a reoperation cost of $12,618, this new technology becomes cost saving for centers performing more than 254 instrumented spinal procedures per year. Computer-assisted spinal surgery has the potential to reduce reoperation rates and thus to have serious cost-effectiveness and policy implications. High acquisition and maintenance costs of this technology can be offset by equally high reoperation costs. Our cost-effectiveness analysis showed that for high-volume centers with a similar case complexity to the studied population, this technology is economically justified. 
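The economic core of the evaluation reduces to two small calculations: annuitizing the capital outlay and finding the caseload at which avoided reoperations offset it. In the sketch below, only the $12,618 reoperation cost and the 6% versus 0.8% reoperation rates come from the study; the extra annual cost figure is hypothetical, chosen to reproduce the reported break-even point:

```python
def equivalent_annual_cost(capital, rate=0.03, years=7):
    """Annuitize a capital expenditure (3% discount, 7-year amortization)."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return capital / annuity_factor

def breakeven_caseload(extra_annual_cost, reop_cost, reop_rate_drop):
    """Annual cases at which avoided reoperation costs offset the system.

    reop_rate_drop: absolute reduction in the reoperation rate.
    """
    return extra_annual_cost / (reop_cost * reop_rate_drop)

# With the study's reoperation cost and rate reduction (6% -> 0.8%),
# an extra annual cost near $166,650 reproduces the reported ~254 cases.
print(breakeven_caseload(166650, 12618, 0.060 - 0.008))  # ~254
```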
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The Advanced Life Support (ALS) program has used a single number, Equivalent System Mass (ESM), for both reporting progress and technology selection. ESM is the launch mass required to provide a space system, and it indicates launch cost. ESM alone is inadequate for technology selection, which should include other metrics such as Technology Readiness Level (TRL) and Life Cycle Cost (LCC) and also consider performance and risk. ESM has proven difficult to implement as a reporting metric, partly because it includes non-mass technology selection factors. Since it will not be used exclusively for technology selection, a new reporting metric can be made easier to compute and explain. Systems design trades off performance, cost, and risk, but a risk-weighted cost/benefit metric would be too complex to report. Since life support has fixed requirements, different systems usually have roughly equal performance. Risk is important, since failure can harm the crew, but it is difficult to treat simply. Cost is not easy to estimate, but preliminary space system cost estimates are usually based on mass, which is better estimated than cost. A mass-based cost estimate, similar to ESM, would be a good single reporting metric. The paper defines and compares four mass-based cost estimates: Equivalent Mass (EM), Equivalent System Mass (ESM), Life Cycle Mass (LCM), and System Mass (SM). EM is traditional in life support and includes mass, volume, power, cooling, and logistics. ESM is the specifically defined ALS metric, which adds crew time and possibly other cost factors to EM. LCM is a new metric, a mass-based estimate of LCC measured in mass units. SM includes only the factors of EM that are originally measured in mass: the hardware and logistics mass. All four mass-based metrics usually give similar comparisons. SM is by far the simplest to compute and easiest to explain.
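As defined here, EM folds each non-mass resource into launch mass through an equivalency factor, and ESM adds crew time. A minimal sketch of that computation, with factor names and units illustrative:

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crew_time_h, v_eq, p_eq, c_eq, ct_eq):
    """ESM-style estimate: convert each resource to launch mass.

    v_eq (kg/m^3), p_eq (kg/kW), c_eq (kg/kW), and ct_eq (kg per
    crew-member-hour) are infrastructure equivalency factors;
    dropping the crew-time term gives the traditional EM.
    """
    return (mass_kg + volume_m3 * v_eq + power_kw * p_eq
            + cooling_kw * c_eq + crew_time_h * ct_eq)
```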
Sacramento's parking lot shading ordinance: environmental and economic costs of compliance
E.G. McPherson
2001-01-01
A survey of 15 Sacramento parking lots and computer modeling were used to evaluate parking capacity and compliance with the 1983 ordinance requiring 50% shade of paved areas (PA) 15 years after development. There were 6% more parking spaces than required by ordinance, and 36% were vacant during peak use periods. Current shade was 14% with 44% of this amount provided by...
Analysis Systems for Air Force Missions.
1987-02-28
satisfying requirements in a cost effective manner. Subroutine libraries were developed for use in the overall systems. These libraries allow for the...anomalies occur or requirements change. A library of HILAT routines has been developed which is used by all processing routines as necessary. Upon...the AIM data were directly applied to the AIRS. Moreover, many of the computer modules in the HILAT library of subroutines have direct application
Strategic Implications of Cloud Computing for Modeling and Simulation (Briefing)
2016-04-01
Promises of the cloud include cost efficiency, unlimited storage, backup and recovery, automatic software integration, and easy access to information ... activities that wrap the actual exercise itself (e.g., travel for exercise support, data collection, integration, etc.). Cloud-based simulation would ... requiring quick delivery rather than fewer large messages requiring high bandwidth. Cloud environments tend to be better at providing high-bandwidth
Bantam System Technology Project
NASA Technical Reports Server (NTRS)
Moon, J. M.; Beveridge, J. R.
1998-01-01
This report focuses on determining a best-value, low-risk, low-cost, and highly reliable Data and Command System (DCS) to support the launch of low-cost vehicles carrying small payloads into low Earth orbit. The ground-based DCS is considered as a component of the overall ground and flight support system, which includes the DCS, flight computer, mission planning system, and simulator. Interfaces between the DCS and these other component systems are considered, as are the operational aspects of the mission and of the selected DCS. This project involved defining requirements, defining an efficient operations concept, defining a DCS architecture that satisfies the requirements and concept, conducting a market survey of commercial and government off-the-shelf DCS candidate systems, and rating the candidate systems against the requirements and concept. The primary conclusions are that several low-cost, off-the-shelf DCS solutions exist and that these can be employed to provide very low-cost operations and low recurring maintenance costs. The primary recommendation is that the DCS design/specification should be integrated within the ground and flight support system design as early as possible to ensure ease of interoperability and efficient allocation of automation functions among the component systems.
Integrated Component-based Data Acquisition Systems for Aerospace Test Facilities
NASA Technical Reports Server (NTRS)
Ross, Richard W.
2001-01-01
The Multi-Instrument Integrated Data Acquisition System (MIIDAS), developed by the NASA Langley Research Center, uses commercial off the shelf (COTS) products, integrated with custom software, to provide a broad range of capabilities at a low cost throughout the system's entire life cycle. MIIDAS combines data acquisition capabilities with online and post-test data reduction computations. COTS products lower purchase and maintenance costs by reducing the level of effort required to meet system requirements. Object-oriented methods are used to enhance modularity, encourage reusability, and to promote adaptability, reducing software development costs. Using only COTS products and custom software supported on multiple platforms reduces the cost of porting the system to other platforms. The post-test data reduction capabilities of MIIDAS have been installed at four aerospace testing facilities at NASA Langley Research Center. The systems installed at these facilities provide a common user interface, reducing the training time required for personnel that work across multiple facilities. The techniques employed by MIIDAS enable NASA to build a system with a lower initial purchase price and reduced sustaining maintenance costs. With MIIDAS, NASA has built a highly flexible next generation data acquisition and reduction system for aerospace test facilities that meets customer expectations.
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for the two orbits are probably different. For this spacecraft tracking and control simulation problem, experimental design methods can also be used to determine the most significant uncertainties. That is, these methods can identify the error sources in the tracking and control problem that most impact the control cost (the output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
The vehicle design evaluation program - A computer-aided design procedure for transport aircraft
NASA Technical Reports Server (NTRS)
Oman, B. H.; Kruse, G. S.; Schrader, O. E.
1977-01-01
The vehicle design evaluation program is described. This program is a computer-aided design procedure that provides a vehicle synthesis capability for vehicle sizing, external load analysis, structural analysis, and cost evaluation. The vehicle sizing subprogram provides geometry, weight, and balance data for aircraft using JP, hydrogen, or methane fuels. The structural synthesis subprogram uses a multistation analysis for aerodynamic surfaces and fuselages to develop theoretical weights and geometric dimensions. The parts definition subprogram uses the geometric data from the structural analysis and develops the predicted fabrication dimensions, parts material raw stock buy requirements, and predicted actual weights. The cost analysis subprogram uses detail part data in conjunction with standard hours, realization factors, labor rates, and material data to develop the manufacturing costs. The program is used to evaluate overall design effects on subsonic commercial type aircraft due to parameter variations.
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a subsequent incremental upgrade that does. Future requirements are: interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low-cost avionics; cost estimation and benefits; computer-aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.
Simulating smokers' acceptance of modifications in a cessation program.
Spoth, R
1992-01-01
Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility rating of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed.
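Conjoint-based choice simulations of this kind commonly use a first-choice rule: each respondent is assigned to the program whose summed feature utilities are highest, and shares are tallied across the sample. The study does not publish its exact simulation rule, so the sketch below is an assumption:

```python
import numpy as np

def first_choice_shares(utilities):
    """Simulated preference shares under the first-choice rule.

    utilities: (n_respondents, n_programs) array of total conjoint
    utilities; each respondent 'chooses' the highest-utility program,
    and each program's share is its fraction of first choices.
    """
    choices = np.argmax(utilities, axis=1)
    return np.bincount(choices, minlength=utilities.shape[1]) / len(utilities)
```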
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
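One step of such a method can be sketched as follows. This is a simplified illustration in the spirit of RES, not the paper's exact update; the step size, regularization constants, and curvature safeguard are illustrative:

```python
import numpy as np

def res_step(x, H, grad_fn, eps=0.05, gamma=1e-3, delta=1e-3):
    """One regularized stochastic BFGS step (simplified illustration).

    grad_fn(x) returns a stochastic gradient; H approximates the
    inverse Hessian. gamma regularizes the descent direction and
    delta biases the secant pair away from singular curvature.
    """
    g = grad_fn(x)
    d = -(H + gamma * np.eye(len(x))) @ g      # regularized direction
    x_new = x + eps * d
    s = x_new - x
    y = grad_fn(x_new) - g - delta * s         # modified secant pair
    sy = s @ y
    if sy > 1e-10:                             # keep H positive definite
        I = np.eye(len(x))
        rho = 1.0 / sy
        H = ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
             + rho * np.outer(s, s))           # standard BFGS inverse update
    return x_new, H
```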
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
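The update step at the heart of such smoothers is compact. The sketch below shows a generic (single-pass) ensemble smoother update; in GPIES the predicted-data matrix would come from cheap GP-surrogate evaluations rather than full model runs, and the matrix names are illustrative:

```python
import numpy as np

def es_update(M, D, d_obs, R):
    """One ensemble-smoother update (generic ES, not GPIES itself).

    M: (n_param, n_ens) parameter realizations
    D: (n_obs, n_ens) predicted data for each realization
    d_obs: (n_obs,) observations; R: observation-error covariance.
    """
    n_ens = M.shape[1]
    Ma = M - M.mean(axis=1, keepdims=True)     # parameter anomalies
    Da = D - D.mean(axis=1, keepdims=True)     # data anomalies
    Cmd = Ma @ Da.T / (n_ens - 1)              # param-data cross covariance
    Cdd = Da @ Da.T / (n_ens - 1)              # data covariance
    # perturb the observations once per ensemble member
    Dobs = d_obs[:, None] + np.random.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    K = Cmd @ np.linalg.inv(Cdd + R)           # Kalman-type gain
    return M + K @ (Dobs - D)
```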
A physical and economic model of the nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Schneider, Erich Alfred
A model of the nuclear fuel cycle that is suitable for use in strategic planning and economic forecasting is presented. The model, to be made available as a stand-alone software package, requires only a small set of fuel cycle and reactor specific input parameters. Critical design criteria include ease of use by nonspecialists, suppression of errors to within a range dictated by unit cost uncertainties, and limitation of runtime to under one minute on a typical desktop computer. Collision probability approximations to the neutron transport equation that lead to a computationally efficient decoupling of the spatial and energy variables are presented and implemented. The energy dependent flux, governed by coupled integral equations, is treated by multigroup or continuous thermalization methods. The model's output includes a comprehensive nuclear materials flowchart that begins with ore requirements, calculates the buildup of 24 actinides as well as fission products, and concludes with spent fuel or reprocessed material composition. The costs, direct and hidden, of the fuel cycle under study are also computed. In addition to direct disposal and plutonium recycling strategies in current use, the model addresses hypothetical cycles. These include cycles chosen for minor actinide burning and for their low weapons-usable content.
NASA Astrophysics Data System (ADS)
Powless, Amy J.; Feekin, Lauren E.; Hutcheson, Joshua A.; Alapat, Daisy V.; Muldoon, Timothy J.
2016-03-01
Point-of-care approaches for 3-part leukocyte differentials (granulocyte, monocyte, and lymphocyte), traditionally performed using a hematology analyzer within a panel of tests called a complete blood count (CBC), are essential not only to reduce cost but to provide faster results in low-resource areas. Recent developments in lab-on-a-chip devices have shown promise in reducing the size and reagents used, translating to a decrease in overall cost. Furthermore, smartphone diagnostic approaches have shown much promise in the area of point-of-care diagnostics, but the relatively high per-unit cost may limit their utility in some settings. We present here a method to reduce the computing cost of a simple epi-fluorescence imaging system by using a Raspberry Pi (single-board computer, <$40) to perform a 3-part leukocyte differential comparable to results from a hematology analyzer. This system uses a USB color camera in conjunction with a leukocyte-selective vital dye (acridine orange) to determine a leukocyte count and differential from a low volume (<20 microliters) of whole blood obtained via fingerstick. Additionally, the system utilizes a "cloud-based" approach to send image data from the Raspberry Pi to a main server and return results back to the user, exporting the bulk of the computational requirements. Six images were acquired per minute with up to 200 cells per field of view. Preliminary results showed that the differential count varied significantly in monocytes with a 1-minute time difference, indicating the importance of time-gating to produce an accurate, consistent differential.
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Wan, Ying
2016-04-01
Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g., construction cost for a well) and variable (e.g., cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed-integer problems have required many groundwater simulations (with different configurations of decision variables), especially when the solution space is multimodal. Heuristic methods like genetic algorithms in particular have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed-integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, owing to a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed-integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open-source codes in Matlab and Python ("pySOT" on Bitbucket).
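The fixed-plus-variable structure that makes this a mixed-integer problem is easy to state in code. The sketch below (names illustrative) shows the cost objective such a surrogate-assisted search would evaluate; in practice each candidate solution also requires an expensive groundwater simulation to check drawdown and transport constraints:

```python
def remediation_cost(fixed_costs, unit_costs, pumping_rates):
    """Fixed well-construction plus variable pumping costs.

    A well incurs its fixed cost only if it actually pumps; this
    implicit binary decision is what makes the problem mixed-integer.
    """
    return sum(f + c * q
               for f, c, q in zip(fixed_costs, unit_costs, pumping_rates)
               if q > 0)

# Two candidate wells, only the first active:
print(remediation_cost([50000, 80000], [2.0, 1.5], [1200.0, 0.0]))
```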
Cost aware cache replacement policy in shared last-level cache for hybrid memory based fog computing
NASA Astrophysics Data System (ADS)
Jia, Gangyong; Han, Guangjie; Wang, Hao; Wang, Feng
2018-04-01
Fog computing requires a large main memory capacity to decrease latency and increase the Quality of Service (QoS). However, dynamic random access memory (DRAM), the commonly used main memory, cannot be included in a fog computing system due to its high power consumption. In recent years, non-volatile memories (NVM) such as Phase-Change Memory (PCM) and Spin-Transfer Torque RAM (STT-RAM), with their low power consumption, have emerged to replace DRAM. Moreover, the recently proposed hybrid main memory, consisting of both DRAM and NVM, has shown promising advantages in terms of scalability and power consumption. However, the drawbacks of NVM, such as long read/write latency, give rise to asymmetric cache miss costs in the hybrid main memory. Current last-level cache (LLC) policies assume a uniform miss cost, and consequently perform poorly and add to the cost of using NVM. In order to minimize the cache miss cost in the hybrid main memory, we propose a cost-aware cache replacement policy (CACRP) that reduces the number of cache misses to NVM and improves cache performance for a hybrid memory system. Experimental results show that our CACRP improves LLC performance by up to 43.6% (15.5% on average) compared to LRU.
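The core idea of weighting recency by miss cost can be sketched simply. The victim-selection rule below is illustrative only, not the paper's exact CACRP heuristic: lines that are cheap to re-fetch and long unused are evicted first, while costly NVM-backed lines are retained longer.

```python
def pick_victim(cache_set):
    """Cost-aware eviction for one cache set (illustrative sketch).

    Each line carries its LRU rank (0 = most recently used) and the
    estimated cost of re-fetching it on a miss (higher for NVM-backed
    data than for DRAM-backed data). Evict the line minimizing cost
    per unit of staleness: cheap-to-refetch, long-unused lines first.
    """
    return min(cache_set,
               key=lambda line: line["miss_cost"] / (1 + line["lru_rank"]))

ways = [{"tag": 0x1a, "lru_rank": 3, "miss_cost": 1.0},   # DRAM-backed, cold
        {"tag": 0x2b, "lru_rank": 1, "miss_cost": 4.0}]   # NVM-backed, warm
print(pick_victim(ways)["tag"])  # evicts 0x1a
```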
Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo
2016-01-01
The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires the use of reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER'S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
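The risk multipliers compared in the study come from the revised NIOSH lifting equation, whose published form is straightforward to compute once the posture variables are extracted from either acquisition system (the frequency and coupling multipliers come from the NIOSH tables and are taken as inputs here):

```python
def niosh_lifting_index(load_kg, H, V, D, A, FM=1.0, CM=1.0):
    """Revised NIOSH lifting equation (standard published form).

    H: horizontal hand distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (degrees);
    FM (frequency) and CM (coupling) come from the NIOSH tables.
    """
    LC = 23.0                                  # load constant, kg
    HM = min(1.0, 25.0 / H)                    # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)           # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / D)              # distance multiplier
    AM = 1.0 - 0.0032 * A                      # asymmetry multiplier
    rwl = LC * HM * VM * DM * AM * FM * CM     # recommended weight limit
    return load_kg / rwl                       # LI > 1 flags elevated risk

# 12 kg lifted from 40 cm to 100 cm, hands 35 cm out, no twisting:
print(niosh_lifting_index(12.0, H=35, V=40, D=60, A=0.0))
```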
NASA Technical Reports Server (NTRS)
Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley
2017-01-01
Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
Personal Computer-less (PC-less) Microcontroller Training Kit
NASA Astrophysics Data System (ADS)
Somantri, Y.; Wahyudin, D.; Fushilat, I.
2018-02-01
A microcontroller training kit is necessary for the practical work of students in electrical engineering education. However, available training kits are not only costly but also fail to meet laboratory requirements. An affordable and portable microcontroller kit could address this problem. This paper explains the design and development of a Personal Computer-less (PC-less) Microcontroller Training Kit. It was developed based on a LattePanda processor with an Arduino microcontroller as the target. The training kit is equipped with advanced input-output interfaces that adopt the concept of a low-cost, low-power system. Preliminary usability testing showed that the device can be used as a tool for microcontroller programming and industrial automation training. By adopting the concept of portability, the device can be operated in rural areas where electricity and computing infrastructure are limited. Furthermore, the training kit is suitable for electrical engineering students at universities and vocational high schools.
Benefits of cloud computing for PACS and archiving.
Koch, Patrick
2012-01-01
The goal of cloud-based services is to provide easy, scalable access to computing resources and IT services. The healthcare industry requires a private cloud that adheres to government mandates designed to ensure privacy and security of patient data while enabling access by authorized users. Cloud-based computing in the imaging market has evolved from a service that provided cost effective disaster recovery for archived data to fully featured PACS and vendor neutral archiving services that can address the needs of healthcare providers of all sizes. Healthcare providers worldwide are now using the cloud to distribute images to remote radiologists while supporting advanced reading tools, deliver radiology reports and imaging studies to referring physicians, and provide redundant data storage. Vendor managed cloud services eliminate large capital investments in equipment and maintenance, as well as staffing for the data center, reducing the total cost of ownership for the healthcare provider.
Laserprinter applications in a medical graphics department.
Lynch, P J
1987-01-01
Our experience with the Apple Macintosh and LaserWriter equipment has convinced us that lasergraphics holds much current and future promise in the creation of line graphics and typography for the biomedical community. Although we continue to use other computer graphics equipment to produce color slides and an occasional pen-plotter graphic, the most rapidly growing segment of our graphics workload is in material well-suited to production on the Macintosh/LaserWriter system. At present our goal is to integrate all of our computer graphics production (color slides, video paint graphics and monochrome print graphics) into a single Macintosh-based system within the next two years. The software and hardware currently available are capable of producing a wide range of science graphics very quickly and inexpensively. The cost-effectiveness, versatility and relatively low initial investment required to install this equipment make it an attractive alternative for cost-recovery departments just entering the field of computer graphics.
Evolutionary Telemetry and Command Processor (TCP) architecture
NASA Technical Reports Server (NTRS)
Schneider, John R.
1992-01-01
A low cost, modular, high performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and for low to high rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high-performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications, thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the 94 SDIO SPAS 3 mission. The architecture enables rapid tailoring of functions, thereby reducing the costs and schedules of development for individual spacecraft missions.
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor in achieving high performance haptic rendering, and it imposes a stringent real-time requirement on the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computation cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. The high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual wall prototype haptic system using the Delta Haptic Device.
Cost and benefits design optimization model for fault tolerant flight control systems
NASA Technical Reports Server (NTRS)
Rose, J.
1982-01-01
Requirements and specifications for a method of optimizing the design of fault-tolerant flight control systems are provided. Algorithms that could be used for developing new and modifying existing computer programs are also provided, with recommendations for follow-on work.
ERIC Educational Resources Information Center
Stanford Univ., CA.
Recognizing the need to balance generality and economy in system costs, the Project INFO team at Stanford University, in developing OASIS, has sought to provide generalized and powerful computer support within the normal range of operating and analytical requirements associated with university administration. The specific design objectives of the OASIS…
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected relative to traditional solvers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yishen; Zhou, Zhi; Liu, Cong
2016-08-01
As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factors and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.
Giesel, F L; Delorme, S; Sibbel, R; Kauczor, H-U; Krix, M
2009-06-01
The aim of the study was to conduct a cost-minimization analysis of contrast-enhanced ultrasound (CEUS) compared to multi-phase computed tomography (M-CT) as the diagnostic standard for diagnosing incidental liver lesions. Different scenarios for a cost-covering realization of CEUS in the ambulant sector of the general health insurance system of Germany were compared to the current cost situation. The absolute savings potential was estimated using different approaches for calculating the incidence of liver lesions which require further characterization. CEUS was the more cost-effective method in all scenarios in which CEUS examinations were performed at specialized centers (122.18-186.53 euro) compared to M-CT (223.19 euro). With about 40,000 relevant liver lesions per year, systematic implementation of CEUS would result in cost savings of 4 million euro per year. However, the scenario of a cost-covering CEUS examination for all physicians who perform liver ultrasound would be the most cost-intensive approach (e.g., 407.87 euro at an average utilization of the ultrasound machine of 25% and a CEUS ratio of 5%). A cost-covering realization of the CEUS method can result in cost savings in the German healthcare system. A centralized approach as proposed by the DEGUM should be targeted.
Metal oxide resistive random access memory based synaptic devices for brain-inspired computing
NASA Astrophysics Data System (ADS)
Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan
2016-04-01
The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to limited processing capability arising from issues such as binary data storage and computing, non-parallel data processing, and the need for buses between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions at a lower cost. To perform such brain-inspired computing with a low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performances of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve the high recognition accuracy and efficiency of neuromorphic systems are presented.
A service brokering and recommendation mechanism for better selecting cloud services.
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
High Available COTS Based Computer for Space
NASA Astrophysics Data System (ADS)
Hartmann, J.; Magistrati, Giorgio
2015-09-01
The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability demands as well as the required increase in data processing power. Alongside these increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of customer requirements, and reuse of available computer systems has not always been possible because of the obsolescence of EEE parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali
A computer implemented method monetizes the security of a cyber-system in terms of the losses each stakeholder may expect if a security breakdown occurs. A non-transitory medium stores instructions for generating a stakes structure that includes the costs each stakeholder of a system would incur if the system failed to meet security requirements, and for generating a requirement structure that includes the probabilities of failing requirements when computer components fail. The system generates a vulnerability model that includes the probabilities of a component failing given that threats materialize, and a perpetrator model that includes the probabilities of threats materializing. The system then generates a dot product of the stakes structure, the requirement structure, the vulnerability model, and the perpetrator model. The system can further be used to compare, contrast, and evaluate alternative courses of action best suited to the stakeholders and their requirements.
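The "dot product" chain described above can be illustrated with a small numeric sketch in the style of mean-failure-cost models; every matrix below is an invented example, not data from the report:

```python
import numpy as np

# Illustrative stakes/requirement/vulnerability/perpetrator structures
# (all numbers assumed). Chaining them yields each stakeholder's
# expected monetary loss per unit time.
ST = np.array([[9000.0, 2000.0],   # stakes: stakeholder x requirement ($ lost on failure)
               [1000.0, 8000.0]])
DP = np.array([[0.3, 0.6],         # P(requirement fails | component fails)
               [0.5, 0.2]])
IM = np.array([[0.4, 0.1],         # P(component fails | threat materializes)
               [0.2, 0.7]])
PT = np.array([0.05, 0.01])        # P(threat materializes)

mean_failure_cost = ST @ DP @ IM @ PT
print(mean_failure_cost)           # expected $ loss for each stakeholder
```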
Numerical Propulsion System Simulation (NPSS) 1999 Industry Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Evans, Austin
2000-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. In addition, the paper contains a summary of the feedback received from industry partners in the development effort and the actions taken over the past year to respond to that feedback. The NPSS development was supported in FY99 by the High Performance Computing and Communications Program.
Multi-level methods and approximating distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically, and therefore estimates must be generated via simulation techniques. There is a well-documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, so approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
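The telescoping structure of the multi-level estimator can be sketched generically; the coupled sampler below is a toy stand-in (a true implementation would couple tau-leap paths at consecutive time-step resolutions):

```python
import numpy as np

# A minimal sketch of the multi-level Monte Carlo telescoping sum
# E[X_L] = E[X_0] + sum_l E[X_l - X_{l-1}]. The "coupled" sampler here is
# a toy whose corrections shrink with level; it is not a tau-leap coupling.
rng = np.random.default_rng(0)

def coupled_sample(level):
    truth = 1.0
    fine = truth + rng.normal(scale=2.0 ** -level)
    coarse = truth + rng.normal(scale=2.0 ** -(level - 1)) if level > 0 else 0.0
    return fine, coarse

def mlmc(max_level, samples_per_level):
    estimate = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        corrections = [f - c for f, c in (coupled_sample(level) for _ in range(n))]
        estimate += np.mean(corrections)   # add this level's correction term
    return estimate

# Cheap low-accuracy levels get many paths; expensive fine levels get few.
print(mlmc(4, [4000, 2000, 1000, 500, 250]))
```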
Concise CIO based precession-nutation formulations
NASA Astrophysics Data System (ADS)
Capitaine, N.; Wallace, P. T.
2008-01-01
Context: The IAU 2000/2006 precession-nutation models have precision goals measured in microarcseconds. To reach this level of performance has required series containing terms at over 1300 frequencies and involving several thousand amplitude coefficients. There are many astronomical applications for which such precision is not required and the associated heavy computations are wasteful. This justifies developing smaller models that achieve adequate precision with greatly reduced computing costs. Aims: We discuss strategies for developing simplified IAU 2000/2006 precession-nutation procedures that offer a range of compromises between accuracy and computing costs. Methods: The chain of transformations linking celestial and terrestrial coordinates comprises frame bias, precession-nutation, Earth rotation and polar motion. We address the bias and precession-nutation (NPB) portion of the chain, linking the Geocentric Celestial Reference System (GCRS) with the Celestial Intermediate Reference System (CIRS), the latter based on the Celestial Intermediate Pole (CIP) and Celestial Intermediate Origin (CIO). Starting from direct series that deliver the CIP coordinates X,Y and (via the quantity s + XY/2) the CIO locator s, we look at the opportunities for simplification. Results: The biggest reductions come from truncating the series, but some additional gains can be made in the areas of the matrix formulation, the expressions for the nutation arguments and by subsuming long period effects into the bias quantities. Three example models are demonstrated that approximate the IAU 2000/2006 CIP to accuracies of 1 mas, 16 mas and 0.4 arcsec throughout 1995-2050 but with computation costs reduced by 1, 2 and 3 orders of magnitude compared with the full model. Appendices A to G are only available in electronic form at http://www.aanda.org
Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds
NASA Technical Reports Server (NTRS)
Boppe, Charles W.
1987-01-01
A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides a high ratio of boundary-point density to cost, so high resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.
Battlefield awareness computers: the engine of battlefield digitization
NASA Astrophysics Data System (ADS)
Ho, Jackson; Chamseddine, Ahmad
1997-06-01
To modernize the army for the 21st century, the U.S. Army Digitization Office (ADO) initiated in 1995 the Force XXI Battle Command Brigade-and-Below (FBCB2) Applique program, which became a centerpiece in the U.S. Army's master plan to win future information wars. The Applique team led by TRW fielded a 'tactical Internet' for brigade-and-below command to demonstrate the advantages of 'shared situation awareness' and battlefield digitization in advanced war-fighting experiments (AWE) to be conducted in March 1997 at the Army's National Training Center in California. Computing Devices is designated the primary hardware developer for the militarized version of the battlefield awareness computers. The first generation of militarized battlefield awareness computer, designated the V3 computer, was an integration of off-the-shelf components developed to meet the aggressive delivery requirements of the Task Force XXI AWE. The design efficiency and cost effectiveness of the computer hardware were secondary in importance to the delivery deadlines imposed by the March 1997 AWE. However, declining defense budgets will impose cost constraints on the Force XXI production hardware that can only be met by rigorous value engineering to further improve design optimization for battlefield awareness without compromising the level of reliability the military has come to expect in modern military-hardened vetronics. To answer the Army's needs for a more cost effective computing solution, Computing Devices developed a second generation 'combat ready' battlefield awareness computer, designated the V3+, which is designed specifically to meet the upcoming demands of Force XXI (FBCB2) and beyond. The primary design objective is to achieve a technologically superior design, value engineered to strike an optimal balance between reliability, life cycle cost, and procurement cost. Recognizing that the diverse digitization demands of Force XXI cannot be adequately met by any one computer hardware solution, Computing Devices is planning to develop a notebook-sized military computer designed for space-limited vehicle-mounted applications, as well as a high-performance portable workstation equipped with a 19-inch, full-color, ultra-high-resolution, high-brightness active matrix liquid crystal display (AMLCD) targeting command post and tactical operations center (TOC) applications. Together with the wearable computers Computing Devices developed at the Minneapolis facility for dismounted soldiers, Computing Devices will have a complete suite of interoperable battlefield awareness computers spanning the entire spectrum of battle digitization operating environments. Although this paper's primary focus is on the second generation 'combat ready' battlefield awareness computer, the V3+, it also briefly discusses the extension of the V3+ architecture to address the needs of the embedded and command post applications.
Operating Dedicated Data Centers - Is It Cost-Effective?
NASA Astrophysics Data System (ADS)
Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.
2014-06-01
The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.
NASA Technical Reports Server (NTRS)
Heath, Bruce E.
2007-01-01
One result of the relatively recent advances in computing technology has been the decreasing cost of computers and increasing computational power. This has allowed high fidelity airplane simulations to be run on personal computers (PC). Thus, simulators are now used routinely by pilots to substitute simulated flight hours for real flight hours in training for an aircraft type rating, thereby reducing the cost of flight training. However, FAA regulations require that such substitution training be supervised by Certified Flight Instructors (CFI). If the CFI presence could be reduced or eliminated for certain tasks, this would mean a further cost savings to the pilot. This would require that the flight simulator have a certain level of 'intelligence' in order to provide feedback on pilot performance similar to that of a CFI. The 'intelligent' flight simulator would have at least the capability to use data gathered from the flight to create a measure of the performance of the student pilot. Also, to fully utilize the advances in computational power, the simulator would be capable of interacting with the student pilot using the best possible training interventions. This thesis reports on two studies conducted at Tuskegee University investigating the effects of interventions on the learning of two flight maneuvers on a flight simulator, and the robustness and accuracy of calculated performance indices as compared to CFI evaluations of performance. The intent of these studies is to take a step in the direction of creating an 'intelligent' flight simulator. The first study deals with comparisons of the performance of novice pilots trained at different levels of above-real-time simulation to execute a level S-turn. The second study examined the effect of out-of-the-window (OTW) visual cues in the form of hoops on the performance of novice pilots learning to fly a landing approach on the flight simulator. The reliability/robustness of the computed performance metrics was assessed by comparing them with the evaluations of the landing approach maneuver by a number of CFIs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
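A minimal sketch of the prediction-correction idea follows, on an artificial time-varying quadratic cost (the paper's actual algorithm and benchmarks are not reproduced here); the prediction step solves the Hessian system with a few gradient iterations instead of an inverse:

```python
import numpy as np

# Track x*(t) = argmin f(x; t) with f(x; t) = 0.5*||x - r(t)||^2 (assumed).
def r(t):                              # moving optimum
    return np.array([np.cos(t), np.sin(t)])

h, alpha, inner_steps = 0.1, 0.5, 3    # sampling period, step size, inner iterations
x = np.zeros(2)
for k in range(100):
    t = k * h
    # Prediction: approximately solve H @ delta = -h * grad_tx f by gradient
    # descent on 0.5*delta'H delta + b'delta; here H = I and
    # h * grad_tx f = -(r(t+h) - r(t)) for this cost.
    b = -(r(t + h) - r(t))
    delta = np.zeros(2)
    for _ in range(inner_steps):
        delta -= alpha * (delta + b)
    x = x + delta
    # Correction: one gradient step on the newly revealed cost f(.; t+h).
    x = x - alpha * (x - r(t + h))
print(np.round(x - r(100 * h), 3))     # tracking error stays small
```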
NASA Technical Reports Server (NTRS)
Smith, R. L.; Lyubomirsky, A. S.
1981-01-01
Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
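The per-cell Chebyshev idea can be sketched with numpy's tensor-product Chebyshev tools; the "gravity" component below is a stand-in function, and the cell is assumed mapped to [-1, 1]^3:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def force_component(x, y, z):          # stand-in for the expensive model
    r2 = x**2 + y**2 + z**2 + 3.0
    return -x / r2**1.5

deg = 6                                # per-axis degree (assumed)
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # Chebyshev nodes
X, Y, Z = np.meshgrid(nodes, nodes, nodes, indexing="ij")
coef = force_component(X, Y, Z)

# Separable fit: exact-degree 1-D Chebyshev fits along each axis in turn
# convert the sampled values into a 3-D coefficient tensor for this cell.
for axis in range(3):
    coef = np.apply_along_axis(lambda v: C.chebfit(nodes, v, deg), axis, coef)

# Cheap in-cell evaluation replaces the full model between cell changes.
print(C.chebval3d(0.2, -0.4, 0.7, coef), force_component(0.2, -0.4, 0.7))
```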
NASA Astrophysics Data System (ADS)
Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip
2017-10-01
In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
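The fixed-order truncation is the heart of the method; a generic numpy sketch (with random stand-ins for the dipole interaction matrix and permanent field, not the authors' production implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
A = rng.standard_normal((n, n))
T = A @ A.T + n * np.eye(n)        # SPD stand-in for the polarization matrix
E = rng.standard_normal(n)         # stand-in for the permanent field

def tcg(T, E, order):
    """Conjugate gradient truncated at a fixed order: constant cost per call."""
    mu = np.zeros_like(E)          # induced dipoles, zero initial guess
    r = E - T @ mu
    p = r.copy()
    for _ in range(order):         # e.g., order=2 corresponds to TCG-2
        Tp = T @ p
        alpha = (r @ r) / (p @ Tp)
        mu += alpha * p
        r_new = r - alpha * Tp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return mu

mu2 = tcg(T, E, order=2)
print(np.linalg.norm(T @ mu2 - E) / np.linalg.norm(E))   # residual after TCG-2
```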
A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.
Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang
2016-12-07
The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexity is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented as an Application-Specific Integrated Circuit (ASIC) in 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
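The peak-search feature itself is simple enough to sketch in a few lines (synthetic waveform; the hardware pipeline described above computes the same quantities with segment-parallel circuits):

```python
import numpy as np

def peak_area_features(spike):
    peak = int(np.argmax(np.abs(spike)))           # peak search
    left_area = np.sum(np.abs(spike[:peak + 1]))   # area of the portion up to the peak
    right_area = np.sum(np.abs(spike[peak + 1:]))  # area of the portion after the peak
    return left_area, right_area

t = np.linspace(0.0, 1.0, 64)
spike = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2)
print(peak_area_features(spike))                   # 2-D feature for clustering
```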
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
DORCA 2 computer program. Volume 2: Programmer's manual
NASA Technical Reports Server (NTRS)
Gold, B. J.
1972-01-01
A guide for coding the Dynamic Operational Requirements and Cost Analysis Program (DORCA 2) is presented. The manual provides a detailed description of the operation of every subroutine, the layout in core of the major matrices and arrays, and the meaning of all program values. Flow charts are included.
Code of Federal Regulations, 2011 CFR
2011-10-01
... provided in paragraph (d) below, the contracting officer shall charge interest on the daily unliquidated... computed at the end of each month on the daily unliquidated balance of advance payments at the applicable daily interest rate. (c) Interest shall be required on contracts that are for acquisition, at cost, of...
Code of Federal Regulations, 2010 CFR
2010-10-01
... provided in paragraph (d) below, the contracting officer shall charge interest on the daily unliquidated... computed at the end of each month on the daily unliquidated balance of advance payments at the applicable daily interest rate. (c) Interest shall be required on contracts that are for acquisition, at cost, of...
What We've Learned about Assessing Hands-On Science.
ERIC Educational Resources Information Center
Shavelson, Richard J.; Baxter, Gail P.
1992-01-01
A recent study compared hands-on scientific inquiry assessment to assessments involving lab notebooks, computer simulations, short-answer paper-and-pencil problems, and multiple-choice questions. Creating high quality performance assessments is a costly, time-consuming process requiring considerable scientific and technological know-how. Improved…
42 CFR 495.106 - Incentive payments to CAHs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... PROGRAM Requirements Specific to the Medicare Program § 495.106 Incentive payments to CAHs. (a... computers and associated hardware and software, necessary to administer certified EHR technology as defined... determining if a CAH is a qualifying CAH under this section; (3) Specification of EHR reporting periods, cost...
42 CFR 495.106 - Incentive payments to CAHs.
Code of Federal Regulations, 2013 CFR
2013-10-01
... PROGRAM Requirements Specific to the Medicare Program § 495.106 Incentive payments to CAHs. (a... computers and associated hardware and software, necessary to administer certified EHR technology as defined... determining if a CAH is a qualifying CAH under this section; (3) Specification of EHR reporting periods, cost...
Optimal regulatory strategies for metabolic pathways in Escherichia coli depending on protein costs
Wessely, Frank; Bartl, Martin; Guthke, Reinhard; Li, Pu; Schuster, Stefan; Kaleta, Christoph
2011-01-01
While previous studies have shed light on the link between the structure of metabolism and its transcriptional regulation, the extent to which transcriptional regulation controls metabolism has not yet been fully explored. In this work, we address this problem by integrating a large number of experimental data sets with a model of the metabolism of Escherichia coli. Using a combination of computational tools including the concept of elementary flux patterns, methods from network inference and dynamic optimization, we find that the transcriptional regulation of pathways reflects the protein investment into these pathways. While pathways that are associated with a high protein cost are controlled by fine-tuned transcriptional programs, pathways that require only a small protein cost are transcriptionally controlled in a few key reactions. As a reason for the occurrence of these different regulatory strategies, we identify an evolutionary trade-off between the conflicting requirements of reducing protein investment and of responding rapidly to changes in environmental conditions. PMID:21772263
Benefits and costs of low thrust propulsion systems
NASA Technical Reports Server (NTRS)
Robertson, R. I.; Rose, L. J.; Maloy, J. E.
1983-01-01
The results of cost/benefit analyses of three chemical propulsion systems that are candidates for transferring high density, low volume STS payloads from LEO to GEO are reported. Separate algorithms were developed for the benefits and costs of primary propulsion systems (PPS) as functions of the required thrust levels. The life cycle costs of each system were computed based on the developmental, production, and deployment costs. A weighted criteria rating approach was taken for the benefits, with each benefit assigned a value commensurate with its relative worth to the overall system. Support costs were included in the cost modeling. Reference missions from NASA, commercial, and DoD catalog payloads were examined. The program was concluded to be reliable and flexible for evaluating the benefits and costs of launch and orbit transfer for any catalog mission, with the most beneficial PPS being a dedicated low thrust configuration using the RL-10 system.
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
NASA Technical Reports Server (NTRS)
Semenov, Boris V.; Acton, Charles H., Jr.; Bachman, Nathaniel J.; Elson, Lee S.; Wright, Edward D.
2005-01-01
The SPICE system of navigation and ancillary data possesses a number of traits that make its use in modern space missions of all types highly cost efficient. The core of the system is a software library providing API interfaces for storing and retrieving such data as trajectories, orientations, time conversions, and instrument geometry parameters. Applications used at any stage of a mission life cycle can call SPICE APIs to access this data and compute geometric quantities required for observation planning, engineering assessment and science data analysis. SPICE is implemented in three different languages, supported on 20+ computer environments, and distributed with complete source code and documentation. It includes capabilities that are extensively tested by everyday use in many active projects and are applicable to all types of space missions - flyby, orbiters, observatories, landers and rovers. While a customer's initial SPICE adaptation for the first mission or experiment requires a modest effort, this initial effort pays off because adaptation for subsequent missions/experiments is just a small fraction of the initial investment, with the majority of tools based on SPICE requiring no or very minor changes.
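A typical SPICE call sequence, shown here through the SpiceyPy wrapper, illustrates the API style; the meta-kernel name is a placeholder, and real mission kernels must be loaded for the calls to succeed:

```python
import spiceypy as spice

spice.furnsh("mission_meta_kernel.tm")    # placeholder meta-kernel listing SPK/LSK kernels

et = spice.str2et("2005-06-01T12:00:00")  # UTC string -> ephemeris time (TDB seconds)
# State of Mars as seen from Earth in the J2000 frame, corrected for
# light time and stellar aberration.
state, light_time = spice.spkezr("MARS", et, "J2000", "LT+S", "EARTH")
print(state[:3], light_time)              # position (km) and one-way light time (s)

spice.unload("mission_meta_kernel.tm")
```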
Efficiency Evaluation of Handling of Geologic-Geophysical Information by Means of Computer Systems
NASA Astrophysics Data System (ADS)
Nuriyahmetova, S. M.; Demyanova, O. V.; Zabirova, L. M.; Gataullin, I. I.; Fathutdinova, O. A.; Kaptelinina, E. A.
2018-05-01
Development of oil and gas resources under difficult geological, geographical, and economic conditions requires considerable financial outlays; careful justification of these costs, and the application of the most promising directions and modern technologies, are therefore necessary from the standpoint of the cost efficiency of planned activities. To ensure high precision of regional and local forecasts and of the modeling of hydrocarbon reservoirs, it is necessary to analyze huge arrays of spatially distributed, constantly changing information. Solving this task requires modern remote methods for investigating prospective oil-and-gas territories, the combined use of remote, non-destructive geologic-geophysical and space-based Earth-sounding data, and the most advanced technologies for processing them. In this article, the authors review the experience of Russian and foreign companies in processing geologic-geophysical information by means of computer systems. The authors conclude that multidimensional analysis of the geologic-geophysical information space and effective planning and monitoring of exploration work require the broad use of geoinformation technologies, one of the most promising directions for achieving high profitability in the oil and gas industry.
NASA Technical Reports Server (NTRS)
Schoen, A. H.; Rosenstein, H.; Stanzione, K.; Wisniewski, J. S.
1980-01-01
This report describes the use of the V/STOL Aircraft Sizing and Performance Computer Program (VASCOMP II). The program is useful in performing aircraft parametric studies in a quick and cost-efficient manner. Problem formulation and data development were performed by the Boeing Vertol Company and reflect present preliminary design technology. The computer program, written in FORTRAN IV, has a broad range of input parameters to enable investigation of a wide variety of aircraft. User-oriented features of the program include minimized input requirements, diagnostic capabilities, and various options for program flexibility.
Near-term hybrid vehicle program, phase 1
NASA Technical Reports Server (NTRS)
1979-01-01
The preliminary design of a hybrid vehicle which fully meets or exceeds the requirements set forth in the Near Term Hybrid Vehicle Program is documented. Topics addressed include the general layout and styling, the power train specifications with discussion of each major component, vehicle weight and weight breakdown, vehicle performance, measures of energy consumption, and initial cost and ownership cost. Alternative design options considered and their relationship to the design adopted, computer simulation used, and maintenance and reliability considerations are also discussed.
Net present value analysis: appropriate for public utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, W.N. III
1980-08-28
The net-present-value technique widely used by unregulated companies for capital budgeting can also apply to regulated public utilities. Used to decide whether an investment is worthwhile, the NPV technique discounts an investment's initial outlay or cost. The type of project most appropriate for an NPV analysis is that designed to lower costs. Efficiency-improving investments can be adequately evaluated by the NPV method, which in certain cases is easier to use than some of the more complicated revenue-requirement computer models.
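For a cost-lowering investment, the computation reduces to discounting each year's savings and netting out the initial outlay; the figures below are purely illustrative:

```python
def npv(rate, initial_outlay, cash_flows):
    """Net present value: discounted future cash flows minus the initial outlay."""
    return -initial_outlay + sum(
        cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

annual_savings = [30_000] * 10              # ten years of assumed O&M savings
print(npv(0.08, 150_000, annual_savings))   # positive value -> investment is worthwhile
```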
Optimal dynamic remapping of parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Reynolds, Paul F., Jr.
1987-01-01
A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem of remapping workload to processors in a parallel computation is considered for the case in which the utility of remapping and the future behavior of the workload are uncertain: phases exhibit stable execution requirements, but requirements may change radically between phases. For such problems a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost is addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
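One way to picture a gain-free heuristic of this kind is a threshold rule that remaps once the degradation observed since the last remapping outweighs the known remapping delay; the simulation below is a toy illustration under invented costs, not the paper's policy:

```python
import random

random.seed(2)
REMAP_DELAY = 50.0                  # cost of one remapping (time units, assumed)

def step_cost(balanced):
    # Per-step cost rises once the mapping no longer fits the current phase.
    return 1.0 if balanced else random.uniform(1.5, 3.0)

balanced, excess, total = True, 0.0, 0.0
for step in range(1000):
    if step == 300:                 # hidden phase change degrades the mapping
        balanced = False
    c = step_cost(balanced)
    total += c
    excess += c - 1.0               # degradation observed since last remapping
    if not balanced and excess > REMAP_DELAY:
        total += REMAP_DELAY        # pay the delay, restore a balanced mapping
        balanced, excess = True, 0.0
print(total)
```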
NASA Technical Reports Server (NTRS)
Davis, S. J.; Rosenstein, H.
1975-01-01
The Comprehensive Airship Sizing and Performance Computer Program (CASCOMP), developed and used in the design and evaluation of advanced lighter-than-air (LTA) craft, is described. The program defines design details such as engine size and number, component weight buildups, required power, and the physical dimensions of airships that are designed to meet specified mission requirements. The program is used in a comparative parametric evaluation of six advanced lighter-than-air concepts. The results indicate that fully buoyant conventional airships have the lightest required gross lift when designed for speeds less than 100 knots, and the partially buoyant concepts are superior above 100 knots. When compared on the basis of specific productivity, which is a measure of direct operating cost, the partially buoyant lifting body/tilting prop-rotor concept is optimum.
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
Planning Inmarsat's second generation of spacecraft
NASA Astrophysics Data System (ADS)
Williams, W. P.
1982-09-01
Studies for the next generation of the Inmarsat service are outlined, including traffic forecasting studies, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting will require future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc.), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.
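For reference, the Erlang-B sizing step mentioned above can be sketched as follows; the recursion for the blocking probability is standard, while the traffic load and grade-of-service target below are illustrative:

    # Erlang-B blocking probability via the standard recursion B(E,0)=1,
    # B(E,m) = E*B(E,m-1) / (m + E*B(E,m-1)).
    def erlang_b(traffic_erlangs, channels):
        b = 1.0
        for m in range(1, channels + 1):
            b = traffic_erlangs * b / (m + traffic_erlangs * b)
        return b

    def channels_required(traffic_erlangs, blocking_target=0.01):
        m = 1
        while erlang_b(traffic_erlangs, m) > blocking_target:
            m += 1
        return m

    print(channels_required(20.0))   # 20 erlangs of peak-hour traffic at 1% blocking -> about 30 channels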
Iannaccone, Reto; Brem, Silvia; Walitza, Susanne
2017-01-01
Patients with obsessive-compulsive disorder (OCD) can be described as cautious and hesitant, manifesting an excessive indecisiveness that hinders efficient decision making. However, excess caution in decision making may also lead to better performance in specific situations where the cost of extended deliberation is small. We compared 16 juvenile OCD patients with 16 matched healthy controls whilst they performed a sequential information gathering task under different external cost conditions. We found that patients with OCD outperformed healthy controls, winning significantly more points. The groups also differed in the number of draws required prior to committing to a decision, but not in decision accuracy. A novel Bayesian computational model revealed that subjective sampling costs arose as a non-linear function of sampling, closely resembling an escalating urgency signal. Group difference in performance was best explained by a later emergence of these subjective costs in the OCD group, also evident in an increased decision threshold. Our findings present a novel computational model and suggest that enhanced information gathering in OCD can be accounted for by a higher decision threshold arising out of an altered perception of costs that, in some specific contexts, may be advantageous. PMID:28403139
NASA Technical Reports Server (NTRS)
Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)
1983-01-01
Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.
34 CFR 648.62 - How can the institutional payment be used?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., or supplies required of students in the same course of study. (2) Costs of computer hardware, project specific software, and other equipment prorated by the length of the student's fellowship over the reasonable life of the equipment. (3) Membership fees of professional associations. (4) Travel and per diem...
Wireless Technology in the Library: The RIT Experience: Overview of the Project.
ERIC Educational Resources Information Center
Pitkin, Pat
2001-01-01
Provides an overview of a project at RIT (Rochester Institute of Technology) that experimented with wireless technology, including laptop computers that circulate within the library building. Discusses project requirements, including ease of use, low maintenance, and low cost; motivation, including mobility; implementation; and benefits to the…
Structural performance analysis and redesign
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1978-01-01
Program performs stress, buckling, and vibrational analysis of large, linear, finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through use of sparse matrix solution techniques and other computational and data management procedures designed for problems of very large size.
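As an illustration only (the matrix below is a simple stand-in, not the program's data structures), a sparse factorization is what keeps cost and storage reasonable at this scale:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    n = 50_000                                    # degrees of freedom
    # Symmetric, banded stiffness-like matrix as a stand-in for an assembled FE system.
    K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    f = np.ones(n)                                # load vector
    u = spsolve(K, f)                             # sparse solve instead of a dense factorization
    print(u.shape, K.nnz)                         # ~150k stored nonzeros instead of 2.5e9 dense entries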
13 CFR 107.1830 - Licensee's Capital Impairment-definition and general requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 301(c) Licensees If the percentage of equity capital investments (at cost) in your portfolio is: And... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Licensee's Capital Impairment... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES Licensee's Noncompliance With Terms of Leverage Computation of...
5 CFR 831.702 - Adjustment of annuities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Adjustment of annuities. 831.702 Section... (CONTINUED) RETIREMENT Computation of Annuities § 831.702 Adjustment of annuities. (a)(1) An annuity which... benefit shall require a corresponding deduction in the civil service annuity. (3) Any cost-of-living...
Common Sense Wordworking III: Desktop Publishing and Desktop Typesetting.
ERIC Educational Resources Information Center
Crawford, Walt
1987-01-01
Describes current desktop publishing packages available for microcomputers and discusses the disadvantages, especially in cost, for most personal computer users. Also described is a less expensive alternative technology--desktop typesetting--which meets the requirements of users who do not need elaborate techniques for combining text and graphics.…
49 CFR 18.42 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... similar accounting computations of the rate at which a particular group of costs is chargeable (such as... and its supporting records starts from the end of the fiscal year (or other accounting period) covered... pertinent books, documents, papers, or other records of grantees and subgrantees which are pertinent to the...
Selected Economic Translations on Eastern Europe (165th in the series)
1960-04-18
requires the clarification of many other problems (cost computation, the determination of domestic prices, foreign exchange rates, etc.) as well... expressed in clearing rubles are derived to a considerable extent from the capitalist world market prices, because the foreign exchange rates of...
Marsalek, Ondrej; Markland, Thomas E
2016-02-07
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
A lightweight distributed framework for computational offloading in mobile cloud computing.
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
Lam, Stephen; Tammemagi, Martin C.; Evans, William K.; Leighl, Natasha B.; Regier, Dean A.; Bolbocean, Corneliu; Shepherd, Frances A.; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R.; Mayo, John R.; McWilliams, Annette; Couture, Christian; English, John C.; Goffin, John; Hwang, David M.; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J.; Goss, Glenwood D.; Nicholas, Garth; Seely, Jean M.; Sekhon, Harmanjatinder S.; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N.; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D.; Tan, Wan C.; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J.
2014-01-01
Background: It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Methods: Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer’s perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. Results: The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400–$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553–$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254–$52,200; p = 0.061). Conclusion: In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure. PMID:25105438
NASA Technical Reports Server (NTRS)
Heath, Bruce E.; Khan, M. Javed; Rossi, Marcia; Ali, Syed Firasat
2005-01-01
The rising cost of flight training and the low cost of powerful computers have resulted in increasing use of PC-based flight simulators. This has prompted FAA standards regulating such use and allowing aspects of training on simulators meeting these standards to be substituted for flight time. However, the FAA regulations require an authorized flight instructor as part of the training environment. Thus, while costs associated with flight time have been reduced, the cost associated with the need for a flight instructor still remains. The obvious area of research, therefore, has been to develop intelligent simulators. However, the two main challenges of such attempts have been training strategies and assessment. The research reported in this paper was conducted to evaluate various performance metrics of a straight-in landing approach by 33 novice pilots flying a light single engine aircraft simulation. These metrics were compared to assessments of these flights by two flight instructors to establish a correlation between the two techniques in an attempt to determine a composite performance metric for this flight maneuver.
Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization
NASA Technical Reports Server (NTRS)
Baker, Robert L.
1993-01-01
The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant or fault tolerance characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost reduction and performance capabilities and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture and qualitative assessments of testability, maintainability, and fault tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried under NASA Langley Research Contract NAS1-17964, Task Assignment 28.
Exploring the use of I/O nodes for computation in a MIMD multiprocessor
NASA Technical Reports Server (NTRS)
Kotz, David; Cai, Ting
1995-01-01
As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.
High Performance Implementation of 3D Convolutional Neural Networks on a GPU.
Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie
2017-01-01
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
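A minimal 1D sketch of the Winograd minimal-filtering building block F(2,3); the paper applies the nested 2D/3D form, but the benefit is the same: two convolution outputs from 4 multiplications instead of 6, with no extra memory needed for transforms of this size:

    def winograd_f23(d, g):
        """d: 4 input values, g: 3 filter taps -> 2 outputs of the valid correlation."""
        m1 = (d[0] - d[2]) * g[0]
        m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
        m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
        m4 = (d[1] - d[3]) * g[2]
        return [m1 + m2 + m3, m2 - m3 - m4]

    d, g = [1.0, 2.0, 3.0, 4.0], [0.5, 1.0, -1.0]
    direct = [d[0]*g[0] + d[1]*g[1] + d[2]*g[2], d[1]*g[0] + d[2]*g[1] + d[3]*g[2]]
    print(winograd_f23(d, g), direct)   # both give [-0.5, 0.0]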
Unmanned Aerial Vehicles unique cost estimating requirements
NASA Astrophysics Data System (ADS)
Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.
Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There are a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g., pipelines, power lines and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration), or are "too dangerous, dull or dirty" for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV systems total ownership cost including hardware components, software design, and operations. The challenge of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element is discussed. The autonomous operation of UAVs is especially challenging from a software perspective.
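As a hypothetical example of the kind of cost estimating relationship such a framework fits per WBS element (the power-law form is a common choice; the coefficients below are invented for illustration and not taken from the paper):

    # Hypothetical weight-based CER: cost = a * W^b, with a and b fitted from historical data.
    def airframe_cost(empty_weight_lb, a=12_000.0, b=0.85):
        return a * empty_weight_lb ** b

    for w in (100, 500, 2000):          # small, medium, and large airframes
        print(w, round(airframe_cost(w)))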
Neural Substrates Underlying Effort, Time, and Risk-Based Decision Making in Motivated Behavior
Bailey, Matthew R.; Simpson, Eleanor H.; Balsam, Peter D
2016-01-01
All mobile organisms rely on adaptive motivated behavior to overcome the challenges of living in an environment in which essential resources may be limited. A variety of influences, ranging from an organism's environment and experiential history to its physiological state, shape a cost-benefit analysis which allows motivation to energize behavior and direct it toward specific goals. Here we review the substantial amount of research aimed at discovering the interconnected neural circuits which allow organisms to carry out the cost-benefit computations which allow them to behave in adaptive ways. We specifically focus on how the brain deals with different types of costs, including effort requirements, delays to reward, and payoff riskiness. An examination of this broad literature highlights the importance of the extended neural circuits which enable organisms to make decisions about these different types of costs. This involves cortical structures, including the Anterior Cingulate Cortex (ACC), the Orbital Frontal Cortex (OFC), the Infralimbic Cortex (IL), and the Prelimbic Cortex (PL), as well as the Basolateral Amygdala (BLA), the Nucleus Accumbens (NAcc), the Ventral Pallidum (VP), and the Subthalamic Nucleus (STN), among others. Some regions are involved in multiple aspects of cost-benefit computations while the involvement of other regions is restricted to information relating to specific types of costs. PMID:27427327
2007-06-01
...management issues he encountered ruled out the Expanion as a viable option for thin-client computing in the Navy. An improvement in thin-client...
A Parallel Trade Study Architecture for Design Optimization of Complex Systems
NASA Technical Reports Server (NTRS)
Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.
Application of computer-aided dispatch in law enforcement: An introductory planning guide
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Gurfield, R. M.; Garcia, E. A.; Fielding, J. E.
1975-01-01
A set of planning guidelines for the application of computer-aided dispatching (CAD) to law enforcement is presented. Some essential characteristics and applications of CAD are outlined; the results of a survey of systems in the operational or planning phases are summarized. Requirements analysis, system concept design, implementation planning, and performance and cost modeling are described and demonstrated with numerous examples. Detailed descriptions of typical law enforcement CAD systems, and a list of vendor sources, are given in appendixes.
NASA Astrophysics Data System (ADS)
Bouchpan-Lerust-Juéry, L.
2007-08-01
Current and next generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy...) which must meet high standards of Reliability and Predictability as well as Safety. Meeting all these requirements demands a considerable amount of effort and cost from the space software industry. This paper, in a first part, presents a free Open Source integrated solution to develop RTAI applications from analysis, design, simulation and direct implementation using code generation based on Open Source, and in its second part summarises this suggested approach, its results and the conclusion for further work.
Basic materials and structures aspects for hypersonic transport vehicles (HTV)
NASA Astrophysics Data System (ADS)
Steinheil, E.; Uhse, W.
A Mach 5 transport design is used to illustrate structural concepts and criteria for materials selections and also key technologies that must be followed in the areas of computational methods, materials and construction methods. Aside from the primary criteria of low weight, low costs, and conceivable risks, a number of additional requirements must be met, including stiffness and strength, corrosion resistance, durability, and a construction adequate for inspection, maintenance and repair. Current aircraft construction requirements are significantly extended for hypersonic vehicles. Additional consideration is given to long-duration temperature resistance of the airframe structure, the integration of large-volume cryogenic fuel tanks, computational tools, structural design, polymer matrix composites, and advanced manufacturing technologies.
Extreme-scale Algorithms and Solver Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack
A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.
NASA Astrophysics Data System (ADS)
Cui, Yank; Kobara, Kazukuni; Matsuura, Kanta; Imai, Hideki
As pervasive computing technologies develop rapidly, privacy protection becomes a crucial issue and needs to be coped with very carefully. Typically, it is difficult to efficiently identify and manage plenty of the low-cost pervasive devices like Radio Frequency Identification Devices (RFID) without leaking any privacy information. In particular, the attacker may not only eavesdrop on the communication in a passive way, but also mount an active attack to ask queries adaptively, which is obviously more dangerous. Towards settling this problem, in this paper, we propose two lightweight authentication protocols which are privacy-preserving against active attack, in an asymmetric way. That asymmetric style with privacy-oriented simplification succeeds in reducing the load of low-cost devices and drastically decreasing the computation cost for the management of the server. This is because, unlike the usual management of identities, our approach does not require any synchronization nor exhaustive search in the database, which is a great convenience in the case of a large-scale system. The protocols are based on a fast asymmetric encryption with specialized simplification and only one cryptographic hash function, which consequently assigns an easy work to pervasive devices. Besides, our results do not require the strong assumption of the random oracle.
Distributed Finite Element Analysis Using a Transputer Network
NASA Technical Reports Server (NTRS)
Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy
1989-01-01
The principal objective of this research effort was to demonstrate the extraordinarily cost effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.
NASA Technical Reports Server (NTRS)
Haakensen, Erik Edward
1998-01-01
The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant service. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information about fault scenarios from which Chameleon cannot recover was gained. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple failure scenarios.
Computer mapping of LANDSAT data for environmental applications
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Mckeon, J. B.; Reed, L. E.; Schmidt, N. F.; Schecter, R. N.
1975-01-01
The author has identified the following significant results. Land cover overlays and maps produced from LANDSAT are providing information on existing land use and resources throughout the 208 study area. The overlays are being used to delineate drainage areas of a predominant land cover type. Information on cover type is also being combined with other pertinent data to develop estimates of sediment and nutrient flows from the drainage area. The LANDSAT inventory of present land cover, together with population projections, is providing a basis for developing maps of anticipated land use patterns required to evaluate the impact on water quality which may result from these patterns. Overlays of forest types were useful for defining wildlife habitat and vegetational resources in the region. LANDSAT data and computer assisted interpretation were found to be a rapid, cost-effective procedure for inventorying land cover on a regional basis. The entire 208 inventory, which included acquisition of ground truth, LANDSAT tapes, computer processing, and production of overlays and coded tapes, was completed within a period of 2 months at a cost of about 0.6 cents per acre, a significant improvement in time and cost over conventional photointerpretation and mapping techniques.
NASA Astrophysics Data System (ADS)
Pitton, Giuseppe; Quaini, Annalisa; Rozza, Gianluigi
2017-09-01
We focus on reducing the computational costs associated with the hydrodynamic stability of solutions of the incompressible Navier-Stokes equations for a Newtonian and viscous fluid in contraction-expansion channels. In particular, we are interested in studying steady bifurcations, occurring when non-unique stable solutions appear as physical and/or geometric control parameters are varied. The formulation of the stability problem requires solving an eigenvalue problem for a partial differential operator. An alternative to this approach is the direct simulation of the flow to characterize the asymptotic behavior of the solution. Both approaches can be extremely expensive in terms of computational time. We propose to apply Reduced Order Modeling (ROM) techniques to reduce the demanding computational costs associated with the detection of a type of steady bifurcations in fluid dynamics. The application that motivated the present study is the onset of asymmetries (i.e., symmetry breaking bifurcation) in blood flow through a regurgitant mitral valve, depending on the Reynolds number and the regurgitant mitral valve orifice shape.
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
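A minimal sketch of what a reduced-order algebraic estimate can look like, assuming a generic viscous-plus-expansion pressure-loss form deltaP = a*Q + b*Q^2; the coefficients, pressures, and flows below are illustrative and not taken from the study:

    # Reduced-order FFR estimate: FFR ~= (P_aortic - deltaP(Q)) / P_aortic.
    def ffr_estimate(p_aortic_mmHg, flow_ml_s, a=0.05, b=0.02):
        delta_p = a * flow_ml_s + b * flow_ml_s ** 2   # mmHg, for the assumed coefficients
        return (p_aortic_mmHg - delta_p) / p_aortic_mmHg

    print(round(ffr_estimate(90.0, 4.0), 3))    # ~0.994 at resting flow
    print(round(ffr_estimate(90.0, 12.0), 3))   # ~0.961 at hyperemic flow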
NASA Astrophysics Data System (ADS)
Tripathi, Vijay S.; Yeh, G. T.
1993-06-01
Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation mode for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost predication, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937
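As a toy illustration of a preference-aware evaluation step (the criteria, weights, and numbers are hypothetical, not the paper's information model):

    # Score each feasible configuration by user-weighted criteria; cheaper solutions score higher.
    def score(solution, weights):
        return (weights["compute"] * solution["compute"]
                + weights["sla"] * solution["sla"]
                - weights["cost"] * solution["monthly_cost"] / 1000.0)

    solutions = [
        {"name": "A", "compute": 0.9, "sla": 0.99, "monthly_cost": 800.0},
        {"name": "B", "compute": 0.7, "sla": 0.95, "monthly_cost": 350.0},
    ]
    weights = {"compute": 0.5, "sla": 0.2, "cost": 0.3}
    print(max(solutions, key=lambda s: score(s, weights))["name"])   # the recommended option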
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, N.A. Jr.
1994-04-01
Over the past decade, computer-aided design (CAD) has become a practical and economical design tool. Today, specifying CAD hardware and software is relatively easy once you know what the design requirements are. But finding experienced CAD professionals is often more difficult. Most CAD users have only two or three years of design experience; more experienced design personnel are frequently not CAD literate. However, effective use of CAD can be the key to lowering design costs and improving design quality--a quest familiar to every manager and designer. By emphasizing computer-aided design literacy at all levels of the firm, a Canadian joint-venture company that specializes in engineering small hydroelectric projects has cut costs, become more productive and improved design quality. This article describes how they did it.
Many-body calculations with deuteron based single-particle bases and their associated natural orbits
NASA Astrophysics Data System (ADS)
Puddu, G.
2018-06-01
We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for ^10B, ^16O and ^24Mg employing the bare NNLOopt nucleon-nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on an harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.
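For orientation, natural orbits are obtained by diagonalizing the one-body density matrix of a preliminary many-body calculation; the schematic sketch below uses a random symmetric matrix as a stand-in for that density matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    m = rng.normal(size=(10, 10))
    rho = m @ m.T                                   # stand-in one-body density matrix (symmetric)
    occupations, natural_orbits = np.linalg.eigh(rho)
    order = np.argsort(occupations)[::-1]           # sort by decreasing occupation number
    print(occupations[order][:3])                   # the most-occupied orbits define the truncated basis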
A Primer for Telemetry Interfacing in Accordance with NASA Standards Using Low Cost FPGAs
NASA Astrophysics Data System (ADS)
McCoy, Jake; Schultz, Ted; Tutt, James; Rogers, Thomas; Miles, Drew; McEntaffer, Randall
2016-03-01
Photon counting detector systems on sounding rocket payloads often require interfacing asynchronous outputs with a synchronously clocked telemetry (TM) stream. Though this can be handled with an on-board computer, there are several low cost alternatives including custom hardware, microcontrollers and field-programmable gate arrays (FPGAs). This paper outlines how a TM interface (TMIF) for detectors on a sounding rocket with asynchronous parallel digital output can be implemented using low cost FPGAs and minimal custom hardware. Low power consumption and high speed FPGAs are available as commercial off-the-shelf (COTS) products and can be used to develop the main component of the TMIF. Then, only a small amount of additional hardware is required for signal buffering and level translating. This paper also discusses how this system can be tested with a simulated TM chain in the small laboratory setting using FPGAs and COTS specialized data acquisition products.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before they find the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base; it then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
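A minimal sketch of the ranking idea (not the FTDOTS source): investigate hypothesized failures in decreasing order of failure probability per unit isolation/repair cost, so likely and cheap-to-check candidates come first; the hypotheses and numbers are made up:

    hypotheses = [
        {"name": "power supply",  "probability": 0.05, "cost_minutes": 10.0},
        {"name": "sensor board",  "probability": 0.30, "cost_minutes": 45.0},
        {"name": "cable harness", "probability": 0.15, "cost_minutes": 5.0},
    ]
    sequence = sorted(hypotheses, key=lambda h: h["probability"] / h["cost_minutes"], reverse=True)
    print([h["name"] for h in sequence])   # cable harness, then sensor board, then power supply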
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
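For context, a single-trajectory Gillespie SSA for a one-reaction toy model (an isomerization A -> B); the paper's contribution is running many such realizations in parallel on a GPU:

    import math
    import random

    def ssa_isomerization(a0, k, t_end, seed=1):
        random.seed(seed)
        t, a = 0.0, a0
        while t < t_end and a > 0:
            propensity = k * a
            t += -math.log(1.0 - random.random()) / propensity   # exponential waiting time
            a -= 1                                                # fire the single reaction channel
        return a

    print(ssa_isomerization(a0=1000, k=0.1, t_end=10.0))   # on average ~1000*e**-1, i.e. ~368 A molecules remain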
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications are strongly needed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database offers opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
NASA Technical Reports Server (NTRS)
Ali, Syed Firasat; Khan, M. Javed; Rossi, Marcia J.; Heath, Bruce E.; Crane, Peter; Ward, Marcus; Crier, Tomyka; Knighten, Tremaine; Culpepper, Christi
2007-01-01
One result of the relatively recent advances in computing technology has been the decreasing cost of computers and increasing computational power. This has allowed high-fidelity airplane simulations to be run on personal computers (PCs). Thus, simulators are now used routinely by pilots to substitute simulated flight hours for real flight hours when training for an aircraft type rating, thereby reducing the cost of flight training. However, FAA regulations require that such substitution training be supervised by Certified Flight Instructors (CFIs). If the CFI presence could be reduced or eliminated for certain tasks, this would mean a further cost savings to the pilot. This would require that the flight simulator have a certain level of 'intelligence' in order to provide feedback on pilot performance similar to that of a CFI. The 'intelligent' flight simulator would have at least the capability to use data gathered from the flight to create a measure of the performance of the student pilot. Also, to fully utilize the advances in computational power, the simulator would be capable of interacting with the student pilot using the best possible training interventions. This thesis reports on two studies conducted at Tuskegee University investigating the effects of interventions on the learning of two flight maneuvers on a flight simulator, and the robustness and accuracy of calculated performance indices as compared to CFI evaluations of performance. The intent of these studies is to take a step in the direction of creating an 'intelligent' flight simulator. The first study compares the performance of novice pilots trained at different levels of above-real-time simulation in executing a level S-turn. The second study examined the effect of out-of-the-window (OTW) visual cues, in the form of hoops, on the performance of novice pilots learning to fly a landing approach on the flight simulator. The reliability/robustness of the computed performance metrics was assessed by comparing them with evaluations of the landing approach maneuver by a number of CFIs.
Guidelines for composite materials research related to general aviation aircraft
NASA Technical Reports Server (NTRS)
Dow, N. F.; Humphreys, E. A.; Rosen, B. W.
1983-01-01
Guidelines for research on composite materials directed toward the improvement of all aspects of their applicability for general aviation aircraft were developed from extensive studies of their performance, manufacturability, and cost effectiveness. Specific areas for research and for manufacturing development were identified and evaluated. Inputs developed from visits to manufacturers were used in part to guide these evaluations, particularly in the area of cost effectiveness. Throughout the emphasis was to direct the research toward the requirements of general aviation aircraft, for which relatively low load intensities are encountered, economy of production is a prime requirement, and yet performance still commands a premium. A number of implications regarding further directions for developments in composites to meet these requirements also emerged from the studies. Chief among these is the need for an integrated (computer program) aerodynamic/structures approach to aircraft design.
Cost-Performance Parametrics for Transporting Small Packages to the Mars Vicinity
NASA Technical Reports Server (NTRS)
McCleskey, C.; Lepsch, Roger A.; Martin, J.; Popescu, M.
2015-01-01
This paper explores the costs and performance required to deliver a small payload package (CubeSat-sized, for instance) to various transportation nodes en route to Mars and near-Mars destinations (such as the Mars moons, Phobos and Deimos). What is needed is a contemporary assessment and summary compilation of transportation metrics that factor in both the performance and the affordability of modern and emerging delivery capabilities. The paper brings together: (a) the mass transport gear ratios required to deliver payload from Earth's surface to the Mars vicinity, (b) the cyclical energy required for delivery, and (c) the affordability and availability of various means of transporting material across the Earth-Moon vicinity and near-Mars vicinity nodes relevant to Mars transportation. Examples of unit deliveries are computed and tabulated, using a CubeSat as the unit, for periodic near-Mars delivery campaign scenarios.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps
Bowen, Chris; Ye, Gu; Alterovitz, Ron
2015-01-01
In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642
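The cost metric named in this abstract is a Mahalanobis distance between planned-trajectory features and the distribution of demonstrations; a minimal sketch of that distance computation, with made-up feature vectors and without the time dependence or the sampling-based planner itself, is:

```python
import numpy as np

def mahalanobis_cost(features, demo_features):
    """Distance of a candidate feature vector from the demonstration distribution."""
    mu = demo_features.mean(axis=0)
    cov = np.cov(demo_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])         # regularize for invertibility
    d = features - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Hypothetical: 20 demonstrations described by 4 task-relevant features each
demos = np.random.default_rng(0).normal(size=(20, 4))
print(mahalanobis_cost(np.zeros(4), demos))
```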
Code of Federal Regulations, 2011 CFR
2011-01-01
... of the employee doing the work. (2) For computer searches for records, the direct costs of computer... $15.00. Fee Amounts Table Type of fee Amount of fee Manual Search and Review Pro rated Salary Costs. Computer Search Direct Costs. Photocopy $0.15 a page. Other Reproduction Costs Direct Costs. Elective...
NASA Astrophysics Data System (ADS)
Stewart, Iris T.; Loague, Keith
2003-12-01
Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil texture based travel time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparison to process-based simulations shows that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by a given application.
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in the number of geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation, and communication. The key issues to be dealt with are time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods like end-to-end multiphase, G-MR, etc., using techniques like MapReduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE, and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service, computation, and reduction of communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.
A model to forecast data centre infrastructure costs.
NASA Astrophysics Data System (ADS)
Vernet, R.
2015-12-01
The computing needs in the HEP community are increasing steadily, but the current funding situation in many countries is tight. As a consequence experiments, data centres, and funding agencies have to rationalize resource usage and expenditures. CC-IN2P3 (Lyon, France) provides computing resources to many experiments including LHC, and is a major partner for astroparticle projects like LSST, CTA or Euclid. The financial cost to accommodate all these experiments is substantial and has to be planned well in advance for funding and strategic reasons. In that perspective, leveraging infrastructure expenses, electric power cost and hardware performance observed in our site over the last years, we have built a model that integrates these data and provides estimates of the investments that would be required to cater to the experiments for the mid-term future. We present how our model is built and the expenditure forecast it produces, taking into account the experiment roadmaps. We also examine the resource growth predicted by our model over the next years assuming a flat-budget scenario.
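As a rough illustration of the kind of forecast described, the sketch below projects installed capacity under a flat budget, assuming a fixed yearly hardware price/performance gain and a fixed retirement age; all numbers are hypothetical and none of the CC-IN2P3 cost data enter into it.

```python
def flat_budget_capacity(initial_capacity, yearly_budget_keur, unit_cost_keur,
                         price_perf_gain=0.15, lifetime_years=5, horizon=6):
    """Installed compute capacity under a flat budget (toy forecast).

    Each year the full budget buys hardware at a unit cost that falls by
    `price_perf_gain`; hardware bought `lifetime_years` ago is retired.
    All numbers used below are hypothetical, not CC-IN2P3 figures.
    """
    capacity = initial_capacity
    purchases = []                         # capacity added each year
    history = [capacity]
    for year in range(horizon):
        cost = unit_cost_keur * (1 - price_perf_gain) ** year
        bought = yearly_budget_keur / cost
        purchases.append(bought)
        capacity += bought
        if year >= lifetime_years:         # retire the purchase made lifetime_years ago
            capacity -= purchases[year - lifetime_years]
        history.append(capacity)
    return history

print(flat_budget_capacity(initial_capacity=100, yearly_budget_keur=500, unit_cost_keur=10))
```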
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing is a significant computing capability that is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect outcomes cannot be achieved by measurement, forcing repetitive measurements. Hence, it is important to determine the optimum measuring method, which requires fewer repetitions and a lower error rate. However, extending current measurement approaches, which mainly aim at quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations, which have considerable costs in the experimental realm. Therefore, in this study, we have proposed an optimum subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method and the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results showed that the subsystem method could effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process was significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.
Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer
2015-11-01
Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify in our paper experimentally); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL
NASA Technical Reports Server (NTRS)
Roush, G. B.
1994-01-01
The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse-sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
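COSTMODL itself is distributed as a compiled Turbo Pascal application; for orientation, the Basic COCOMO model it incorporates reduces to two power laws, sketched below with the commonly published Boehm coefficients (the 32 KLOC example input is made up, and this is not COSTMODL's own code).

```python
# Basic COCOMO (Boehm): effort and schedule as power laws of size in KLOC.
COEFF = {                      # (a, b, c, d) per project class
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFF[mode]
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule

effort, months = basic_cocomo(32, "semi-detached")
print(f"{effort:.1f} person-months over {months:.1f} months")
```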
Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?
Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J
2017-01-01
The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
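The cost-estimation step described above is, at its core, simple arithmetic over benchmark runtimes and instance pricing; a hedged sketch, with entirely hypothetical prices and workload figures, might look like:

```python
import math

def estimate_aws_cost(n_jobs, minutes_per_job, n_instances, slots_per_instance,
                      hourly_rate_usd, output_gb, s3_usd_per_gb_month=0.023):
    """Rough cost estimate for an embarrassingly parallel imaging workload.

    Assumes each instance runs `slots_per_instance` jobs concurrently and that
    instances are billed by the whole hour (all prices here are hypothetical).
    """
    batches = math.ceil(n_jobs / (n_instances * slots_per_instance))
    billed_hours = n_instances * batches * math.ceil(minutes_per_job / 60)
    compute = billed_hours * hourly_rate_usd
    storage = output_gb * s3_usd_per_gb_month      # one month of output storage
    return compute + storage

# 800 subjects, 90 min each, 10 instances x 8 slots, $0.68/hr, 500 GB of output
print(f"estimated cost: ${estimate_aws_cost(800, 90, 10, 8, 0.68, 500):,.2f}")
```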
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great efforts into migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTools' ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, amount of user interaction required, limitations, and portability. Based on these results, a discussion on the feasibility of computer-aided parallelization of aerospace applications is presented along with suggestions for future work.
Schutzer, Matthew E; Arthur, Douglas W; Anscher, Mitchell S
2016-05-01
Value in health care is defined as outcomes achieved per dollar spent, and understanding cost is critical to delivering high-value care. Traditional costing methods reflect charges rather than fundamental costs to provide a service. The more rigorous method of time-driven activity-based costing was used to compare cost between whole-breast radiotherapy (WBRT) and accelerated partial-breast irradiation (APBI) using balloon-based brachytherapy. For WBRT (25 fractions with five-fraction boost) and APBI (10 fractions twice daily), process maps were created outlining each activity from consultation to post-treatment follow up. Through staff interviews, time estimates were obtained for each activity. The capacity cost rates (CCR), defined as cost per minute, were calculated for personnel, equipment, and physical space. Total cost was calculated by multiplying the time required of each resource by its CCR. This was then summed and combined with cost of consumable materials. The total cost for WBRT was $5,333 and comprised 56% personnel costs and 44% space/equipment costs. For APBI, the total cost was $6,941 (30% higher than WBRT) and comprised 51% personnel costs, 6% space/equipment costs, and 43% consumable materials costs. The attending physician had the highest CCR of all personnel ($4.28/min), and APBI required 24% more attending time than WBRT. The most expensive activity for APBI was balloon placement and for WBRT was computed tomography simulation. APBI cost more than WBRT when using the dose/fractionation schemes analyzed. Future research should use time-driven activity-based costing to better understand cost with the aim of reducing expenditure and defining bundled payments. Copyright © 2016 by American Society of Clinical Oncology.
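The time-driven activity-based costing arithmetic behind these figures is a sum of (minutes consumed × capacity cost rate) over every resource in the process map, plus consumables; the sketch below illustrates it with a deliberately abbreviated, hypothetical process map (only the $4.28/min attending rate is taken from the abstract).

```python
def tdabc_cost(activities, consumables_usd):
    """Time-driven activity-based costing: sum of (minutes x capacity cost rate).

    `activities` maps an activity name to a list of (minutes, ccr_usd_per_min)
    pairs, one pair per resource (personnel, equipment, space) that it uses.
    """
    return sum(minutes * ccr
               for uses in activities.values()
               for minutes, ccr in uses) + consumables_usd

# Hypothetical, abbreviated process map (not the paper's actual figures)
course = {
    "CT simulation":    [(45, 1.10), (45, 0.80)],   # therapist time, scanner time
    "physician review": [(30, 4.28)],               # attending CCR quoted in the abstract
    "treatment x 10":   [(10 * 20, 0.95)],          # 10 fractions, 20 min on the unit
}
print(f"${tdabc_cost(course, consumables_usd=2500):.2f}")
```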
de Vlas, Sake J.; Fischer, Peter U.; Weil, Gary J.; Goldman, Ann S.
2013-01-01
The Global Program to Eliminate Lymphatic Filariasis (LF) has a target date of 2020. This program is progressing well in many countries. However, progress has been slow in some countries, and others have not yet started their mass drug administration (MDA) programs. Acceleration is needed. We studied how increasing MDA frequency from once to twice per year would affect program duration and costs by using computer simulation modeling and cost projections. We used the LYMFASIM simulation model to estimate how many annual or semiannual MDA rounds would be required to eliminate LF for Indian and West African scenarios with varied pre-control endemicity and coverage levels. Results were used to estimate total program costs assuming a target population of 100,000 eligibles, a 3% discount rate, and not counting the costs of donated drugs. A sensitivity analysis was done to investigate the robustness of these results with varied assumptions for key parameters. Model predictions suggested that semiannual MDA will require the same number of MDA rounds to achieve LF elimination as annual MDA in most scenarios. Thus semiannual MDA programs should achieve this goal in half of the time required for annual programs. Due to efficiency gains, total program costs for semiannual MDA programs are projected to be lower than those for annual MDA programs in most scenarios. A sensitivity analysis showed that this conclusion is robust. Semiannual MDA is likely to shorten the time and lower the cost required for LF elimination in countries where it can be implemented. This strategy may improve prospects for global elimination of LF by the target year 2020. PMID:23301115
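The cost projections described above rest on discounting program costs at 3% per year; the toy sketch below illustrates how semiannual delivery can lower discounted cost when yearly fixed costs are shared across rounds, but every figure in it is hypothetical rather than taken from the study.

```python
def discounted_cost(fixed_per_year, cost_per_round, rounds_per_year, n_rounds, rate=0.03):
    """Present value of MDA program costs at a 3% discount rate (toy model).

    Fixed yearly costs (training, supervision) are assumed to be shared by all
    rounds delivered in a year, which is where semiannual delivery gains efficiency.
    """
    years = -(-n_rounds // rounds_per_year)        # ceiling division
    fixed = sum(fixed_per_year / (1 + rate) ** y for y in range(years))
    rounds = sum(cost_per_round / (1 + rate) ** (r // rounds_per_year)
                 for r in range(n_rounds))
    return fixed + rounds

# Hypothetical figures for 100,000 eligibles; 6 rounds needed under either schedule
annual = discounted_cost(30_000, 25_000, rounds_per_year=1, n_rounds=6)
semiannual = discounted_cost(30_000, 25_000, rounds_per_year=2, n_rounds=6)
print(f"annual: ${annual:,.0f}  semiannual: ${semiannual:,.0f}")
```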
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
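As a rough illustration of the prediction-correction idea, not of the paper's actual algorithms, the toy tracker below follows the minimizer of a time-varying quadratic over a box: the prediction step extrapolates the drift of the cost in time, and the correction step runs a few projected gradient iterations. The real methods use Hessian information and first-order time-derivative estimates instead of the shortcut taken here.

```python
import numpy as np

def track_time_varying(r_of_t, lo, hi, t_grid, alpha=0.5, n_corr=3):
    """Projected prediction-correction tracking of x*(t) for f(x; t) = 0.5*||x - r(t)||^2
    over a box constraint (illustrative toy problem only)."""
    proj = lambda x: np.clip(x, lo, hi)
    x = proj(r_of_t(t_grid[0]))
    out = [x]
    for t_prev, t_next in zip(t_grid[:-1], t_grid[1:]):
        # Prediction: account for how the cost has drifted between samples
        drift = r_of_t(t_next) - r_of_t(t_prev)
        x = proj(x + drift)
        # Correction: a few projected gradient steps on f(.; t_next)
        for _ in range(n_corr):
            x = proj(x - alpha * (x - r_of_t(t_next)))
        out.append(x)
    return np.array(out)

traj = track_time_varying(lambda t: np.array([np.sin(t), np.cos(t)]),
                          lo=-0.8, hi=0.8, t_grid=np.linspace(0, 6, 61))
print(traj[-1])   # tracked solution at the final sampling time
```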
Ad Hoc modeling, expert problem solving, and R&T program evaluation
NASA Technical Reports Server (NTRS)
Silverman, B. G.; Liebowitz, J.; Moustakis, V. S.
1983-01-01
A simplified cost and time (SCAT) analysis program utilizing personal-computer technology is presented and demonstrated in the case of the NASA-Goddard end-to-end data system. The difficulties encountered in implementing complex program-selection and evaluation models in the research and technology field are outlined. The prototype SCAT system described here is designed to allow user-friendly ad hoc modeling in real time and at low cost. A worksheet constructed on the computer screen displays the critical parameters and shows how each is affected when one is altered experimentally. In the NASA case, satellite data-output and control requirements, ground-facility data-handling capabilities, and project priorities are intricately interrelated. Scenario studies of the effects of spacecraft phaseout or new spacecraft on throughput and delay parameters are shown. The use of a network of personal computers for higher-level coordination of decision-making processes is suggested, as a complement or alternative to complex large-scale modeling.
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
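The program described above is written in C and uses its own iterative quadratic-programming algorithm; a loose sketch of the same allocation problem, posed as bounded linear least squares and solved with SciPy rather than the NASA code, with a made-up actuator effectiveness matrix, is:

```python
import numpy as np
from scipy.optimize import lsq_linear

def allocate(B, d, u_min, u_max, effort_weight=1e-3):
    """Allocate a commanded effort d across redundant actuators (illustrative).

    Minimizes ||B u - d||^2 + effort_weight * ||u||^2 subject to actuator limits,
    posed as a single bounded linear least-squares problem.
    """
    n = B.shape[1]
    A = np.vstack([B, np.sqrt(effort_weight) * np.eye(n)])  # residual + effort terms
    b = np.concatenate([d, np.zeros(n)])
    res = lsq_linear(A, b, bounds=(u_min, u_max))
    return res.x

# Hypothetical: 3 commanded torques produced by 5 redundant thrusters
B = np.array([[1.0, 0.0, -1.0, 0.5,  0.0],
              [0.0, 1.0,  0.5, 0.0, -1.0],
              [0.3, 0.3,  0.3, 0.3,  0.3]])
u = allocate(B, d=np.array([0.4, -0.2, 0.1]), u_min=-1.0, u_max=1.0)
print(u, B @ u)   # actuator commands and the effort they actually produce
```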
Transfer of Training from Simulators to Operational Equipment--Are Simulators Effective?
ERIC Educational Resources Information Center
Thomson, Douglas R.
1989-01-01
Examines the degree of fidelity required of a computer simulation to ensure maximum transfer of training. Simulators used in the military services for training pilots are described; relationships between fidelity, transfer, and cost are explored; and feedback to the student and measures of training effectiveness are discussed. (nine references)…
Graphical Internet Access on a Budget: Making a Pseudo-SLIP Connection.
ERIC Educational Resources Information Center
McCulley, P. Michael
1995-01-01
Examines The Internet Adapter (TIA), an Internet protocol that allows computers to be directly on the Internet and access graphics over standard telephone lines using high-speed modems. Compares TIA's system requirements, performance, and costs to other Internet connections. Sidebars describe connections other than TIA and how to find information…
SOC-DS computer code provides tool for design evaluation of homogeneous two-material nuclear shield
NASA Technical Reports Server (NTRS)
Disney, R. K.; Ricks, L. O.
1967-01-01
SOC-DS Code /Shield Optimization Code-Direct Search/ selects a nuclear shield material of optimum volume, weight, or cost to meet the requirements of a given radiation dose rate or energy transmission constraint. It is applicable to evaluating neutron and gamma ray shields for all nuclear reactors.
Low-Cost Terminal Alternative for Learning Center Managers. Final Report.
ERIC Educational Resources Information Center
Nix, C. Jerome; And Others
This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…
49 CFR 19.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... authorized representatives, have the right of timely and unrestricted access to any books, documents, papers... allocation plans, and any similar accounting computations of the rate at which a particular group of costs is... supporting records starts at the end of the fiscal year (or other accounting period) covered by the proposal...
34 CFR 74.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the right of timely and unrestricted access to any books, documents, papers, or other records of... allocation plans; and any similar accounting computations of the rate at which a particular group of costs is... starts at the end of the fiscal year (or other accounting period) covered by the proposal, plan, or other...
41 CFR 105-71.142 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... similar accounting computations of the rates at which a particular group of costs is chargeable (such as... and its supporting records starts from end of the fiscal year (or other accounting period) covered by..., documents, papers, or other records of grantees and subgrantees which are pertinent to the grant, in order...
2 CFR 215.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-01-01
... representatives, have the right of timely and unrestricted access to any books, documents, papers, or other... any similar accounting computations of the rate at which a particular group of costs is chargeable... supporting records starts at the end of the fiscal year (or other accounting period) covered by the proposal...
7 CFR 3019.53 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-01-01
... representatives, have the right of timely and unrestricted access to any books, documents, papers, or other... any similar accounting computations of the rate at which a particular group of costs is chargeable... supporting records starts at the end of the fiscal year (or other accounting period) covered by the proposal...
14 CFR 1274.601 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-01-01
... access to any books, documents, papers, or other records of Recipients that are pertinent to the awards... or proposals, cost allocation plans, and any similar accounting computations of the rate at which a... supporting records starts at the end of the fiscal year (or other accounting period) covered by the proposal...
Financial feasibility of marker-aided selection in Douglas-fir.
G.R. Johnson; N.C. Wheeler; S.H. Strauss
2000-01-01
The land area required for a marker-aided selection (MAS) program to break even (i.e., have equal costs and benefits) was estimated using computer simulation for coastal Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) in the Pacific Northwestern United States. We compared the selection efficiency obtained when using an index that included the...
Section 405 of the Clean Water Act requires the U.S. EPA to develop and issue regulations that identify: 1) uses for sludge including disposal; 2) specific factors (including costs) to be taken into account in determining the measures and practices applicable for each use or disp...
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, an optimal averaging scheme has been derived to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, another approach computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
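For scalar weights, the standard result is that the optimal average quaternion is the eigenvector associated with the largest eigenvalue of the weighted sum of quaternion outer products; the sketch below implements that formulation (the measurements are invented, and the matrix-weighted case treated in the Note is not covered).

```python
import numpy as np

def average_quaternion(quats, weights):
    """Scalar-weighted average quaternion via the eigenvector formulation (sketch).

    The average is the unit eigenvector of the largest eigenvalue of the weighted
    sum of outer products, which handles the q / -q sign ambiguity automatically;
    the result is itself defined only up to an overall sign.
    """
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        q /= np.linalg.norm(q)
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

# Three noisy measurements of roughly the same attitude (hypothetical values);
# note the second one is sign-flipped, which the method absorbs.
qs = [[0.99, 0.01, 0.10, 0.02], [-0.98, -0.02, -0.11, -0.03], [0.99, 0.00, 0.09, 0.01]]
print(average_quaternion(qs, weights=[1.0, 1.0, 2.0]))
```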
NASA Technical Reports Server (NTRS)
VanDalsem, William R.; Livingston, Mary E.; Melton, John E.; Torres, Francisco J.; Stremel, Paul M.
1999-01-01
Continuous improvement of aerospace product development processes is a driving requirement across much of the aerospace community. As up to 90% of the cost of an aerospace product is committed during the first 10% of the development cycle, there is a strong emphasis on capturing, creating, and communicating better information (both requirements and performance) early in the product development process. The community has responded by pursuing the development of computer-based systems designed to enhance the decision-making capabilities of product development individuals and teams. Recently, the historical foci on sharing the geometrical representation and on configuration management are being augmented: Physics-based analysis tools for filling the design space database; Distributed computational resources to reduce response time and cost; Web-based technologies to relieve machine-dependence; and Artificial intelligence technologies to accelerate processes and reduce process variability. Activities such as the Advanced Design Technologies Testbed (ADTT) project at NASA Ames Research Center study the strengths and weaknesses of the technologies supporting each of these trends, as well as the overall impact of the combination of these trends on a product development event. Lessons learned and recommendations for future activities will be reported.
NASA Astrophysics Data System (ADS)
Suresh, K.; Balaji, S.; Saravanan, K.; Navas, J.; David, C.; Panigrahi, B. K.
2018-02-01
We developed a simple, low-cost, user-friendly automated indirect ion beam fluence measurement system for ion irradiation and analysis experiments requiring indirect beam fluence measurements unperturbed by sample conditions such as low temperature, high temperature, or sample biasing, as well as for regular ion implantation experiments in ion implanters and electrostatic accelerators with continuous beams. The system, which uses simple, low-cost, off-the-shelf components/systems and two distinct layers of in-house built software, not only eliminates the need for costly data acquisition systems but also overcomes difficulties in using proprietary software. The hardware of the system is centered around a personal computer, a PIC16F887-based embedded system, a Faraday cup drive and monitor circuit, a pair of Faraday cups, and a beam current integrator; the in-house developed software includes C-based microcontroller firmware and LABVIEW-based virtual instrument automation software. The automatic fluence measurement involves two phases: a current sampling phase lasting 20-30 seconds, during which the ion beam current is continuously measured by intercepting the ion beam and the averaged beam current value is computed, and a subsequent charge computation phase lasting 700-900 seconds, during which the ion beam irradiates the samples and the incremental fluence received by the sample is estimated using the latest averaged beam current value from the sampling phase. The cycle of current sampling and charge computation is repeated until the required fluence is reached. Besides simplicity and cost-effectiveness, other important advantages of the developed system include easy reconfiguration of the system to suit customization of experiments, scalability, easy debugging and maintenance of the hardware/software, and the ability to work as a standalone system. The system was tested with different sets of samples and ion fluences, and the results were verified using the Rutherford backscattering technique, which showed satisfactory functioning of the system. The fluence measurement uncertainty is found to be less than 2%, which meets the demands of the irradiation experiments undertaken using the developed setup. The system has been incorporated for regular use at the existing ultra-high-vacuum (UHV) ion irradiation chamber of the 1.7 MV Tandem accelerator, and several ion implantation experiments on a variety of samples such as SS304, D9, and ODS alloys have been successfully carried out.
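The arithmetic behind one sampling/charge-computation cycle reduces to converting the averaged current into ions delivered per unit area; a minimal sketch, with hypothetical beam parameters and none of the authors' hardware control or calibration, is:

```python
E_CHARGE = 1.602e-19   # elementary charge in coulombs

def incremental_fluence(i_avg_amps, dt_seconds, charge_state, beam_area_cm2):
    """Ions per cm^2 accumulated during one charge-computation phase."""
    ions = i_avg_amps * dt_seconds / (charge_state * E_CHARGE)
    return ions / beam_area_cm2

# Hypothetical: 100 nA of singly charged ions over an 800 s phase on a 1 cm^2 spot
phase = incremental_fluence(100e-9, 800, charge_state=1, beam_area_cm2=1.0)
print(f"{phase:.3e} ions/cm^2")   # accumulate phases until the target fluence is reached
```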
Parallel Computation of Unsteady Flows on a Network of Workstations
NASA Technical Reports Server (NTRS)
1997-01-01
Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution to the problem where large problems can be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstation, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is the question of the efficiency of any given numerical algorithm to such a computing system. NPARC code was chosen as a sample for the application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied. Again both steady and unsteady problems were investigated. The issues studied as a part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.
NASA Astrophysics Data System (ADS)
Yoon, S.
2016-12-01
To define a geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data need to be reprocessed regularly. Reprocessing GPS data collected by up to 2,000 CORS sites over the last two decades requires a large amount of computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and the second reprocessing is currently under way. For the first reprocessing effort, in-house computing resources were utilized. In the current second reprocessing effort, an outsourced cloud computing platform is being utilized. In this presentation, the data processing strategy at NGS is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by utilizing the cloud computing approach will also be discussed.
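The abstract does not spell out the partitioning scheme, but reprocessing of this kind is embarrassingly parallel once the data are split into independent daily (or daily per-sub-network) sessions; a generic sketch of that structure, with hypothetical task counts and a placeholder standing in for the actual GPS processing engine, is:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def process_session(task):
    """Placeholder for one independent daily sub-network solution."""
    subnet, gps_day = task
    # ... invoke the actual GPS processing engine for (subnet, gps_day) here ...
    return subnet, gps_day, "done"

if __name__ == "__main__":
    # Hypothetical partition: 10 sub-networks x 100 days, all mutually independent,
    # so they can be farmed out to as many cloud workers as the budget allows.
    tasks = product(range(10), range(100))
    with ProcessPoolExecutor(max_workers=8) as pool:
        for subnet, day, status in pool.map(process_session, tasks, chunksize=50):
            pass   # collect per-session solutions here for later combination
```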
Sittig, D F; Franklin, M; Turetsky, M; Sussman, A J; Bates, D W; Komaroff, A L; Teich, J M
1998-01-01
The process of creating a clinical referral for a patient and the transfer of information from the primary care physician to the specialist and back again is a key component in the struggle to deliver less costly and more effective clinical care. We have created a computer-based clinical referral application which facilitates 1) identifying an appropriate specialist; 2) collecting the clinical, demographic, and financial data required to generate a referral; and 3) transferring the information between the specialist and the primary care physician. Preliminary results indicate that the new computer-based process is faster.
Multiscale Simulations of Reactive Transport
NASA Astrophysics Data System (ADS)
Tartakovsky, D. M.; Bakarji, J.
2014-12-01
Discrete, particle-based simulations offer distinct advantages when modeling solute transport and chemical reactions. For example, Brownian motion is often used to model diffusion in complex pore networks, and Gillespie-type algorithms allow one to handle multicomponent chemical reactions with uncertain reaction pathways. Yet such models can be computationally more intensive than their continuum-scale counterparts, e.g., advection-dispersion-reaction equations. Combining the discrete and continuum models has a potential to resolve the quantity of interest with a required degree of physicochemical granularity at acceptable computational cost. We present computational examples of such "hybrid models" and discuss the challenges associated with coupling these two levels of description.
A Contextual Information Acquisition Approach Based on Semantics and Mashup Technology
NASA Astrophysics Data System (ADS)
He, Yangfan; Li, Lu; He, Keqing; Chen, Xiuhong
Pay-per-use is an essential feature of cloud computing. Users can make use of parts of a large-scale service to satisfy their requirements at the cost of only a small payment. A good understanding of the users' requirements is a prerequisite for choosing the needed service precisely. Context implies users' potential requirements, which can complement the requirements delivered explicitly. However, traditional context-aware computing research usually demands specific kinds of sensors to acquire contextual information, which sets too high a threshold for an application to become context-aware. This paper proposes an approach which combines contextual information obtained directly and indirectly from cloud services. Semantic relationships between different kinds of contexts lay the foundation for searching the cloud services, and mashup technology is adopted to compose the heterogeneous services. Abundant contextual information may lend strong support to a comprehensive understanding of users' context and a better abstraction of contextual requirements.
Aerothermodynamic testing requirements for future space transportation systems
NASA Technical Reports Server (NTRS)
Paulson, John W., Jr.; Miller, Charles G., III
1995-01-01
Aerothermodynamics, encompassing aerodynamics, aeroheating, and fluid dynamic and physical processes, is the genesis for the design and development of advanced space transportation vehicles. It provides crucial information to other disciplines involved in the development process such as structures, materials, propulsion, and avionics. Sources of aerothermodynamic information include ground-based facilities, computational fluid dynamic (CFD) and engineering computer codes, and flight experiments. Utilization of this triad is required to provide the optimum requirements while reducing undue design conservatism, risk, and cost. This paper discusses the role of ground-based facilities in the design of future space transportation system concepts. Testing methodology is addressed, including the iterative approach often required for the assessment and optimization of configurations from an aerothermodynamic perspective. The influence of vehicle shape and the transition from parametric studies for optimization to benchmark studies for final design and establishment of the flight data book is discussed. Future aerothermodynamic testing requirements including the need for new facilities are also presented.
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added cost of dedicating resources.
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, the structural load-deflection equation characteristics, the storage allocation plan, and the computer hardware capabilities. It thereby provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
NASA Astrophysics Data System (ADS)
Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan
2012-09-01
The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and its falling cost are contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high-precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning rates to process terabyte-sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation, and support processing of data segments in parallel. Academic institutes can collaborate and provide an elastic computing model without the requirement for large centralized high performance computing data centers. This paper demonstrates how an order of magnitude improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N²) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
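The core kernel in these methods is the action of a matrix square root on a Gaussian vector, y = D^{1/2} z, whose covariance is then D. Below is a minimal Python sketch of the single-vector (non-block) Lanczos variant, assuming a dense symmetric positive definite diffusion matrix D; it is an illustration of the technique, not the authors' code. Note that only matrix-vector products with D are needed, and no extremal eigenvalue estimates.

```python
import numpy as np
from scipy.linalg import sqrtm

def lanczos_sqrt_mv(D, z, m=30):
    """Approximate D^{1/2} z with an m-step Lanczos (Krylov) iteration:
    f(D) z ~ ||z|| V_m f(T_m) e_1, with f applied to a small tridiagonal
    matrix T_m instead of the full D."""
    n = z.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    beta0 = np.linalg.norm(z)
    V[:, 0] = z / beta0
    for j in range(m):
        w = D @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:              # invariant subspace; stop early
                m = j + 1
                V, alpha, beta = V[:, :m], alpha[:m], beta[:m - 1]
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return beta0 * (V @ np.real(sqrtm(T))[:, 0])

# toy SPD stand-in for a diffusion tensor, plus a Gaussian noise vector
rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))
D = M @ M.T + 300.0 * np.eye(300)
z = rng.standard_normal(300)
y = lanczos_sqrt_mv(D, z)
print(np.linalg.norm(y - np.real(sqrtm(D)) @ z) / np.linalg.norm(z))
```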
Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites
NASA Astrophysics Data System (ADS)
2002-03-01
Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of Microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.
NASA Astrophysics Data System (ADS)
Hamilton, Joel; Whittlesey, Norman K.; Robison, M. Henry; Willis, David
2002-08-01
This analysis addresses three important conceptual problems in the measurement of direct and indirect costs and benefits: (1) the distribution of impacts between a regional economy and the encompassing state economy; (2) the distinction between indirect impacts and indirect costs (IC), focusing on the dynamic time path unemployed resources follow to find alternative employment; and (3) the distinction among the affected firms' microeconomic categories of fixed and variable costs as they are used to compute regional direct and indirect costs. It uses empirical procedures that reconcile the usual measures of economic impact provided by input/output models with the estimates of economic costs and benefits required for analysis of welfare changes. The paper illustrates the relationships and magnitudes involved in the context of water policy issues facing the Pecos River Basin of New Mexico.
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
NASA Astrophysics Data System (ADS)
Nigg, D. W.; Wheeler, F. J.
1981-01-01
A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigg, D.W.; Wheeler, F.J.
1981-01-01
A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent method (SGD) in this paper. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming–outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, where the mismatch of computed and measured outgoing data is minimized under the same initial data and the RTE constraint. The memory and computation cost this requires, however, is typically prohibitive, especially in high dimensional space. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves the purpose well. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm, and analyze its convergence performance.
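As a toy stand-in for the linearized setting (not the authors' RTE code), the following sketch applies SGD to the mismatch ||Ax − d||², touching one randomly selected measurement per step; the forward map A is a synthetic matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_inverse(A, d, steps=20000, lr=1e-3, batch=1):
    """Online learning for min_x ||Ax - d||^2: each step samples a random
    measurement row (or small batch) and takes a gradient step using only
    that partial information, keeping memory and per-step cost minimal."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(steps):
        i = rng.integers(0, m, size=batch)   # random measurement index
        r = A[i] @ x - d[i]                  # partial residual
        x -= lr * (A[i].T @ r) / batch       # stochastic gradient step
    return x

# synthetic forward map standing in for the linearized RTE operator
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
x_hat = sgd_inverse(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```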
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Resource Provisioning in SLA-Based Cluster Computing
NASA Astrophysics Data System (ADS)
Xiong, Kaiqi; Suh, Sang
Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and the fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
A parallel-processing approach to computing for the geographic sciences
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1978-01-01
The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools such as powerful computers optimized for fluid flow problems is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the results calculated. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.
Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.
Liao, Wen-Hwa; Qiu, Wan-Li
2016-01-01
Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
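A minimal sketch of the AHP weighting step used in such a study follows: priority weights come from the principal eigenvector of a pairwise-comparison matrix (Saaty's 1-9 scale). The matrix entries below are hypothetical, not the survey results.

```python
import numpy as np

def ahp_weights(P):
    """Priority weights from a pairwise-comparison matrix P: the principal
    eigenvector normalized to sum to 1, plus Saaty's consistency index
    CI = (lambda_max - n) / (n - 1) as a sanity check on the judgments."""
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    ci = (vals.real[k] - n) / (n - 1)
    return w, ci

# hypothetical comparisons of three factors: cost effectiveness,
# software design, system architecture
P = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w, ci = ahp_weights(P)
print(w, ci)   # cost effectiveness receives the largest weight
```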
NASA Astrophysics Data System (ADS)
Thompson, Kyle Bonner
An algorithm is described to efficiently compute aerothermodynamic design sensitivities using a decoupled variable set. In a conventional approach to computing design sensitivities for reacting flows, the species continuity equations are fully coupled to the conservation laws for momentum and energy. In this algorithm, the species continuity equations are solved separately from the mixture continuity, momentum, and total energy equations. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This decoupled approach for computing design sensitivities with the adjoint system is demonstrated for inviscid flow in chemical non-equilibrium around a re-entry vehicle with a retro-firing annular nozzle. The sensitivities of the surface temperature and mass flow rate through the nozzle plenum are computed with respect to plenum conditions and verified against sensitivities computed using a complex-variable finite-difference approach. The decoupled scheme significantly reduces the computational time and memory required to complete the optimization, making this an attractive method for high-fidelity design of hypersonic vehicles.
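The complex-variable finite-difference check used for verification above is easy to reproduce. Here is a sketch with a scalar stand-in function (the real sensitivities are field quantities, but the formula is identical): df/dx ≈ Im f(x + ih)/h, with no subtractive cancellation even for h = 10⁻³⁰.

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Complex-step derivative: accurate to machine precision because no
    difference of nearly equal numbers is ever formed."""
    return f(x + 1j * h).imag / h

# hypothetical smooth response standing in for, e.g., surface temperature
# as a function of a plenum condition p
f = lambda p: p ** 2 * np.exp(-p / 3.0)
p0 = 2.0
print(complex_step(f, p0))                                       # complex step
print(2 * p0 * np.exp(-p0 / 3) - p0 ** 2 / 3 * np.exp(-p0 / 3))  # analytic
```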
Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco
2014-01-01
Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing the computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce its error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied. PMID:25285917
Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco
2014-01-01
Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing the computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce its error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied.
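A minimal sketch of the quantile-based error correction used in both versions of this study, with empirical quantiles and synthetic gamma-distributed data; operational implementations add detrending and seasonal windows.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Map each future model value through its quantile in the historical
    model climatology onto the observed climatology (values beyond the
    historical range are clamped by np.interp)."""
    q = np.linspace(0.0, 1.0, 101)
    mq = np.quantile(model_hist, q)   # model climatology quantiles
    oq = np.quantile(obs_hist, q)     # observed climatology quantiles
    pos = np.interp(model_fut, mq, q)
    return np.interp(pos, q, oq)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 5000)            # "observed" precipitation
mod = rng.gamma(2.0, 2.0, 5000) * 1.3      # model with a wet bias
fut = rng.gamma(2.0, 2.4, 5000) * 1.3      # biased future scenario
print(fut.mean(), quantile_map(mod, obs, fut).mean())   # bias removed
```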
Aid to planning the marking of mining area boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giles, R.H. Jr.
Reducing trespass, legal costs, and timber and wildlife poaching and increasing control, safety, and security are key reasons why mine land boundaries need to be marked. Accidents may be reduced, especially when associated with blast area boundaries, and in some cases increased income may be gained from hunting and recreational fees on well-marked areas. A BASIC computer program for an IBM-PC has been developed that requires minimum inputs to estimate boundary marking costs. This paper describes the rationale for the program and shows representative outputs. 3 references, 3 tables.
NASA Technical Reports Server (NTRS)
Ebeling, Charles
1991-01-01
The primary objective is to develop a methodology for predicting operational and support parameters and costs of proposed space systems. The first phase consists of: (1) the identification of data sources; (2) the development of a methodology for determining system reliability and maintainability parameters; (3) the implementation of the methodology through the use of prototypes; and (4) support in the development of an integrated computer model. The phase 1 results are documented and a direction is identified to proceed to accomplish the overall objective.
NECAP: NASA's Energy-Cost Analysis Program. Part 1: User's manual
NASA Technical Reports Server (NTRS)
Henninger, R. H. (Editor)
1975-01-01
NECAP is a sophisticated building design and energy analysis tool which embodies the latest ASHRAE state-of-the-art techniques for thermal load calculation and energy usage prediction. It is a set of six individual computer programs: a response factor program, a data verification program, a thermal load analysis program, a variable temperature program, a system and equipment simulation program, and an owning and operating cost program. Each segment of NECAP is described, and instructions are set forth for preparing the required input data and for interpreting the resulting reports.
Vehicle Lightweighting: Challenges and Opportunities with Aluminum
NASA Astrophysics Data System (ADS)
Sachdev, Anil K.; Mishra, Raja K.; Mahato, Anirban; Alpas, Ahmet
Rising energy costs, consumer preferences and regulations drive requirements for fuel economy, performance, comfort, safety and cost of future automobiles. These conflicting situations offer challenges for vehicle lightweighting, for which aluminum applications are key. This paper describes product design needs and materials and process development opportunities driven by theoretical, experimental and modeling tools in the area of sheet and castings. Computational tools and novel experimental techniques used in their development are described. The paper concludes with challenges that lie ahead for pervasive use of aluminum and the necessary fundamental R&D that is still needed.
The Hidden Costs of Owning a Microcomputer.
ERIC Educational Resources Information Center
McDole, Thomas L.
Before purchasing computer hardware, individuals must consider the costs associated with the setup and operation of a microcomputer system. Included among the initial costs of purchasing a computer are the costs of the computer, one or more disk drives, a monitor, and a printer as well as the costs of such optional peripheral devices as a plotter…
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
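A schematic of contraction to the ring polymer centroid, assuming callable force routines for the cheap reference (e.g., tight binding) and the expensive potential (e.g., DFT); the multiple-time-step splitting and thermostatting are omitted, so this is an illustration of the idea rather than the published scheme.

```python
import numpy as np

def contracted_forces(beads, cheap_force, expensive_force):
    """Evaluate the cheap reference on every bead, the expensive potential
    only once at the bead centroid, and spread the centroid-level
    difference force back to all beads. The expensive-model cost then
    matches a classical simulation: one call per step."""
    centroid = beads.mean(axis=0)            # average over the P replicas
    delta = expensive_force(centroid) - cheap_force(centroid)
    return np.array([cheap_force(b) + delta for b in beads])

# toy 1D example with quadratic (cheap) and quartic-corrected (expensive)
cheap = lambda x: -x                         # force of V = x^2 / 2
expensive = lambda x: -x - 0.4 * x ** 3      # force of V = x^2/2 + 0.1 x^4
beads = np.linspace(-0.2, 0.2, 8)            # 8-bead ring polymer, 1 atom
print(contracted_forces(beads, cheap, expensive))
```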
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
Cost-of-living indexes and demographic change.
Diamond, C A
1990-06-01
The Consumer Price Index (CPI), although not without problems, is the most often used mechanism for adjusting contracts for cost-of-living changes in the US. The US Bureau of Labor Statistics lists several problems associated with using the CPI as a cost-of-living index where the proportion of 2-worker families is increasing, population is shifting, and work week hours are changing. This study shows how to compute cost-of-living indexes which are inexpensive to update, use less restrictive assumptions about consumer preferences, do not require statistical estimation, and handle the problem of increasing numbers of families where both the husband and wife work. This study examines how widely the CPI in fact varies from alternative true cost-of-living indexes, although in the end this de facto cost-of-living measure holds up quite well. In times of severe price inflation people change their preferences by substitution, necessitating a flexible cost-of-living index that accounts for this fundamental economic behavior.
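For concreteness, here is the fixed-basket construction whose limitations the study addresses: a Laspeyres index like the CPI, sketched with two hypothetical goods.

```python
def laspeyres(prices_t, prices_0, qty_0):
    """Current cost of the base-period basket over its base-period cost.
    Because the basket is frozen, substitution away from goods whose
    prices rise is ignored, overstating a true cost-of-living index."""
    cost_t = sum(p * q for p, q in zip(prices_t, qty_0))
    cost_0 = sum(p * q for p, q in zip(prices_0, qty_0))
    return cost_t / cost_0

# the price of good 1 doubles; consumers would in fact substitute away
print(laspeyres([2.0, 1.0], [1.0, 1.0], [10, 10]))   # 1.5, a 50% increase
```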
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Federick T.; Schlicher, Bob G
2015-01-01
There are many influencing economic factors to weigh from the defender-practitioner stakeholder point-of-view that involve cost combined with development/deployment models. Some examples include the cost of countermeasures themselves, the cost of training and the cost of maintenance. Meanwhile, we must better anticipate the total cost from a compromise. The return on investment in countermeasures is essentially impact costs (i.e., the costs from violating availability, integrity and confidentiality / privacy requirements). The natural question arises about choosing the main risks that must be mitigated/controlled and monitored in deciding where to focus security investments. To answer this question, we have investigated the cost/benefits to the attacker/defender to better estimate risk exposure. In doing so, it is important to develop a sound basis for estimating the factors that derive risk exposure, such as likelihood that a threat will emerge and whether it will be thwarted. This impact assessment framework can provide key information for ranking cybersecurity threats and managing risk.
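A minimal sketch of the ranking step described above, with entirely hypothetical threats, likelihoods, countermeasure effectiveness, and impact costs:

```python
# risk exposure = P(threat emerges) x P(not thwarted) x impact cost
threats = [
    # (name, annual likelihood, P(countermeasure thwarts), impact cost $)
    ("phishing",      0.9, 0.7,   250_000),
    ("ransomware",    0.4, 0.5, 2_000_000),
    ("insider theft", 0.1, 0.2, 5_000_000),
]

def exposure(likelihood, p_thwart, impact):
    return likelihood * (1.0 - p_thwart) * impact

# rank threats to decide where to focus security investments
for name, l, p, c in sorted(threats, key=lambda t: -exposure(*t[1:])):
    print(f"{name:14s} ${exposure(l, p, c):>12,.0f}")
```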
Acoustic environmental accuracy requirements for response determination
NASA Technical Reports Server (NTRS)
Pettitt, M. R.
1983-01-01
A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.
Implementing Computer Based Laboratories
NASA Astrophysics Data System (ADS)
Peterson, David
2001-11-01
Physics students at Francis Marion University will complete several required laboratory exercises utilizing computer-based Vernier probes. The simple pendulum, the acceleration due to gravity, simple harmonic motion, radioactive half lives, and radiation inverse square law experiments will be incorporated into calculus-based and algebra-based physics courses. Assessment of student learning and faculty satisfaction will be carried out by surveys and test results. Cost effectiveness and time effectiveness assessments will be presented. Majors in Computational Physics, Health Physics, Engineering, Chemistry, Mathematics and Biology take these courses, and assessments will be categorized by major. To enhance the computer skills of students enrolled in the courses, MAPLE will be used for further analysis of the data acquired during the experiments. Assessment of these enhancement exercises will also be presented.
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps: first, a symbolic analysis step that performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, together with an estimation of the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute in parallel the gradient of the cost function. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires performing one simulation per non-redundant shot and receiver position. The same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a 4201 x 1001 finite-difference grid with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on 6 32-bit dual-processor nodes with 4 Gbytes of RAM memory per node when only the LU factorization is performed in parallel. Preliminary estimations of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
Personal digital assistant applications for the healthcare provider.
Keplar, Kristine E; Urbanski, Christopher J
2003-02-01
To review some common medical applications available for personal digital assistants (PDAs), with brief discussion of the different PDA operating systems and memory requirements. Key search terms included handheld, PDA, personal digital assistants, and medical applications. The literature was accessed through MEDLINE (1999-August 2002). Other information was obtained through secondary sources such as Web sites describing common PDAs. Medical applications available on PDAs are numerous and include general drug references, specialized drug references (e.g., pediatrics, geriatrics, cardiology, infectious disease), diagnostic guides, medical calculators, herbal medication references, nursing references, toxicology references, and patient tracking databases. Costs and memory requirements for these programs can vary; consequently, the healthcare provider must limit the medical applications that are placed on the handheld computer. This article attempts to systematically describe the common medical applications available for the handheld computer along with cost, memory and download requirements, and Web site information. This review found many excellent PDA drug information applications offering many features which will aid the healthcare provider. Very likely, after using these PDA applications, the healthcare provider will find them indispensable, as their multifunctional capabilities can save time, improve accuracy, and allow for general business procedures as well as being a quick reference tool. To forgo the benefits of this technology might be a step backward.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
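One concrete member of this family is the Keystroke-Level Model, which predicts expert task time as a sum of standard operator times; here is a sketch using the classic Card, Moran, and Newell estimates.

```python
# standard KLM operator times in seconds (Card, Moran & Newell)
KLM = {"K": 0.28,   # keystroke, average typist
       "P": 1.10,   # point at a target with the mouse
       "H": 0.40,   # home hands between keyboard and mouse
       "M": 1.35}   # mental preparation

def klm_time(ops):
    """Estimated execution time for an operator string such as 'MHPKKKKK'."""
    return sum(KLM[o] for o in ops)

# think, reach for the mouse, point at a field, type five characters
print(klm_time("MHP" + "K" * 5))   # ~4.25 s
```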
Determining Training Device Requirements in Army Aviation Systems
NASA Technical Reports Server (NTRS)
Poumade, M. L.
1984-01-01
A decision making methodology which applies the systems approach to the training problem is discussed. Training is viewed as a total system instead of a collection of individual devices and unrelated techniques. The core of the methodology is the use of optimization techniques such as the transportation algorithm and multiobjective goal programming with training task and training device specific data. The role of computers, especially automated data bases and computer simulation models, in the development of training programs is also discussed. The approach can provide significant training enhancement and cost savings over the more traditional, intuitive form of training development and device requirements process. While given from an aviation perspective, the methodology is equally applicable to other training development efforts.
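A small sketch of the transportation-algorithm step as a linear program, solved here with scipy; the device names, costs, and hour figures below are hypothetical, for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# allocate training-device hours (supply) to training tasks (demand)
# at minimum total cost
cost = np.array([[4.0, 8.0, 6.0],    # device A -> tasks 1..3 ($/hour)
                 [5.0, 3.0, 9.0]])   # device B -> tasks 1..3
supply = [120.0, 80.0]               # hours available per device
demand = [60.0, 70.0, 70.0]          # hours required per task

m, n = cost.shape
# each task's demand must be met exactly
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0
# no device may exceed its available hours
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply,
              A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(m, n))   # optimal hour allocation
print(res.fun)               # minimum total cost
```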
Computer control of a robotic satellite servicer
NASA Technical Reports Server (NTRS)
Fernandez, K. R.
1980-01-01
The advantages that will accrue from the in-orbit servicing of satellites are listed. It is noted that a concept in satellite servicing which holds promise as a compromise between the high flexibility and adaptability of manned vehicles and the lower cost of an unmanned vehicle involves an unmanned servicer carrying a remotely supervised robotic manipulator arm. Because of deficiencies in sensor technology, robot servicing would require that satellites be designed according to a modular concept. A description is given of the servicer simulation hardware, the computer and interface hardware, and the software. It is noted that several areas require further development; these include automated docking, modularization of satellite design, reliable connector and latching mechanisms, development of manipulators for space environments, and development of automated diagnostic techniques.
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
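The simplest reference-potential scheme is a one-sided free energy perturbation (Zwanzig) correction from the cheap level to the expensive one; the sketch below assumes arrays of low- and high-level single-point energies evaluated on configurations sampled at the low level, with hypothetical numbers.

```python
import numpy as np

def reference_potential_correction(E_low, E_high, kT=0.596):
    """dA = -kT ln < exp(-(E_high - E_low)/kT) >_low : only high-level
    single-point energies on reference-sampled frames are needed, never
    high-level sampling. kT defaults to ~300 K in kcal/mol."""
    dE = np.asarray(E_high) - np.asarray(E_low)
    dE0 = dE.min()                    # shift the exponent for stability
    return dE0 - kT * np.log(np.mean(np.exp(-(dE - dE0) / kT)))

# hypothetical energy gaps (kcal/mol) on 1000 reference-sampled frames
rng = np.random.default_rng(0)
print(reference_potential_correction(np.zeros(1000),
                                     rng.normal(3.0, 0.5, 1000)))
```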
Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.
He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk
2014-10-01
The radio frequency identification (RFID) technology has been widely adopted and being deployed as a dominant identification technology in a health care domain such as medical information authentication, patient tracking, blood transfusion medicine, etc. With more and more stringent security and privacy requirements to RFID based authentication schemes, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet the requirements. However, many recently published ECC based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC based RFID authentication integrated with an ID verifier transfer protocol that overcomes the weaknesses of the existing schemes. A comprehensive security analysis has been conducted to show strong security properties that are provided from the proposed authentication scheme. Moreover, the performance of the proposed authentication scheme is analyzed in terms of computational cost, communicational cost, and storage requirement.
A Secure and Efficient Audit Mechanism for Dynamic Shared Data in Cloud Storage
2014-01-01
With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630
A secure and efficient audit mechanism for dynamic shared data in cloud storage.
Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo
2014-01-01
With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.
The NASA MERIT program - Developing new concepts for accurate flight planning
NASA Technical Reports Server (NTRS)
Steinberg, R.
1982-01-01
It is noted that the rising cost of aviation fuel has necessitated the development of a new approach to upper air forecasting for flight planning. It is shown that the spatial resolution of the present weather forecast models used in fully automated computer flight planning is an important accuracy-limiting factor, and it is proposed that man be put back into the system, although not in the way he has been used in the past. A new approach is proposed which uses the application of man-computer interactive display techniques to upper air forecasting to retain the fine scale features of the atmosphere inherent in the present data base in order to provide a more accurate and cost effective flight plan. It is pointed out that, as a result of NASA research, the hardware required for this approach already exists.
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.
2016-07-01
Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires much smaller forward simulation runs for each iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10⁶ or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
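A sketch of the moment-compression step: each voxel's breakthrough curve, potentially thousands of samples, collapses to one or two numbers before inversion. The Gaussian curve below is illustrative only.

```python
import numpy as np

def temporal_moments(t, c):
    """Zeroth temporal moment and normalized first moment (mean arrival
    time) of a breakthrough curve c(t), via the trapezoidal rule."""
    trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    m0 = trap(c)
    return m0, trap(t * c) / m0

t = np.linspace(0.0, 10.0, 501)            # sampling times
c = np.exp(-((t - 4.0) ** 2) / 0.5)        # synthetic breakthrough curve
print(temporal_moments(t, c))              # (m0, mean arrival time ~4.0)
```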
NASA Technical Reports Server (NTRS)
1974-01-01
A collection of blank worksheets for use on each BRAVO problem to be analyzed is supplied, for the purposes of recording the inputs for the BRAVO analysis, working out the definition of mission equipment, recording inputs to the satellite synthesis computer program, estimating satellite earth station costs, costing terrestrial systems, and cost effectiveness calculations. The group of analysts working BRAVO will normally use a set of worksheets on each problem, however, the workbook pages are of sufficiently good quality that the user can duplicate them, if more worksheet blanks are required than supplied. For Vol. 1, see N74-12493; for Vol. 2, see N74-14530.
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information but have the disadvantage of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed in the paper. Required adjustments needed for optimum performance of the algorithms relative to theoretical achievable limits will be outlined.
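The ratio-versus-run-time comparison is easy to reproduce with stock codecs from the Python standard library; in the sketch below, DEFLATE (zlib), an LZ77 derivative, stands in for the Lempel-Ziv family.

```python
import lzma
import time
import zlib

def benchmark(name, compress, data):
    """Print compression ratio and wall-clock time for one codec."""
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name:5s} ratio {len(data) / len(out):6.2f}  "
          f"time {dt * 1e3:8.2f} ms")

data = b"the quick brown fox jumps over the lazy dog " * 2000
benchmark("zlib", lambda d: zlib.compress(d, 9), data)
benchmark("lzma", lambda d: lzma.compress(d), data)
```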
Social Protocols for Agile Virtual Teams
NASA Astrophysics Data System (ADS)
Picard, Willy
Despite many works on collaborative networked organizations (CNOs), CSCW, groupware, workflow systems and social networks, computer support for virtual teams is still insufficient, especially support for agility, i.e. the capability of virtual team members to rapidly and cost-efficiently adapt the way they interact to changes. In this paper, requirements for computer support for agile virtual teams are presented. Next, an extension of the concept of social protocol is proposed as a novel model supporting agile interactions within virtual teams. The extended concept of social protocol consists of an extended social network and a workflow model.
Eisenbach, Markus
2017-01-01
A major impediment to deploying next-generation high-performance computational systems is the required electrical power, often measured in units of megawatts. The solution to this problem is driving the introduction of novel machine architectures, such as those employing many-core processors and specialized accelerators. In this article, we describe the use of a hybrid accelerated architecture to achieve both reduced time to solution and the associated reduction in the electrical cost for a state-of-the-art materials science computation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
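The same precision-mixing idea appears in its classic linear-algebra form as iterative refinement: do the expensive repeated work in float32 and accumulate residuals and the solution in float64 (SDC applies the analogous split to correction sweeps rather than to a factorization). A self-contained sketch:

```python
import numpy as np

def mixed_precision_solve(A, b, sweeps=5):
    """Solve Ax = b: low-precision solves supply corrections, while
    residuals are formed in full precision, recovering double-precision
    accuracy at (mostly) single-precision cost."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - A @ x                                  # float64 residual
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)                     # cheap correction
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)) + 200.0 * np.eye(200)  # well conditioned
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))   # near double-precision residual
```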
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background: Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results: We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions: The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262