Sample records for computationally frugal methods

  1. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
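
    The "10s of parallelizable model runs" that local, frugal methods require can be sketched with one-sided finite differences: p parameters need only p + 1 runs, all independent and hence parallelizable. The toy model below is a hypothetical stand-in for an expensive groundwater model, not anything from the paper.

```python
import numpy as np

def local_sensitivities(model, params, rel_step=0.01):
    """Approximate d(output)/d(param) with one-sided finite differences.

    Needs only len(params) + 1 model runs, all independent and hence
    parallelizable -- the kind of budget frugal methods rely on.
    """
    params = np.asarray(params, dtype=float)
    base = model(params)
    sens = np.empty_like(params)
    for i, p in enumerate(params):
        step = rel_step * (abs(p) if p != 0 else 1.0)
        perturbed = params.copy()
        perturbed[i] += step
        sens[i] = (model(perturbed) - base) / step
    return sens

# Hypothetical stand-in for an expensive groundwater model.
def toy_model(theta):
    k, s = theta  # e.g. a conductivity-like and a storage-like parameter
    return 3.0 * k + k * s

print(local_sensitivities(toy_model, [2.0, 0.5]))  # -> [3.5 2. ]
```

    The paper's caveat applies directly to this sketch: the derivatives are only as trustworthy as the local linearity of the model around the evaluated parameter values.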

  2. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. 
    A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
    Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.
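
    Of the screening methods named in this abstract, the method of Morris is a representative mid-budget option: it costs n_traj * (n_params + 1) runs, far fewer than variance-based methods typically need. A minimal sketch, using a made-up test function in place of a real environmental model:

```python
import numpy as np

rng = np.random.default_rng(42)

def morris_screening(model, n_params, n_traj=20, delta=0.1):
    """One-at-a-time (Morris-style) screening on the unit hypercube.

    Cost: n_traj * (n_params + 1) model runs -- cheap relative to the
    10,000s of runs variance-based methods can require.
    """
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_params)
        f_x = model(x)
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((model(x_step) - f_x) / delta)
    mu_star = [np.mean(np.abs(e)) for e in effects]   # importance
    sigma = [np.std(e) for e in effects]              # nonlinearity/interaction
    return mu_star, sigma

# Hypothetical test function: x0 matters a lot, x2 not at all.
def f(x):
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

mu_star, sigma = morris_screening(f, 3)
print(mu_star)  # roughly [5.0, ~0.5, 0.0]
```

    The mu* ranking answers the abstract's question "What parameters are important to predictions?" at screening precision; a nonzero sigma flags the nonlinearity that the frugal local methods above struggle with.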

  3. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest.
    How does the information obtained from the local methods typical of (2) and the globally averaged methods typical of (3) compare for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface water and groundwater modeling.
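
    The demand relation stated in the abstract (computational demand equals run time times the number of model runs divided by parallelization opportunities) is easy to work through numerically; the runtimes, run counts, and core count below are illustrative, not from the talk.

```python
def wall_clock_hours(run_time_h, n_runs, n_parallel):
    """Computational demand = run time * number of model runs
    / parallelization opportunities (as defined in the abstract)."""
    return run_time_h * n_runs / n_parallel

# Illustrative numbers: a 2-hour model on 50 cores.
frugal = wall_clock_hours(2.0, 100, 50)        # frugal analysis: 100 runs
global_sa = wall_clock_hours(2.0, 10_000, 50)  # global analysis: 10,000 runs
print(frugal, global_sa)  # 4.0 400.0
```

    With these made-up numbers, the frugal analysis costs 4 wall-clock hours versus 400 for the global one; that two-order-of-magnitude gap is the tradeoff the talk weighs against building a surrogate model.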

  4. Optimally frugal foraging

    NASA Astrophysics Data System (ADS)

    Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.

    2018-02-01

    We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
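
    The model is simple enough to simulate directly. The sketch below is a Monte Carlo estimate of the mean lifetime under one reading of the rules (1-d lattice, eating resets the starvation clock); it is not the paper's analytical computation, and details such as whether the starting site is eaten are assumptions.

```python
import random

def forager_lifetime(S=10, k=3, trials=2000, seed=1):
    """Monte Carlo estimate of the mean lifetime of a frugal forager
    on a 1-d lattice (a sketch of the model, not the paper's analytics)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        eaten = set()          # depleted sites never replenish
        pos, since_meal, t = 0, 0, 0
        while since_meal < S:  # starves after S steps without eating
            pos += rng.choice((-1, 1))
            t += 1
            since_meal += 1
            # frugal rule: eat only when within k steps of starvation
            if pos not in eaten and since_meal >= S - k:
                eaten.add(pos)
                since_meal = 0
        total += t
    return total / trials

# Lifetime varies non-monotonically with the frugality threshold k,
# hinting at an optimal k* (exact values depend on S and the dynamics).
for k in (0, 3, 9):
    print(k, forager_lifetime(k=k))
```

    Sweeping k in a simulation like this reproduces the qualitative claim of the abstract, that neither maximal greed (large k) nor maximal restraint (k near 0) maximizes the lifetime.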

  5. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS's Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  6. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independence of the convergence testing approach, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010), and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known.
    This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations, and it therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
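
    The bootstrapping baseline that the abstract contrasts MVA with can be sketched as follows; the elementary-effect values are synthetic, and this is not the MVA algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(elementary_effects, n_boot=1000, alpha=0.05):
    """Bootstrap confidence interval for a Morris-style mu* index.

    This is the conventional convergence check the abstract criticizes:
    informative, but the resampling itself can get expensive for large
    model outputs and many bootstrap replicates.
    """
    ee = np.asarray(elementary_effects, dtype=float)
    stats = [np.mean(np.abs(rng.choice(ee, size=ee.size, replace=True)))
             for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Synthetic elementary effects standing in for 20 Morris trajectories.
ee = rng.normal(loc=2.0, scale=0.3, size=20)
lo, hi = bootstrap_ci(ee)
print(round(lo, 2), round(hi, 2))
```

    A narrow interval suggests the index has converged at this budget; the selling point of MVA is reaching a similar judgment with neither extra model runs nor the resampling cost shown here.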

  7. Knowledge, transparency, and refutability in groundwater models, an example from the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri

    2013-01-01

    This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (10s-1000s of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.
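
    Error-based weighting, one of the critical methods named above, is conventionally implemented by dividing each residual by the standard deviation of its measurement error, so that observations with different units can enter one objective function. The head and flow observations below are hypothetical.

```python
import numpy as np

def weighted_sse(simulated, observed, sigmas):
    """Error-based weighting: each residual is scaled by the standard
    deviation of its measurement error, so heads (m) and flows (m3/d)
    contribute to one objective function on a common, dimensionless scale."""
    r = (np.asarray(observed) - np.asarray(simulated)) / np.asarray(sigmas)
    return float(np.sum(r ** 2))

# Hypothetical mix of observation types with very different units/errors.
heads_obs, heads_sim, heads_sigma = [105.2, 98.7], [105.0, 99.1], [0.5, 0.5]
flow_obs, flow_sim, flow_sigma = [1200.0], [1150.0], [100.0]

sse = weighted_sse(heads_sim + flow_sim, heads_obs + flow_obs,
                   heads_sigma + flow_sigma)
print(round(sse, 3))  # 1.05
```

    The design choice matters: without the error-based scaling, the 50 m3/d flow residual would swamp the sub-meter head residuals purely because of its magnitude.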

  8. Frugal Biotech Applications of Low-Temperature Plasma.

    PubMed

    Machala, Zdenko; Graves, David B

    2018-06-01

    Gas discharge low-temperature air plasma can be utilized for a variety of applications, including biomedical, at low cost. We term these applications 'frugal plasma' - an example of frugal innovation. We demonstrate how simple, robust, low-cost frugal plasma devices can be used to safely disinfect instruments, surfaces, and water.

  9. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independence of the convergence testing approach, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA.
    The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations, and it therefore enables the checking of already processed sensitivity results. This is one step towards reliable and transferable published sensitivity results.

  10. Creating Frugal Citizens: The Liberal Egalitarian Case for Teaching Frugality

    ERIC Educational Resources Information Center

    Zwarthoed, Danielle

    2015-01-01

    According to Agenda 21, the United Nation's action plan for sustainable development, "Governments and private sector organisations should promote more positive attitudes towards sustainable consumption through education, public awareness programmes and other means". But some could wonder whether the cultivation of frugal consumption…

  11. On the suitability of fast and frugal heuristics for designing values clarification methods in patient decision aids: a critical analysis.

    PubMed

    Pieterse, Arwen H; de Vries, Marieke

    2013-09-01

    Background: Increasingly, patient decision aids and values clarification methods (VCMs) are being developed to support patients in making preference-sensitive health-care decisions. Many VCMs encourage extensive deliberation about options, without solid theoretical or empirical evidence showing that deliberation is advantageous. Research suggests that simple, fast and frugal heuristic decision strategies sometimes result in better judgments and decisions. Durand et al. have developed two fast and frugal heuristic-based VCMs. Objective: To critically analyse the suitability of the 'take the best' (TTB) and 'tallying' fast and frugal heuristics in the context of patient decision making. Strategy: Analysis of the structural similarities between the environments in which the TTB and tallying heuristics have been proven successful and the context of patient decision making, and of the potential of these heuristic decision processes to support patient decision making. Conclusion: The specific nature of patient preference-sensitive decision making does not seem to resemble environments in which the TTB and tallying heuristics have proven successful. Encouraging patients to consider less rather than more relevant information potentially even deteriorates their values clarification process. Values clarification methods promoting the use of more intuitive decision strategies may sometimes be more effective. Nevertheless, we strongly recommend further theoretical thinking about the expected value of such heuristics and of other more intuitive decision strategies in this context, as well as empirical assessments of the mechanisms by which inducing such decision strategies may impact the quality and outcome of values clarification.

  12. On the suitability of fast and frugal heuristics for designing values clarification methods in patient decision aids: a critical analysis

    PubMed Central

    Pieterse, Arwen H.; de Vries, Marieke

    2011-01-01

    Background: Increasingly, patient decision aids and values clarification methods (VCMs) are being developed to support patients in making preference-sensitive health-care decisions. Many VCMs encourage extensive deliberation about options, without solid theoretical or empirical evidence showing that deliberation is advantageous. Research suggests that simple, fast and frugal heuristic decision strategies sometimes result in better judgments and decisions. Durand et al. have developed two fast and frugal heuristic-based VCMs. Objective: To critically analyse the suitability of the 'take the best' (TTB) and 'tallying' fast and frugal heuristics in the context of patient decision making. Strategy: Analysis of the structural similarities between the environments in which the TTB and tallying heuristics have been proven successful and the context of patient decision making, and of the potential of these heuristic decision processes to support patient decision making. Conclusion: The specific nature of patient preference-sensitive decision making does not seem to resemble environments in which the TTB and tallying heuristics have proven successful. Encouraging patients to consider less rather than more relevant information potentially even deteriorates their values clarification process. Values clarification methods promoting the use of more intuitive decision strategies may sometimes be more effective. Nevertheless, we strongly recommend further theoretical thinking about the expected value of such heuristics and of other more intuitive decision strategies in this context, as well as empirical assessments of the mechanisms by which inducing such decision strategies may impact the quality and outcome of values clarification. PMID:21902770

  13. Revisiting classical design in engineering from a perspective of frugality.

    PubMed

    Rao, Balkrishna C

    2017-05-01

    The conservative nature of design in engineering has typically unleashed products fabricated with generous amounts of raw materials. This is epitomized by the factor of safety, whose values higher than unity reflect various uncertainties of design that are tackled through material padding. This effort proposes a new factor of safety, called the factor of frugality, that could be used in ecodesign and which addresses both the rigors of the classical design process and the quantification of savings in materials going into a product. An example of frugal shaft design, together with some other cases, is presented to explain the working of the factor of frugality. Adoption of the frugality factor would entail a change in design philosophy whereby designers would constantly avail themselves of a rigorous design process coupled with material-saving schemes for realizing products that are benign to the environment. Such a change in the foundations of design would aid the stewardship of Earth in avoiding planetary boundaries, since engineering influences a significant proportion of human endeavors.

  14. Frugal innovation in medicine for low resource settings.

    PubMed

    Tran, Viet-Thi; Ravaud, Philippe

    2016-07-07

    Whilst it is clear that technology is crucial to advancing healthcare, innovation in medicine is not just about high-tech tools, new procedures or genome discoveries. In constrained environments, healthcare providers often create unexpected solutions to provide adequate healthcare to patients. These inexpensive but effective frugal innovations may be imperfect, but they have the power to ensure that health is within reach of everyone. Frugal innovations are not limited to low-resource settings: ingenious ideas can be adapted to offer simpler and disruptive alternatives to usual care all around the world, representing the concept of "reverse innovation". In this article, we discuss the different types of frugal innovations, illustrated with examples from the literature, and argue for the need to give voice to this neglected type of innovation in medicine.

  15. Bounded Rationality, Emotions and Older Adult Decision Making: Not so Fast and yet so Frugal

    ERIC Educational Resources Information Center

    Hanoch, Yaniv; Wood, Stacey; Rice, Thomas

    2007-01-01

    Herbert Simon's work on bounded rationality has had little impact on researchers studying older adults' decision making. This omission is surprising, as human constraints on computation and memory are exacerbated in older adults. The study of older adults' decision-making processes could benefit from employing a bounded rationality perspective,…

  16. Frugal Construction Standards. Final [Report].

    ERIC Educational Resources Information Center

    SMART Schools Clearinghouse, Tallahassee, FL.

    This booklet provides best practice recommendations for building functional and frugal schools in Florida. Seventeen best practice construction recommendations are addressed, including recommendations for sitework, concrete, masonry, metals, wood and plastics, thermal and moisture protection, doors and windows, finishes, equipment, furnishings,…

  17. Fabrication of stainless steel spherical anodes for use with boat-mounted boom electroshocker

    USGS Publications Warehouse

    Martinez, Patrick J.; Tiffan, Kenneth F.

    1992-01-01

    A frugal method of fabricating spherical anodes from stainless steel mixing bowls is presented. We believe that the purported mechanical disadvantages of using spherical electrodes are largely unfounded.

  18. Habitual control of goal selection in humans

    PubMed Central

    Cushman, Fiery; Morris, Adam

    2015-01-01

    Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task. PMID:26460050
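
    The hierarchical architecture described here (habits select goals, planning achieves them) can be caricatured in a few lines. The environment, goal values, and learning rate below are invented for illustration and do not reproduce the paper's reinforcement learning formalization.

```python
import random

random.seed(3)

# Hypothetical goal values the habit system will learn to cache.
GOALS = {"coffee": 1.0, "tea": 0.4}
q = {g: 0.0 for g in GOALS}   # habitual (model-free) goal values
ALPHA = 0.5                   # learning rate (illustrative)

def habitual_goal_choice():
    """Frugal: goal selection is a single cached-value lookup."""
    return max(q, key=q.get)

def plan_route(start, goal, graph):
    """Flexible but costlier: breadth-first search, run only for
    the goal the habit system selected."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"desk": ["hall"], "hall": ["kitchen", "cafe"],
         "kitchen": ["tea"], "cafe": ["coffee"]}

for _ in range(20):           # habit slowly absorbs experienced rewards
    g = random.choice(list(GOALS))
    q[g] += ALPHA * (GOALS[g] - q[g])

goal = habitual_goal_choice()
print(goal, plan_route("desk", goal, graph))
```

    The division of labor is the point: the cached lookup adapts slowly (many updates to shift q), while the planner adapts instantly to any change in the graph, matching the cost/flexibility tradeoff the abstract describes.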

  19. Fast or Frugal, but Not Both: Decision Heuristics Under Time Pressure

    PubMed Central

    2017-01-01

    Heuristics are simple, yet effective, strategies that people use to make decisions. Because heuristics do not require all available information, they are thought to be easy to implement and to not tax limited cognitive resources, which has led heuristics to be characterized as fast-and-frugal. We question this monolithic conception of heuristics by contrasting the cognitive demands of two popular heuristics, Tallying and Take-the-Best. We contend that heuristics that are frugal in terms of information usage may not always be fast because of the attentional control required to implement this focus in certain contexts. In support of this hypothesis, we find that Take-the-Best, while being more frugal in terms of information usage, is slower to implement and fares worse under time pressure manipulations than Tallying. This effect is then reversed when search costs for Take-the-Best are reduced by changing the format of the stimuli. These findings suggest that heuristics are heterogeneous and should be unpacked according to their cognitive demands to determine the circumstances in which a heuristic best applies. PMID:28557503
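
    The two heuristics being contrasted are simple to state precisely. The cue vectors and validity order below are hypothetical, in the spirit of the classic city-size tasks.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """TTB: inspect cues one at a time in validity order and decide on
    the first cue that discriminates. Frugal in information use, but
    inherently serial -- the source of its cost under time pressure."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] else "B"
    return "tie"

def tallying(cues_a, cues_b):
    """Tallying: count the positive cues for each option. Uses all the
    information, but the comparison is a single unweighted sum."""
    a, b = sum(cues_a), sum(cues_b)
    return "A" if a > b else "B" if b > a else "tie"

# Hypothetical cues: [is a capital?, has an airport?, has a university?]
a = [1, 0, 1]
b = [0, 1, 1]
print(take_the_best(a, b, validity_order=[0, 1, 2]))  # "A": first cue decides
print(tallying(a, b))                                 # "tie": 2 vs 2
```

    The example also shows why the two can disagree: TTB lets the single most valid cue override everything, whereas tallying treats cues as interchangeable votes.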

  20. Fast or frugal, but not both: Decision heuristics under time pressure.

    PubMed

    Bobadilla-Suarez, Sebastian; Love, Bradley C

    2018-01-01

    Heuristics are simple, yet effective, strategies that people use to make decisions. Because heuristics do not require all available information, they are thought to be easy to implement and to not tax limited cognitive resources, which has led heuristics to be characterized as fast-and-frugal. We question this monolithic conception of heuristics by contrasting the cognitive demands of two popular heuristics, Tallying and Take-the-Best. We contend that heuristics that are frugal in terms of information usage may not always be fast because of the attentional control required to implement this focus in certain contexts. In support of this hypothesis, we find that Take-the-Best, while being more frugal in terms of information usage, is slower to implement and fares worse under time pressure manipulations than Tallying. This effect is then reversed when search costs for Take-the-Best are reduced by changing the format of the stimuli. These findings suggest that heuristics are heterogeneous and should be unpacked according to their cognitive demands to determine the circumstances in which a heuristic best applies.

  1. Psychological Plausibility of the Theory of Probabilistic Mental Models and the Fast and Frugal Heuristics

    ERIC Educational Resources Information Center

    Dougherty, Michael R.; Franco-Watkins, Ana M.; Thomas, Rick

    2008-01-01

    The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These…

  2. Not So Fast! (and Not So Frugal!): Rethinking the Recognition Heuristic

    ERIC Educational Resources Information Center

    Oppenheimer, Daniel M.

    2003-01-01

    The "fast and frugal" approach to reasoning (Gigerenzer, G., & Todd, P. M. (1999). "Simple heuristics that make us smart." New York: Oxford University Press) claims that individuals use non-compensatory strategies in judgment--the idea that only one cue is taken into account in reasoning. The simplest and most important of these heuristics…

  3. Fast or Frugal, but Not Both: Decision Heuristics under Time Pressure

    ERIC Educational Resources Information Center

    Bobadilla-Suarez, Sebastian; Love, Bradley C.

    2018-01-01

    Heuristics are simple, yet effective, strategies that people use to make decisions. Because heuristics do not require all available information, they are thought to be easy to implement and to not tax limited cognitive resources, which has led heuristics to be characterized as fast-and-frugal. We question this monolithic conception of heuristics…

  4. Fast and Frugal Heuristics Are Plausible Models of Cognition: Reply to Dougherty, Franco-Watkins, and Thomas (2008)

    ERIC Educational Resources Information Center

    Gigerenzer, Gerd; Hoffrage, Ulrich; Goldstein, Daniel G.

    2008-01-01

    M. R. Dougherty, A. M. Franco-Watkins, and R. Thomas (2008) conjectured that fast and frugal heuristics need an automatic frequency counter for ordering cues. In fact, only a few heuristics order cues, and these orderings can arise from evolutionary, social, or individual learning, none of which requires automatic frequency counting. The idea that…

  5. Reconsidering "evidence" for fast-and-frugal heuristics.

    PubMed

    Hilbig, Benjamin E

    2010-12-01

    In several recent reviews, authors have argued for the pervasive use of fast-and-frugal heuristics in human judgment. They have provided an overview of heuristics and have reiterated findings corroborating that such heuristics can be very valid strategies leading to high accuracy. They also have reviewed previous work that implies that simple heuristics are actually used by decision makers. Unfortunately, concerning the latter point, these reviews appear to be somewhat incomplete. More important, previous conclusions have been derived from investigations that bear some noteworthy methodological limitations. I demonstrate these by proposing a new heuristic and provide some novel critical findings. Also, I review some of the relevant literature often not, or only partially, considered. Overall, although some fast-and-frugal heuristics indeed seem to predict behavior at times, there is little to no evidence for others. More generally, the empirical evidence available does not warrant the conclusion that heuristics are pervasively used.

  6. Global Lessons In Frugal Innovation To Improve Health Care Delivery In The United States.

    PubMed

    Bhatti, Yasser; Taylor, Andrea; Harris, Matthew; Wadge, Hester; Escobar, Erin; Prime, Matt; Patel, Hannah; Carter, Alexander W; Parston, Greg; Darzi, Ara W; Udayakumar, Krishna

    2017-11-01

    In a 2015 global study of low-cost or frugal innovations, we identified five leading innovations that scaled successfully in their original contexts and that may provide insights for scaling such innovations in the United States. We describe common themes among these diverse innovations, critical factors for their translation to the United States to improve the efficiency and quality of health care, and lessons for the implementation and scaling of other innovations. We highlight promising trends in the United States that support adapting these innovations, including growing interest in moving care out of health care facilities and into community and home settings; the growth of alternative payment models and incentives to experiment with new approaches to population health and care delivery; and the increasing use of diverse health professionals, such as community health workers and advanced practice providers. Our findings should inspire policy makers and health care professionals and inform them about the potential for globally sourced frugal innovations to benefit US health care.

  7. Compact and Hybrid Feature Description for Building Extraction

    NASA Astrophysics Data System (ADS)

    Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.

    2017-05-01

    Building extraction from aerial orthophotos is crucial for various applications. Deep learning has been shown to address building extraction with high accuracy and robustness, but training such a classifier requires quite a large number of samples. To enable accurate, semi-interactive labelling, the performance of the feature description is crucial, as it strongly affects classification accuracy. In this paper, we propose a compact, hybrid feature description method that guarantees desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of color channels, the descriptor is not only computationally frugal but also more accurate than SURF for building extraction.
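The binary-intensity-test idea behind such descriptors can be sketched in a few lines (a hypothetical BRIEF-style layout, not the authors' exact four-set, color-channel construction):

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """Describe an image patch by binary intensity comparisons.

    patch: 2-D array of pixel intensities.
    pairs: sequence of ((r1, c1), (r2, c2)) pixel coordinate pairs.
    Returns a bit vector: 1 where the first pixel is brighter.
    """
    return np.array([1 if patch[r1, c1] > patch[r2, c2] else 0
                     for (r1, c1), (r2, c2) in pairs], dtype=np.uint8)

def hamming_distance(d1, d2):
    # Binary descriptors are matched cheaply by Hamming distance.
    return int(np.count_nonzero(d1 != d2))

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32))          # a random 32x32 test patch
pairs = [tuple(map(tuple, rng.integers(0, 32, size=(2, 2))))
         for _ in range(256)]                        # 256 random comparison pairs
d = binary_descriptor(patch, pairs)
print(len(d), hamming_distance(d, d))                # 256-bit code; self-distance 0
```

Because each test is a single comparison and matching is an XOR-and-count, such descriptors stay computationally frugal relative to gradient-histogram features like SURF.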

  8. Assessing the empirical validity of the "take-the-best" heuristic as a model of human probabilistic inference.

    PubMed

    Bröder, A

    2000-09-01

    The boundedly rational "Take-The-Best" heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodological problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
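The lexicographic rule at the heart of TTB is simple enough to state in a few lines (a minimal sketch, not Bröder's experimental materials): cues are examined in order of validity, and the first cue that discriminates between the options decides.

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Take-The-Best: compare two options on cues in validity order.

    cues_a, cues_b: dicts mapping cue name -> 1 (positive), 0 (negative),
    or None (unknown). Returns 'A', 'B', or 'guess'.
    """
    for cue in cue_order:          # most valid cue first
        a, b = cues_a.get(cue), cues_b.get(cue)
        if a == 1 and b != 1:      # the first discriminating cue decides;
            return "A"             # all remaining cues are ignored
        if b == 1 and a != 1:
            return "B"
    return "guess"                 # no cue discriminates

# Hypothetical "which city is larger?" task with three cues in validity order.
order = ["recognized", "has_team", "is_capital"]
a = {"recognized": 1, "has_team": 0, "is_capital": 0}
b = {"recognized": 1, "has_team": 1, "is_capital": 0}
print(take_the_best(a, b, order))  # -> B, decided by the second cue alone
```

The rule is noncompensatory: once a cue discriminates, no combination of later cues can overturn the choice, which is exactly the property the experiments probe.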

  9. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
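The control loop can be sketched as a one-dimensional online Newton-Raphson iteration (the plant function below is a hypothetical stand-in for the wind tunnel, not the authors' apparatus):

```python
def newton_control(measure, target, u0, du=0.05, tol=0.01, max_iter=20):
    """Online Newton-Raphson: adjust input u until measure(u) is near target.

    The slope is estimated by finite differences, mimicking a controller
    that only has sensor readings, not an analytic plant model.
    """
    u = u0
    for i in range(max_iter):
        err = measure(u) - target
        if abs(err) < tol:
            return u, i                              # converged
        slope = (measure(u + du) - measure(u)) / du  # finite-difference slope
        u = u - err / slope                          # Newton-Raphson update
    return u, max_iter

# Hypothetical monotone plant: turbulence intensity vs. a single grid setting.
plant = lambda u: 0.09 + 0.4 * u ** 2
u, iters = newton_control(plant, target=0.30, u0=0.1)
print(round(plant(u), 3), iters)   # settles near the 0.30 target in a few steps
```

Smoothing the measurements (as down-sampling does in the paper) helps precisely because the finite-difference slope estimate is sensitive to noise in `measure`.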

  10. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs subsurface properties from acquired seismic data by minimizing the misfit between observed and simulated data. However, FWI suffers from considerable computational costs because the wave equation must be solved numerically for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm-constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI), because of the efficiency and simplicity of LB in the framework of ℓ1-norm-constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model, and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably, and even superiorly, to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
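The LB iteration itself is only two lines per step, which is the simplicity the abstract appeals to. A minimal sketch on a synthetic sparse-recovery problem (illustrative dimensions and parameters, not an FWI implementation):

```python
import numpy as np

def shrink(v, mu):
    # Soft-thresholding: the proximal operator of the l1-norm.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=20.0, n_iter=5000):
    """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b (a sketch).

    Each iteration is a residual-driven gradient step on the auxiliary
    variable v followed by soft-thresholding -- notably simpler than
    SPGl1-style solvers.
    """
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                # gradient step on the residual
        x = delta * shrink(v, mu)             # sparsity-promoting threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))            # underdetermined system
x_true = np.zeros(200)
x_true[[5, 60, 130]] = [2.0, -3.0, 1.5]       # a sparse "model update"
b = A @ x_true
x = linearized_bregman(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual
```

The threshold parameter `mu` and iteration count here are illustrative choices; in the sketch the step size is chosen so that the iteration stays within the standard LB convergence condition.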

  11. Allocation Methods for Use in the Accrual of Manpower Costs.

    DTIC Science & Technology

    1983-06-01

    planners more frugal in their use of military manpower (OB1, 1973). Generally Accepted Accounting Principles (GAAP) recognize accrual basis accounting...time. Examples of this type of allocation are depreciation or amortization of long-term assets (Fremgen and Liao, 1981). It is this second concept of...financing is that the relatively "soft dollars" of the future will make it easier to contribute. A "soft dollar" is the depreciated value of the dollar

  12. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    NASA Technical Reports Server (NTRS)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively-intensive but precise, to fast, frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.

  13. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE PAGES

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...

    2017-11-26

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
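The rate-summation concept underlying the derivation can be illustrated with a toy individual-based simulation (hypothetical rate function and season, not the authors' beetle model): each individual accumulates temperature-dependent development each day and emerges when the accumulated fraction reaches 1, with a per-individual multiplier standing in for phenotypic rate variability.

```python
import numpy as np

def development_rate(temp_c):
    # Hypothetical degree-day style rate above a 10 C threshold,
    # scaled so development completes after 200 degree-days.
    return max(temp_c - 10.0, 0.0) / 200.0

def days_to_emergence(daily_temps, rate_multiplier=1.0):
    """Rate summation: accumulate daily development; emerge when the sum hits 1.

    rate_multiplier models phenotypic rate variability between individuals.
    """
    total = 0.0
    for day, temp in enumerate(daily_temps, start=1):
        total += rate_multiplier * development_rate(temp)
        if total >= 1.0:
            return day
    return None  # development not completed within the season

rng = np.random.default_rng(2)
season = 15.0 + 8.0 * np.sin(np.linspace(0, np.pi, 120))   # a warm 120-day season
multipliers = rng.lognormal(mean=0.0, sigma=0.15, size=1000)  # individual variation
days = [days_to_emergence(season, m) for m in multipliers]
print(min(days), max(days))  # spread of emergence dates from rate variability
```

Simulating every individual like this is what an integral projection model avoids: the IPM propagates the whole distribution of development stages at once, which is why it is the more computationally frugal formulation.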

  14. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  15. Feeling frugal: socioeconomic status, acculturation, and cultural health beliefs among women of Mexican descent.

    PubMed

    Borrayo, Evelinn A; Jenkins, Sharon Rae

    2003-05-01

    Psychosocial and socioeconomic variables are often confounded. The authors combined quantitative with grounded theory analysis to investigate influences of acculturation, socioeconomic status (SES), and cultural health beliefs on Mexican-descent women's preventive health behaviors. In 5 focus group interviews sampling across levels of acculturation and SES, women expressing more traditional Mexican health beliefs about breast cancer screening were of lower SES and were less U.S. acculturated. However, SES and acculturation were uncorrelated with screening behaviors. Qualitative analysis generated hypotheses about joint influences of SES and traditional health beliefs; for example, low-SES women may learn frugal habits as part of their cultural traditions that influence their health care decision making, magnifying SES-imposed structural restrictions on health care access.

  16. The venom optimization hypothesis revisited.

    PubMed

    Morgenstern, David; King, Glenn F

    2013-03-01

    Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Not so fast! (and not so frugal!): rethinking the recognition heuristic.

    PubMed

    Oppenheimer, Daniel M

    2003-11-01

    The 'fast and frugal' approach to reasoning (Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. New York: Oxford University Press) claims that individuals use non-compensatory strategies in judgment--the idea that only one cue is taken into account in reasoning. The simplest and most important of these heuristics postulates that judgment sometimes relies solely on recognition. However, the studies that have investigated usage of the recognition heuristic have confounded recognition with other cues that could also lead to similar judgments. This paper tests whether mere recognition is actually driving the findings in support of the recognition heuristic. Two studies provide evidence that judgments do not conform to the recognition heuristic when these confounds are accounted for. Implications for the study of simple heuristics are discussed.

  18. Virginia's mowing experiments.

    DOT National Transportation Integrated Search

    1981-01-01

    With construction of the interstate and arterial highway systems nearing completion, the Department's major concern has shifted to maintenance. Because the highways must be maintained to very high standards, the reduced buying power means that frugal...

  19. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  20. Improving the MODIS Global Snow-Mapping Algorithm

    NASA Technical Reports Server (NTRS)

    Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.

    1997-01-01

    An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy forested areas.

  1. How Strong Medicine Saved Our Schools from Financial Ruin.

    ERIC Educational Resources Information Center

    Corona, Peter

    1989-01-01

    Six years ago, the Emery Unified School District (Emeryville, CA) teetered on the brink of bankruptcy. Hard work and frugal management by teachers, administrators, and the community saved the school district from financial ruin. (SI)

  2. Design and usability of heuristic-based deliberation tools for women facing amniocentesis.

    PubMed

    Durand, Marie-Anne; Wegwarth, Odette; Boivin, Jacky; Elwyn, Glyn

    2012-03-01

    Evidence suggests that in decision contexts characterized by uncertainty and time constraints (e.g. health-care decisions), fast and frugal decision-making strategies (heuristics) may perform better than complex rules of reasoning. To examine whether it is possible to design deliberation components in decision support interventions using simple models (fast and frugal heuristics). The 'Take The Best' heuristic (i.e. selection of a 'most important reason') and 'The Tallying' integration algorithm (i.e. unitary weighing of pros and cons) were used to develop two deliberation components embedded in a Web-based decision support intervention for women facing amniocentesis testing. Ten researchers (recruited from 15), nine health-care providers (recruited from 28) and ten pregnant women (recruited from 14) who had recently been offered amniocentesis testing appraised evolving versions of 'your most important reason' (Take The Best) and 'weighing it up' (Tallying). Most researchers found the tools useful in facilitating decision making although emphasized the need for simple instructions and clear layouts. Health-care providers however expressed concerns regarding the usability and clarity of the tools. By contrast, 7 out of 10 pregnant women found the tools useful in weighing up the pros and cons of each option, helpful in structuring and clarifying their thoughts and visualizing their decision efforts. Several pregnant women felt that 'weighing it up' and 'your most important reason' were not appropriate when facing such a difficult and emotional decision. Theoretical approaches based on fast and frugal heuristics can be used to develop deliberation tools that provide helpful support to patients facing real-world decisions about amniocentesis. © 2011 Blackwell Publishing Ltd.
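The Tallying integration algorithm is the simpler of the two components: every reason gets unit weight, and the larger count wins. A minimal sketch (the example reasons are hypothetical, not taken from the authors' tool):

```python
def tally(pros, cons):
    """Tallying: give every reason unit weight and compare the counts.

    Unlike weighted utility models, no reason counts more than another.
    """
    if len(pros) > len(cons):
        return "in favour"
    if len(cons) > len(pros):
        return "against"
    return "tied"

# Hypothetical reasons a deliberation aid might collect from a user.
pros = ["definitive diagnostic result", "time to prepare or decide"]
cons = ["procedure-related risk", "anxiety while waiting", "discomfort"]
print(tally(pros, cons))  # -> against (3 cons outweigh 2 pros by count)
```

The contrast with 'Take The Best' is that Tallying integrates all listed reasons with equal weight, whereas Take The Best lets a single most important reason decide.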

  3. Frugal Fun with Fungal Cultures.

    ERIC Educational Resources Information Center

    Sundberg, Marshall D.

    2001-01-01

    A home kitchen can serve as a stockroom to provide the supplies needed to culture fungi for classroom use. Provides some alternative media and cultural techniques along with two alternative classroom investigations that can be employed in elementary through college-level classrooms. (Author/SAH)

  4. The Mao Ethic and Environmental Quality

    ERIC Educational Resources Information Center

    Orleans, Leo A.; Suttmeier, Richard P.

    1970-01-01

    Reviews reports on efforts to improve the Communist Chinese natural environment. National campaigns for water and air pollution control, sanitation improvement, and industrial development are related to Mao Tse-tung's philosophy of frugality and comprehensive resource use. Concern is expressed regarding possible ecological consequences from…

  5. Design and usability of heuristic‐based deliberation tools for women facing amniocentesis

    PubMed Central

    Durand, Marie‐Anne; Wegwarth, Odette; Boivin, Jacky; Elwyn, Glyn

    2011-01-01

    Background: Evidence suggests that in decision contexts characterized by uncertainty and time constraints (e.g. health-care decisions), fast and frugal decision-making strategies (heuristics) may perform better than complex rules of reasoning. Objective: To examine whether it is possible to design deliberation components in decision support interventions using simple models (fast and frugal heuristics). Design: The 'Take The Best' heuristic (i.e. selection of a 'most important reason') and 'The Tallying' integration algorithm (i.e. unitary weighing of pros and cons) were used to develop two deliberation components embedded in a Web-based decision support intervention for women facing amniocentesis testing. Ten researchers (recruited from 15), nine health-care providers (recruited from 28) and ten pregnant women (recruited from 14) who had recently been offered amniocentesis testing appraised evolving versions of 'your most important reason' (Take The Best) and 'weighing it up' (Tallying). Results: Most researchers found the tools useful in facilitating decision making although emphasized the need for simple instructions and clear layouts. Health-care providers however expressed concerns regarding the usability and clarity of the tools. By contrast, 7 out of 10 pregnant women found the tools useful in weighing up the pros and cons of each option, helpful in structuring and clarifying their thoughts and visualizing their decision efforts. Several pregnant women felt that 'weighing it up' and 'your most important reason' were not appropriate when facing such a difficult and emotional decision. Conclusion: Theoretical approaches based on fast and frugal heuristics can be used to develop deliberation tools that provide helpful support to patients facing real-world decisions about amniocentesis. PMID:21241434

  6. The power of simplicity: a fast-and-frugal heuristics approach to performance science.

    PubMed

    Raab, Markus; Gigerenzer, Gerd

    2015-01-01

    Performance science is a fairly new multidisciplinary field that integrates performance domains such as sports, medicine, business, and the arts. To give its many branches a structure and its research a direction, it requires a theoretical framework. We demonstrate the applications of this framework with examples from sport and medicine. Because performance science deals mainly with situations of uncertainty rather than known risks, the needed framework can be provided by the fast-and-frugal heuristics approach. According to this approach, experts learn to rely on heuristics in an adaptive way in order to make accurate decisions. We investigate the adaptive use of heuristics in three ways: the descriptive study of the heuristics in the cognitive "adaptive toolbox;" the prescriptive study of their "ecological rationality," that is, the characterization of the situations in which a given heuristic works; and the engineering study of "intuitive design," that is, the design of transparent aids for making better decisions.

  7. The power of simplicity: a fast-and-frugal heuristics approach to performance science

    PubMed Central

    Raab, Markus; Gigerenzer, Gerd

    2015-01-01

    Performance science is a fairly new multidisciplinary field that integrates performance domains such as sports, medicine, business, and the arts. To give its many branches a structure and its research a direction, it requires a theoretical framework. We demonstrate the applications of this framework with examples from sport and medicine. Because performance science deals mainly with situations of uncertainty rather than known risks, the needed framework can be provided by the fast-and-frugal heuristics approach. According to this approach, experts learn to rely on heuristics in an adaptive way in order to make accurate decisions. We investigate the adaptive use of heuristics in three ways: the descriptive study of the heuristics in the cognitive “adaptive toolbox;” the prescriptive study of their “ecological rationality,” that is, the characterization of the situations in which a given heuristic works; and the engineering study of “intuitive design,” that is, the design of transparent aids for making better decisions. PMID:26579051

  8. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.

  9. The Priority Heuristic: Making Choices without Trade-Offs

    ERIC Educational Resources Information Center

    Brandstatter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph

    2006-01-01

    Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic…

  10. Fast and Frugal Framing Effects?

    ERIC Educational Resources Information Center

    Mccloy, Rachel; Beaman, C. Philip; Frosch, Caren A.; Goddard, Kate

    2010-01-01

    Using 3 experiments, we examine whether simple pairwise comparison judgments, involving the "recognition heuristic" (Goldstein & Gigerenzer, 2002), are sensitive to implicit cues to the nature of the comparison required. In Experiments 1 and 2, we show that participants frequently choose the recognized option of a pair if asked to make "larger"…

  11. On Dual Processing and Heuristic Approaches to Moral Cognition

    ERIC Educational Resources Information Center

    Lapsley, Daniel K.; Hill, Patrick L.

    2008-01-01

    We examine the implications of dual-processing theories of cognition for the moral domain, with particular emphasis upon "System 1" theories: the Social Intuitionist Model (Haidt), moral heuristics (Sunstein), fast-and-frugal moral heuristics (Gigerenzer), schema accessibility (Lapsley & Narvaez) and moral expertise (Narvaez). We argue that these…

  12. A Study of 8 Fundamental Moral Characteristics among Thai Undergraduate Students

    ERIC Educational Resources Information Center

    Ngammuk, Patariya

    2011-01-01

    The objective of this study is to explore the eight fundamental moral characteristics of undergraduate students in order to benefit instructional model development. The eight moral characteristics are diligence, frugality, honesty, discipline, politeness, cleanliness, unity and generosity. The study findings rank these eight moral characteristics…

  13. Your School Needs a Frugal Librarian!

    ERIC Educational Resources Information Center

    Johns, Sara Kelly

    2011-01-01

    School libraries are crucial to learning, but need resources to close the digital and print divide that widens as school budgets shrink. In this article, the author shares her perspective on stretching the library dollar. As budgets tighten and the use of library resources increases, school librarian's ingenuity, skill, and planning can ensure…

  14. The Frugal Librarian: Thriving in Tough Economic Times

    ERIC Educational Resources Information Center

    Smallwood, Carol, Ed.

    2011-01-01

    Fewer employees, shorter hours, diminished collection budgets, reduced programs and services--all at a time of record library usage. In this book, library expert Carol Smallwood demonstrates that despite the obvious downsides, the necessity of doing business differently can be positive, leading to partnering, sharing, and innovating. This…

  15. Successful Strategies for Rapidly Upgrading PTC Windchill 9.1 to Windchill 10.1 on a Light Budget

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    2013-01-01

    Topics covered include: The Frugal Times Historical Upgrade Process; Planning for Possible Constraints; PTC Compatibility Matrix; In-Place Upgrade Process; Pre-Upgrade Activities; Upgrade Activities; Post Upgrade Activities; Results of the Upgrade; Tips for an Upgrade On a Shoestring Budget.

  16. True--or Not?

    ERIC Educational Resources Information Center

    Abilock, Debbie

    2012-01-01

    In this dizzying world of click-and-go wikified information, everyone uses fast-and-frugal skimming strategies to evaluate information daily. The challenge is to teach students to devise accurate rules that take advantage of new technology to quickly judge the quality of the information they want to use. The author suggests several digital reading…

  17. Frugal Flow

    ERIC Educational Resources Information Center

    Nortier, Richard

    2008-01-01

    The plumbing products most appropriate for a high-end hotel or executive restroom will differ from those most suited for school and university restrooms, where large numbers of boisterous students may charge through the doors all day long. However, installing plumbing that can stand up to rough-and-tough student use does not have to compromise…

  18. One-Reason Decision Making Unveiled: A Measurement Model of the Recognition Heuristic

    ERIC Educational Resources Information Center

    Hilbig, Benjamin E.; Erdfelder, Edgar; Pohl, Rudiger F.

    2010-01-01

    The fast-and-frugal recognition heuristic (RH) theory provides a precise process description of comparative judgments. It claims that, in suitable domains, judgments between pairs of objects are based on recognition alone, whereas further knowledge is ignored. However, due to the confound between recognition and further knowledge, previous…

  19. Feeling Frugal: Socioeconomic Status, Acculturation, and Cultural Health Beliefs among Women of Mexican Descent.

    ERIC Educational Resources Information Center

    Borrayo, Evelinn A.; Jenkins, Sharon Rae

    2003-01-01

    Investigates influences of acculturation, socioeconomic status (SES), and cultural health beliefs on Mexican-descent women's preventive health behaviors. In 5 focus group interviews sampling across levels of acculturation and SES, women expressing more traditional Mexican health beliefs about breast cancer screening were of lower SES and were less…

  20. Exploding Boxes

    ERIC Educational Resources Information Center

    Kinney, Jan

    2011-01-01

    How do you teach the "same old, same old" in an interesting and inexpensive way? Art teachers are forever looking for new angles on the good-old elements and principles. And, as budgets tighten, they are trying to be as frugal as possible while still holding their students' attention. Enter exploding boxes! In conceptualizing the three types of…

  1. Transforming clinical practice guidelines and clinical pathways into fast-and-frugal decision trees to improve clinical care strategies.

    PubMed

    Djulbegovic, Benjamin; Hozo, Iztok; Dale, William

    2018-02-27

    Contemporary delivery of health care is inappropriate in many ways, largely due to suboptimal decision-making. A typical approach to improving practitioners' decision-making is to develop evidence-based clinical practice guidelines (CPG) by guidelines panels, who are instructed to use their judgments to derive practice recommendations. However, the mechanisms for the formulation of guideline judgments remain a "black-box" operation: a process with defined inputs and outputs but without sufficient knowledge of its internal workings. Increased explicitness and transparency in the process can be achieved by implementing CPGs as clinical pathways (CPs) (also known as clinical algorithms or flow-charts). However, clinical recommendations thus derived are typically ad hoc and developed by experts in a theory-free environment. As any recommendation can be right (true positive or negative) or wrong (false positive or negative), the lack of theoretical structure precludes the quantitative assessment of the management strategies recommended by CPGs/CPs. To realize the full potential of CPGs/CPs, they need to be placed on more solid theoretical grounds. We believe this potential can be best realized by recasting CPGs/CPs within the heuristic theory of decision-making, often implemented as fast-and-frugal (FFT) decision trees. This is possible because the FFT heuristic strategy of decision-making can be linked to signal detection theory, evidence accumulation theory, and a threshold model of decision-making, which, in turn, allows quantitative analysis of the accuracy of clinical management strategies. Fast-and-frugal trees provide a simple and transparent, yet solid and robust, methodological framework connecting decision science to clinical care, a sorely needed missing link between CPGs/CPs and patient outcomes. We therefore advocate that all guidelines panels express their recommendations as CPs, which in turn should be converted into FFTs to guide clinical care.
© 2018 John Wiley & Sons, Ltd.
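
A fast-and-frugal tree of the kind advocated in this record can be stated in a few lines: an ordered list of binary cues, each of which exits with a decision on one branch and passes the case onward on the other. The sketch below is a minimal, hypothetical illustration, loosely patterned after the well-known coronary-care-unit example; the cue names, ordering, and decisions are invented for illustration and are not drawn from any actual guideline.

```python
def fft_decide(case, tree, default):
    """Walk a fast-and-frugal tree: check cues in order, exit with a
    decision at the first cue whose value matches its exit branch,
    otherwise fall through to the default decision."""
    for cue, exit_value, decision in tree:
        if case[cue] == exit_value:
            return decision
    return default

# Hypothetical tree: each tuple is (cue, value that triggers an exit,
# decision on exit). Non-matching cases pass to the next cue.
TREE = [
    ("st_elevation", True, "coronary care"),   # positive finding exits high
    ("chest_pain", False, "regular bed"),      # negative finding exits low
    ("other_risk_factor", True, "coronary care"),
]

decision = fft_decide(
    {"st_elevation": False, "chest_pain": True, "other_risk_factor": False},
    TREE,
    "regular bed",
)
```

Because every case is resolved by at most three cue lookups, the tree's error structure (false positives and negatives at each exit) can be tabulated directly, which is what links FFTs to signal detection theory in the abstract above.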

  2. Adaptive Flexibility and Maladaptive Routines in Selecting Fast and Frugal Decision Strategies

    ERIC Educational Resources Information Center

    Broder, Arndt; Schiffer, Stefanie

    2006-01-01

    Decision routines unburden the cognitive capacity of the decision maker. In changing environments, however, routines may become maladaptive. In 2 experiments with a hypothetical stock market game (n = 241), the authors tested whether decision routines tend to persist at the level of decision strategies rather than at the level of options in…

  3. Jack Pine

    Treesearch

    William Dent Sterrett

    1920-01-01

    Jack pine is a very frugal tree in its climatic and soil requirements. The northern limit of its natural range is within 14 degrees of the Arctic Circle and the southern is marked by the southern shores of Lake Michigan. No other North American pine grows naturally so far north and all the others grow farther south. It develops commercial stands and reproduces itself...

  4. FRUGAL Act

    THOMAS, 113th Congress

    Rep. Ruiz, Raul [D-CA-36]

    2014-09-18

    House - 09/18/2014 Referred to the Committee on Ways and Means, and in addition to the Committees on Oversight and Government Reform, and Financial Services, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within...

  5. Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval

    ERIC Educational Resources Information Center

    Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten

    2008-01-01

    Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…

  6. We favor formal models of heuristics rather than lists of loose dichotomies: a reply to Evans and Over

    PubMed Central

    Gigerenzer, Gerd

    2009-01-01

    In their comment on Marewski et al. (good judgments do not require complex cognition, 2009) Evans and Over (heuristic thinking and human intelligence: a commentary on Marewski, Gaissmaier and Gigerenzer, 2009) conjectured that heuristics can often lead to biases and are not error free. This is a most surprising critique. The computational models of heuristics we have tested allow for quantitative predictions of how many errors a given heuristic will make, and we and others have measured the amount of error by analysis, computer simulation, and experiment. This is clear progress over simply giving heuristics labels, such as availability, that do not allow for quantitative comparisons of errors. Evans and Over argue that the reason people rely on heuristics is the accuracy-effort trade-off. However, the comparison between heuristics and more effortful strategies, such as multiple regression, has shown that there are many situations in which a heuristic is more accurate with less effort. Finally, we do not see how the fast and frugal heuristics program could benefit from a dual-process framework unless the dual-process framework is made more precise. Instead, the dual-process framework could benefit if its two “black boxes” (Type 1 and Type 2 processes) were substituted by computational models of both heuristics and other processes. PMID:19784854

  7. A Signal-Detection Analysis of Fast-and-Frugal Trees

    ERIC Educational Resources Information Center

    Luan, Shenghua; Schooler, Lael J.; Gigerenzer, Gerd

    2011-01-01

    Models of decision making are distinguished by those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called "small world") and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization models may not be met (a so-called "large world"). Few…

  8. The Development of Adaptive Decision Making: Recognition-Based Inference in Children and Adolescents

    ERIC Educational Resources Information Center

    Horn, Sebastian S.; Ruggeri, Azzurra; Pachur, Thorsten

    2016-01-01

    Judgments about objects in the world are often based on probabilistic information (or cues). A frugal judgment strategy that utilizes memory (i.e., the ability to discriminate between known and unknown objects) as a cue for inference is the recognition heuristic (RH). The usefulness of the RH depends on the structure of the environment,…

  9. Fluent, Fast, and Frugal? A Formal Model Evaluation of the Interplay between Memory, Fluency, and Comparative Judgments

    ERIC Educational Resources Information Center

    Hilbig, Benjamin E.; Erdfelder, Edgar; Pohl, Rudiger F.

    2011-01-01

    A new process model of the interplay between memory and judgment processes was recently suggested, assuming that retrieval fluency--that is, the speed with which objects are recognized--will determine inferences concerning such objects in a single-cue fashion. This aspect of the fluency heuristic, an extension of the recognition heuristic, has…

  10. Within-person adaptivity in frugal judgments from memory.

    PubMed

    Filevich, Elisa; Horn, Sebastian S; Kühn, Simone

    2017-12-22

    Humans can exploit recognition memory as a simple cue for judgment. The utility of recognition depends on the interplay with the environment, particularly on its predictive power (validity) in a domain. It is, therefore, an important question whether people are sensitive to differences in recognition validity between domains. Strategic, intra-individual changes in the reliance on recognition have not been investigated so far. The present study fills this gap by scrutinizing within-person changes in using a frugal strategy, the recognition heuristic (RH), across two task domains that differed in recognition validity. The results showed adaptive changes in the reliance on recognition between domains. However, these changes were neither associated with the individual recognition validities nor with corresponding changes in these validities. These findings support a domain-adaptivity explanation, suggesting that people have broader intuitions about the usefulness of recognition across different domains that are nonetheless sufficiently robust for adaptive decision making. The analysis of metacognitive confidence reports mirrored and extended these results. Like RH use, confidence ratings covaried with task domain, but not with individual recognition validities. The changes in confidence suggest that people may have metacognitive access to information about global differences between task domains, but not to individual cue validities.
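
The recognition heuristic studied in this record has a very compact form: when exactly one of two objects is recognized, infer that the recognized one scores higher on the criterion; when both or neither is recognized, the heuristic does not apply. The sketch below is illustrative only; the `rh_choice` name, the recognition set, and the city-size example are assumptions, not material from the study.

```python
def rh_choice(pair, recognized):
    """Recognition heuristic: if exactly one object in the pair is
    recognized, infer it has the higher criterion value. If both or
    neither is recognized, return None so the caller can fall back on
    further knowledge or guessing."""
    a, b = pair
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None

# Hypothetical recognition set for a "which city is larger?" task.
known = {"Berlin", "Munich"}
inference = rh_choice(("Berlin", "Bielefeld"), known)
```

The heuristic's usefulness then hinges entirely on the recognition validity of the domain, which is exactly the quantity the study above varies across task domains.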

  11. A lack of appetite for information and computation. Simple heuristics in food choice.

    PubMed

    Schulte-Mecklenbeck, Michael; Sohn, Matthias; de Bellis, Emanuel; Martin, Nathalie; Hertwig, Ralph

    2013-12-01

    The predominant, but largely untested, assumption in research on food choice is that people obey the classic commandments of rational behavior: they carefully look up every piece of relevant information, weight each piece according to subjective importance, and then combine them into a judgment or choice. In real world situations, however, the available time, motivation, and computational resources may simply not suffice to keep these commandments. Indeed, there is a large body of research suggesting that human choice is often better accommodated by heuristics-simple rules that enable decision making on the basis of a few, but important, pieces of information. We investigated the prevalence of such heuristics in a computerized experiment that engaged participants in a series of choices between two lunch dishes. Employing MouselabWeb, a process-tracing technique, we found that simple heuristics described an overwhelmingly large proportion of choices, whereas strategies traditionally deemed rational were barely apparent in our data. Replicating previous findings, we also observed that visual stimulus segments received a much larger proportion of attention than any nutritional values did. Our results suggest that, consistent with human behavior in other domains, people make their food choices on the basis of simple and informationally frugal heuristics. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Violent Systems: Defeating Terrorists, Insurgents, and Other Non-State Adversaries

    DTIC Science & Technology

    2004-03-01

    LTTE must be proactive in dealing with donor organizations in the Tamil Diaspora, whereas it takes a defensive, or even reactive, approach to...processing system that influence action, be they conscious or not, be they rational or not, be they distributed across an organization or not. They must...heuristics and biases, ecological rationality, fast and frugal heuristics, metaphor and analogy, the storytelling mind, “hot” emotional cognition

  13. DISCOVERING FRUGAL INNOVATIONS THROUGH DELIVERING EARLY CHILDHOOD HOME-VISITING INTERVENTIONS IN LOW-RESOURCE TRIBAL COMMUNITIES.

    PubMed

    Barlow, Allison; McDaniel, Judy A; Marfani, Farha; Lowe, Anne; Keplinger, Cassie; Beltangady, Moushumi; Goklish, Novalene

    2018-05-01

    Early childhood home-visiting has been shown to yield the greatest impact for the lowest income, highest disparity families. Yet, poor communities generally experience fractured systems of care, a paucity of providers, and limited resources to deliver intensive home-visiting models to families who stand to benefit most. This article explores lessons emerging from the recent Tribal Maternal and Infant Early Childhood Home Visiting (MIECHV) legislation supporting delivery of home-visiting interventions in low-income, hard-to-reach American Indian and Alaska Native communities. We draw on experience from four diverse tribal communities that participated in the Tribal MIECHV Program and overcame socioeconomic, geographic, and structural challenges that both called for early childhood home-visiting services and increased the difficulty of delivering them. Key innovations are described, including unique community engagement, recruitment and retention strategies, expanded case management roles of home visitors to overcome fragmented care systems, contextual demands for employing paraprofessional home visitors, and practical advances toward streamlined evaluation approaches. We draw on the concept of "frugal innovation" to explain how the experience of Tribal MIECHV participation has led to more efficient, effective, and culturally informed early childhood home-visiting service delivery, with lessons for future dissemination to underserved communities in the United States and abroad. © 2018 Michigan Association for Infant Mental Health.

  14. East Europe Report, Economic and Industrial Affairs.

    DTIC Science & Technology

    1984-06-18

    Possibilities of Domestic Reserves"] [Text] It is public knowledge that we must be frugal with energy. The price increases of raw materials and the...dissemination of the skills required in order to harness our acquired store of theoretical knowledge in practical settings. 3. The reform has initiated a...flight and is an enthusiast whose professional knowledge is among the highest. 90 About ROMBAC, With Love Midway in my notes of the discussion I had

  15. Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice

    PubMed Central

    Riva, Silvia; Monti, Marco; Antonietti, Alessandro

    2011-01-01

    Introduction Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by the National and the International Health and Drug Administration, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. Purpose This study examined how subjects make their decisions to select an OTC drug, evaluating the role of cognitive heuristics which are simple and adaptive rules that help the decision-making process of people in everyday contexts. Subjects and methods By analyzing 70 subjects’ information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants’ choices in a virtual environment. Results We found that subjects’ information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics which correctly predicted subjects’ decisions implied significantly fewer cues on average than the subjects did in the information-search task, they were accurate in describing order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects’ decisions. Conclusion The current emphasis in health care is to shift some responsibility onto the consumer through expansion of self medication. To know which cognitive mechanisms are behind the choice of OTC drugs is becoming a relevant purpose of current medical education. 
These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and for current medical education, which has to prepare competent health specialists to orientate and support the choices of their patients. PMID:23745077

  16. Accurate perceptions do not need complete information to reflect reality.

    PubMed

    Mousavi, Shabnam; Funder, David C

    2017-01-01

    Social reality of a group emerges from interpersonal perceptions and beliefs put to action under a host of environmental conditions. By extending the study of fast-and-frugal heuristics, we view social perceptions as judgment tools and assert that perceptions are ecologically rational to the degree that they adapt to the social reality. We maintain that the veracity of both stereotypes and base rates, as judgment tools, can be determined solely by accuracy research.

  17. JPRS Report, East Asia, Southeast Asia, Vietnam: TAP CHI CONG SAN, No. 5, May; No. 6, June; No. 7, July; No. 8, August 1988.

    DTIC Science & Technology

    1989-01-30

    between tradition and renovation is a basic characteristic of Marxism. One can only achieve a deep understanding of the Marxist dialectic by...simple, diligent and frugal way of life. But these characteristics disappeared very quickly before the new virtues could be established. One...talking about dogmatism in theoretical thinking within our country, attention must be given to the following characteristics : 1) It is the product of

  18. How do we know if a clinical practice guideline is good? A response to Djulbegovic and colleagues' use of fast-and-frugal decision trees to improve clinical care strategies.

    PubMed

    Mercuri, Mathew

    2018-04-17

    Clinical practice guidelines (CPGs) and clinical pathways have become important tools for improving the uptake of evidence-based care. Where CPGs are good, adherence to the recommendations within is thought to result in improved patient outcomes. However, the usefulness of such tools for improving patient important outcomes depends both on adherence to the guideline and whether or not the CPG in question is good. This begs the question of what it is that makes a CPG good? In this issue of the Journal, Djulbegovic and colleagues offer a theory to help guide the development of CPGs. The "fast-and-frugal tree" (FFT) heuristic theory is purported to provide the theoretical structure needed to quantitatively assess clinical guidelines in practice, something that the lack of theory to guide CPG development has precluded. In this paper, I examine the role of FFTs in providing an adequate theoretical framework for developing CPGs. In my view, positioning guideline development within the FFT framework may help with problems related to adherence. However, I believe that FFTs fall short in providing panel members with the theoretical basis needed to justify which factors should be considered when developing a CPG, how information on those factors derived from research studies should be interpreted, and how those factors should be integrated into the recommendation. © 2018 John Wiley & Sons, Ltd.

  19. Sepia ink as a surrogate for colloid transport tests in porous media

    NASA Astrophysics Data System (ADS)

    Soto-Gómez, Diego; Pérez-Rodríguez, Paula; López-Periago, J. Eugenio; Paradelo, Marcos

    2016-08-01

    We examined the suitability of the ink of Sepia officinalis as a surrogate for transport studies of microorganisms and microparticles in porous media. Sepia ink is an organic pigment consisting of a suspension of eumelanin, and it has several advantages that make it a promising material for introducing frugal innovation into public health and environmental research: very low cost, non-toxicity, spherical shape, moderate polydispersity, size near large viruses, non-anomalous electrokinetic behavior, low retention in the soil, and high stability. Electrokinetic determinations and transport experiments in quartz sand columns and soil columns were done with purified suspensions of sepia ink. The influence of ionic strength on the electrophoretic mobility of ink particles showed the typical behavior of polystyrene latex spheres. Breakthrough curves (BTC) and retention profiles (RP) in quartz sand columns were consistent with a depth-dependent, blocking adsorption model, with adsorption rates increasing with ionic strength. Partially saturated transport through undisturbed soil showed less retention than in quartz sand, and matrix exclusion was also observed. Quantification of ink in leachate fractions by light absorbance is direct, but quantification in the soil profile with moderate to high organic matter content was rather cumbersome. We concluded that sepia ink is a suitable, cheap surrogate for exploring transport of pathogenic viruses, bacteria, and particulate contaminants in groundwater, and could be used for developing frugal innovations related to the assessment of soil and aquifer filtration function and the monitoring of water filtration systems in low-income regions.

  20. Optimized adhesives for strong, lightweight, damage-resistant, nanocomposite materials: new insights from natural materials

    NASA Astrophysics Data System (ADS)

    Hansma, P. K.; Turner, P. J.; Ruoff, R. S.

    2007-01-01

    From our investigations of natural composite materials such as abalone shell and bone we have learned the following. (1) Nature is frugal with resources: it uses just a few per cent glue, by weight, to glue together composite materials. (2) Nature does not avoid voids. (3) Nature makes optimized glues with sacrificial bonds and hidden length. We discuss how optimized adhesives combined with high specific stiffness/strength structures such as carbon nanotubes or graphene sheets could yield remarkably strong, lightweight, and damage-resistant materials.

  1. Revision of the Mesoamerican species of Calolydella Townsend (Diptera: Tachinidae) and description of twenty-three new species reared from caterpillars in Area de Conservación Guanacaste, northwestern Costa Rica

    PubMed Central

    Fleming, AJ; Wood, D. Monty; Smith, M. Alex; Hallwachs, Winnie; Janzen, Daniel H

    2018-01-01

    Abstract Background Twenty-three new species of the genus Calolydella Townsend, 1927 (Diptera: Tachinidae) are described, all reared from multiple species of wild-caught caterpillars across a wide variety of families (Lepidoptera: Crambidae; Erebidae; Geometridae; Hesperiidae; Lycaenidae; Nymphalidae; Pieridae; Riodinidae; and Sphingidae). All caterpillars were collected within Area de Conservación Guanacaste (ACG), in northwestern Costa Rica. This study provides a concise description of each new species using morphology, life history, molecular data, and photographic documentation. In addition to the new species, we also provide a generic redescription and revised key to species of the genus Calolydella from Central and South America. New information The following 23 new species of Calolydella are described by Fleming and Wood: C. adelinamoralesae sp. n., C. alexanderjamesi sp. n., C. argentea sp. n., C. aureofacies sp. n., C. bicolor sp. n., C. bifissus sp. n., C. crocata sp. n., C. destituta sp. n., C. discalis sp. n., C. erasmocoronadoi sp. n., C. felipechavarriai sp. n., C. fredriksjobergi sp. n., C. inflatipalpis sp. n., C. interrupta sp. n., C. nigripalpis sp. n., C. omissa sp. n., C. ordinalis sp. n., C. renemalaisei sp. n., C. susanaroibasae sp. n., C. tanyadapkeyae sp. n., C. tenebrosa sp. n., C. timjamesi sp. n., C. virginiajamesae sp. n. Lydella frugale Curran, 1934 is proposed as a new synonym of Pygophorinia peruviana Townsend, 1927, syn. n., under the combination Calolydella frugale (Curran, 1934), comb. n. PMID:29674932

  2. Testing take-the-best in new and changing environments.

    PubMed

    Lee, Michael D; Blanco, Gabrielle; Bo, Nikole

    2017-08-01

    Take-the-best is a decision-making strategy that chooses between alternatives, by searching the cues representing the alternatives in order of cue validity, and choosing the alternative with the first discriminating cue. Theoretical support for take-the-best comes from the "fast and frugal" approach to modeling cognition, which assumes decision-making strategies need to be fast to cope with a competitive world, and be simple to be robust to uncertainty and environmental change. We contribute to the empirical evaluation of take-the-best in two ways. First, we generate four new environments-involving bridge lengths, hamburger prices, theme park attendances, and US university rankings-supplementing the relatively limited number of naturally cue-based environments previously considered. We find that take-the-best is as accurate as rival decision strategies that use all of the available cues. Secondly, we develop 19 new data sets characterizing the change in cities and their populations in four countries. We find that take-the-best maintains its accuracy and limited search as the environments change, even if cue validities learned in one environment are used to make decisions in another. Once again, we find that take-the-best is as accurate as rival strategies that use all of the cues. We conclude that these new evaluations support the theoretical claims of the accuracy, frugality, and robustness for take-the-best, and that the new data sets provide a valuable resource for the more general study of the relationship between effective decision-making strategies and the environments in which they operate.
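
The take-the-best strategy as described in this record (search cues in descending order of validity, stop at the first discriminating cue, otherwise guess) can be sketched in a few lines. The cue names, values, and validity ordering below are invented for illustration; this is not the environments or data used in the study.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Compare two alternatives on binary cues, searched in descending
    order of cue validity. Return the alternative favored by the first
    discriminating cue; if no cue discriminates, guess."""
    for cue in validity_order:
        if cues_a[cue] != cues_b[cue]:
            return "a" if cues_a[cue] else "b"
    return "guess"

# Hypothetical "which city is larger?" cue profiles (True = cue present).
city_a = {"capital": True, "airport": True, "team": False}
city_b = {"capital": False, "airport": True, "team": True}
validity_order = ["airport", "capital", "team"]  # assumed validity ranking

choice = take_the_best(city_a, city_b, validity_order)
```

Note how frugality falls out of the structure: search stops at the first discriminating cue, so most comparisons consult only a fraction of the available cues.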

  3. Training the max-margin sequence model with the relaxed slack variables.

    PubMed

    Niu, Lingfeng; Wu, Jianmin; Shi, Yong

    2012-09-01

    Sequence models are widely used in many applications such as natural language processing, information extraction, and optical character recognition. In this paper, we propose a new approach to training max-margin sequence models by relaxing the slack variables. With the canonical feature mapping definition, the relaxed problem is solved by training a multiclass Support Vector Machine (SVM). Compared with the state-of-the-art solutions for sequence learning, the new method has the following advantages: firstly, the sequence training problem is transformed into a multiclassification problem, which is more widely studied and already has quite a few off-the-shelf training packages; secondly, this new approach reduces the complexity of training significantly and achieves prediction performance comparable to existing sequence models; thirdly, when the size of training data is limited, by assigning different slack variables to different microlabel pairs, the new method can use the discriminative information more frugally and produces a more reliable model; last but not least, by employing kernels in the intermediate multiclass SVM, nonlinear feature space can be easily explored. Experimental results on the tasks of named entity recognition, information extraction, and handwritten letter recognition on public datasets illustrate the efficiency and effectiveness of our method. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  5. Exhaustive geographic search with mobile robots along space-filling curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spires, S.V.; Goldsmith, S.Y.

    1998-03-01

    Swarms of mobile robots can be tasked with searching a geographic region for targets of interest, such as buried land mines. The authors assume that the individual robots are equipped with sensors tuned to the targets of interest, that these sensors have limited range, and that the robots can communicate with one another to enable cooperation. How can a swarm of cooperating sensate robots efficiently search a given geographic region for targets in the absence of a priori information about the targets' locations? Many of the obvious approaches are inefficient or lack robustness. One efficient approach is to have the robots traverse a space-filling curve. For many geographic search applications, this method is energy-frugal, highly robust, and provides guaranteed coverage in a finite time that decreases as the reciprocal of the number of robots sharing the search task. Furthermore, it minimizes the amount of robot-to-robot communication needed for the robots to organize their movements. This report presents some preliminary results from applying the Hilbert space-filling curve to geographic search by mobile robots.
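
The Hilbert-curve traversal this report applies can be illustrated with the standard bit-manipulation construction that converts a distance along the curve into grid coordinates. This is the textbook distance-to-coordinate conversion, not code from the report; a robot assigned a contiguous range of distances would visit the corresponding cells in order, each step moving to an adjacent cell.

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve to the (x, y) cell on a
    2**order x 2**order grid, using the standard rotate-and-accumulate
    construction (one quadrant level per loop iteration)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:          # reflect the sub-square
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x          # rotate the sub-square
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# A robot sweeping the first quarter of an 8x8 grid would visit:
path = [hilbert_d2xy(3, d) for d in range(16)]
```

The guaranteed-coverage property is easy to check numerically: over a full traversal, every cell appears exactly once and consecutive cells are always grid neighbors.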

  6. Meta-modelling, visualization and emulation of multi-dimensional data for virtual production intelligence

    NASA Astrophysics Data System (ADS)

    Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik

    2017-07-01

    Decision making for competitive production in high-wage countries is a daily challenge in which rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. Following the rule for decision making that Benjamin Franklin formulated in London in 1772, which he called "Prudential Algebra" in the sense of weighing prudential reasons, one of the major ingredients of Meta-Modelling can be identified, finally leading to one algebraic value labelling the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criterial optimization by identifying the persistence level of the corresponding Morse-Smale Complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction. Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and analysis of multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and making the developer as well as the operator more skilled.

  7. KMC 2: fast and resource-frugal k-mer counting.

    PubMed

    Deorowicz, Sebastian; Kokot, Marek; Grabowski, Szymon; Debudaj-Grabysz, Agnieszka

    2015-05-15

    Building the histogram of occurrences of every k-symbol long substring of nucleotide data is a standard step in many bioinformatics applications, known under the name of k-mer counting. Its applications include developing de Bruijn graph genome assemblers, fast multiple sequence alignment and repeat detection. The tremendous amounts of NGS data require fast algorithms for k-mer counting, preferably using moderate amounts of memory. We present a novel method for k-mer counting that, on large datasets, is about twice as fast as the strongest competitors (Jellyfish 2, KMC 1), using about 12 GB (or less) of RAM. Our disk-based method bears some resemblance to MSPKmerCounter, yet replacing the original minimizers with signatures (a carefully selected subset of all minimizers) and using (k, x)-mers significantly reduces the I/O, and a highly parallel overall architecture allows it to achieve unprecedented processing speeds. For example, KMC 2 counts the 28-mers of a human reads collection with 44-fold coverage (106 GB of compressed size) in about 20 min, on a 6-core Intel i7 PC with a solid-state disk. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
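For readers unfamiliar with the terminology, the core counting task and the minimizer idea behind such disk-based counters can be sketched in a few lines. This is an illustrative toy, not the KMC 2 implementation: real tools use canonical k-mers, signatures rather than plain minimizers, and binned disk I/O.

```python
from collections import Counter

def count_kmers(seq, k):
    """Histogram of every k-symbol substring of seq: the k-mer counting task."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def minimizer(kmer, m):
    """Lexicographically smallest m-mer inside a k-mer; disk-based counters
    use such values to partition k-mers into bins before counting each bin."""
    return min(kmer[i:i + m] for i in range(len(kmer) - m + 1))
```

Because consecutive k-mers of a read usually share the same minimizer, partitioning by minimizer (or by a curated "signature" subset, as in KMC 2) keeps overlapping k-mers in the same bin and cuts the intermediate disk traffic.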

  8. Darning, doylies and dancing: the work of the Leeds Association of Girls' Clubs (1904-1913).

    PubMed

    Jones, Helen M F

    2011-01-01

    The Leeds Association of Girls' Clubs (LAGC) was set up by a group of women, including Hilda Hargrove, Dr Lucy Buckley and Mary and Margaret Harvey, to promote collaboration between the city's girls' clubs. The organisation epitomised women working in partnership whilst reflecting their differing philanthropic and political interests. However, LAGC's collaborative approach resulted in a liberal consensus which downplayed the significance of girls' working conditions. Throughout the decade LAGC's focus was its annual competitions. These featured utilitarian and decorative handicrafts (darning and doylies) enshrining both frugality and aspiration, alongside dance and drill which channelled girls' vigour. Nevertheless, LAGC's resilience resulted in an organisation which is still in existence.

  9. Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice.

    PubMed

    Riva, Silvia; Monti, Marco; Antonietti, Alessandro

    2011-01-01

    Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by national and international health and drug administrations, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. This study examined how subjects make their decisions when selecting an OTC drug, evaluating the role of cognitive heuristics, which are simple and adaptive rules that aid people's decision making in everyday contexts. By analyzing 70 subjects' information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants' choices in a virtual environment. We found that subjects' information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics which correctly predicted subjects' decisions implied significantly fewer cues on average than the subjects used in the information-search task, they were accurate in describing the order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects' decisions. The current emphasis in health care is to shift some responsibility onto the consumer through expansion of self-medication. Knowing which cognitive mechanisms lie behind the choice of OTC drugs is becoming a relevant purpose of current medical education. 
These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and for current medical education, which has to prepare competent health specialists to orientate and support the choices of their patients.

  10. That's not how the learning works - the paradox of Reverse Innovation: a qualitative study.

    PubMed

    Harris, Matthew; Weisberger, Emily; Silver, Diana; Dadwal, Viva; Macinko, James

    2016-07-05

    There are significant differences in the meaning and use of the term 'Reverse Innovation' between industry circles, where the term originated, and health policy circles, where the term has gained traction. It is often conflated with other popularized terms such as Frugal Innovation, Co-development and Trickle-up Innovation. Compared to its use in the industrial sector, this conceptualization of Reverse Innovation describes a more complex, fragmented process, and one with no particular institution in charge. It follows that the way in which the term 'Reverse Innovation', specifically, is understood and used in the healthcare space is worthy of examination. Between September and December 2014, we conducted eleven in-depth face-to-face or telephone interviews with key informants from innovation, health and social policy circles, experts in international comparative policy research and leaders in the Reverse Innovation space in the United States. Interviews were open-ended with guiding probes into the barriers and enablers to Reverse Innovation in the US context, specifically also informants' experience and understanding of the term Reverse Innovation. Interviews were recorded, transcribed and analyzed thematically using the process of constant comparison. We describe three main themes derived from the interviews. First, 'Reverse Innovation,' the term, has marketing currency to convince policy-makers who may be wary of learning from or adopting innovations from unexpected sources, in this case Low-Income Countries. Second, the term can have the opposite effect: by connoting frugality, or innovation arising from necessity as opposed to good leadership, the proposed innovation may be associated with poor quality, undermining potential translation into other contexts. Finally, the term 'Reverse Innovation' is a paradox: it breaks down preconceptions of the directionality of knowledge and learning, whilst simultaneously reinforcing it. 
We conclude that this term means different things to different people and should be used strategically, and with some caution, depending on the audience.

  11. Heuristics: foundations for a novel approach to medical decision making.

    PubMed

    Bodemer, Nicolai; Hanoch, Yaniv; Katsikopoulos, Konstantinos V

    2015-03-01

    Medical decision making is a complex process that often takes place under uncertainty, that is, when knowledge, time, and resources are limited. How can we ensure good decisions? We present research on heuristics, simple rules of thumb, and discuss how medical decision making can benefit from these tools. We challenge the common view that heuristics are only second-best solutions by showing that they can be more accurate, faster, and easier to apply than more complex strategies. Using the example of fast-and-frugal decision trees, we illustrate how heuristics can be studied and implemented in the medical context. Finally, we suggest how a heuristic-friendly culture supports the study and application of heuristics as complementary strategies to existing decision rules.
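A fast-and-frugal tree is simply an ordered list of cues, each of which can exit immediately with a decision. As a hedged sketch, the cues and thresholds below are hypothetical, loosely modeled on published coronary-care triage trees rather than taken from this paper:

```python
def make_fft(cues, default):
    """Build a fast-and-frugal tree classifier.

    cues: ordered (predicate, decision) pairs; the first predicate that
    fires exits immediately with its decision. If none fires, fall
    through to the default decision.
    """
    def classify(case):
        for predicate, decision in cues:
            if predicate(case):
                return decision
        return default
    return classify

# Hypothetical triage tree (illustrative cue names and order only):
triage = make_fft(
    [(lambda p: p["st_elevation"], "coronary care"),
     (lambda p: not p["chest_pain"], "regular ward"),
     (lambda p: p["other_risk_factors"], "coronary care")],
    default="regular ward",
)
```

Evaluation stops at the first exit, so most cases are decided after one or two cues, which is exactly the frugality the paper argues for.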

  12. The Emergence of Personalized Health Technology.

    PubMed

    Allen, Luke Nelson; Christie, Gillian Pepall

    2016-05-10

    Personalized health technology is a noisy new entrant to the health space, yet to make a significant impact on population health but seemingly teeming with potential. Devices including wearable fitness trackers and healthy-living apps are designed to help users quantify and improve their health behaviors. Although the ethical issues surrounding data privacy have received much attention, little is being said about the impact on socioeconomic health inequalities. Populations who stand to benefit the most from these technologies are unable to afford, access, or use them. This paper outlines the negative impact that these technologies will have on inequalities unless their user base can be radically extended to include vulnerable populations. Frugal innovation and public-private partnership are discussed as the major means for reaching this end.

  13. Disk-based k-mer counting on a PC

    PubMed Central

    2013-01-01

    Background The k-mer counting problem, which is to build the histogram of occurrences of every k-symbol long substring in a given text, is important for many bioinformatics applications. They include developing de Bruijn graph genome assemblers, fast multiple sequence alignment and repeat detection. Results We propose a simple, yet efficient, parallel disk-based algorithm for counting k-mers. Experiments show that it usually offers the fastest solution to the considered problem, while demanding a relatively small amount of memory. In particular, it is capable of counting the statistics for short-read human genome data, in an input gzipped FASTQ file, in less than 40 minutes on a PC with 16 GB of RAM and 6 CPU cores, and for long-read human genome data in less than 70 minutes. On a more powerful machine, using 32 GB of RAM and 32 CPU cores, the tasks are accomplished in less than half the time. No other algorithm for most tested settings of this problem and mammalian-size data can accomplish this task in comparable time. Our solution is also memory-frugal; most competitive algorithms cannot work efficiently on a PC with 16 GB of memory for such massive data. Conclusions By making use of cheap disk space and exploiting CPU and I/O parallelism we propose a very competitive k-mer counting procedure, called KMC. Our results suggest that judicious resource management may allow solving at least some bioinformatics problems with massive data on a commodity personal computer. PMID:23679007

  14. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. 
All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
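Given a precomputed Jacobian of observation sensitivities, the two process-model statistics named above can be sketched with NumPy. This is a simplified illustration assuming unit observation weights; the full USGS definitions include observation weighting and log-transformed parameters.

```python
import numpy as np

def composite_scaled_sensitivities(J, params):
    """CSS_j: root-mean-square of the j-th Jacobian column scaled by its
    parameter value, a frugal per-parameter sensitivity measure."""
    scaled = J * params                      # scale each column by b_j
    return np.sqrt(np.mean(scaled**2, axis=0))

def parameter_correlations(J):
    """PCC: correlations of estimated parameters, from the (unweighted)
    parameter covariance approximation (J^T J)^-1."""
    cov = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```

With two nearly collinear Jacobian columns, the corresponding off-diagonal PCC approaches 1.00 in magnitude, flagging the parameter interdependence that the abstract describes for BD1 and WFC1.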

  15. Atmosphere of Freedom: Sixty Years at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bugos, Glenn E.; Launius, Roger (Technical Monitor)

    2000-01-01

    Throughout Ames history, four themes prevail: a commitment to hiring the best people; cutting-edge research tools; project management that gets things done faster, better and cheaper; and outstanding research efforts that serve the scientific professions and the nation. More than any other NASA Center, Ames remains shaped by its origins in the NACA (National Advisory Committee for Aeronautics). Not that its missions remain the same. Sure, Ames still houses the world's greatest collection of wind tunnels and simulation facilities, its aerodynamicists remain among the best in the world, and pilots and engineers still come for advice on how to build better aircraft. But that is increasingly part of Ames' past. Ames people have embraced two other missions for its future. First, intelligent systems and information science will help NASA use new tools in supercomputing, networking, telepresence and robotics. Second, astrobiology will explore the prospects for life on Earth and beyond. Both new missions leverage Ames' long-standing expertise in computation and in the life sciences, as well as its relations with the computing and biotechnology firms working in the Silicon Valley community that has sprung up around the Center. Rather than the NACA missions, it is the NACA culture that still permeates Ames. The Ames way of research management privileges the scientists and engineers working in the laboratories. They work in an atmosphere of freedom, laced with the expectation of integrity and responsibility. Ames researchers are free to define their research goals and define how they contribute to the national good. They are expected to keep their fingers on the pulse of their disciplines, to be ambitious yet frugal in organizing their efforts, and to always test their theories in the laboratory or in the field. Ames' leadership ranks, traditionally, are cultivated within this scientific community. 
Rather than manage and supervise these researchers, Ames leadership merely guides them, represents them to NASA headquarters and the world outside, then steps out of the way before they get run over.

  16. The Priority Heuristic: Making Choices Without Trade-Offs

    PubMed Central

    Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph

    2010-01-01

    Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, we generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (i) Allais' paradox, (ii) risk aversion for gains if probabilities are high, (iii) risk seeking for gains if probabilities are low (lottery tickets), (iv) risk aversion for losses if probabilities are low (buying insurance), (v) risk seeking for losses if probabilities are high, (vi) certainty effect, (vii) possibility effect, and (viii) intransitivities. We test how accurately the heuristic predicts people's choices, compared to previously proposed heuristics and three modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. PMID:16637767
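The heuristic itself is compact enough to state as code. Below is a sketch for simple gain gambles following the three-step order described in the paper (minimum gain, probability of minimum gain, maximum gain) with one-tenth aspiration levels; the published version also rounds aspiration levels to prominent numbers and handles losses symmetrically.

```python
def priority_heuristic(a, b):
    """Choose between two gain gambles, each a list of (outcome, prob) pairs.

    Step 1: if minimum gains differ by at least a tenth of the largest gain,
    take the higher minimum. Step 2: if probabilities of the minimum gains
    differ by at least 0.1, take the lower such probability. Step 3: take
    the higher maximum gain.
    """
    min_a, pmin_a = min(a)                   # smallest outcome and its prob
    min_b, pmin_b = min(b)
    max_gain = max(o for o, _ in a + b)
    if abs(min_a - min_b) >= max_gain / 10:
        return a if min_a > min_b else b
    if abs(pmin_a - pmin_b) >= 0.1:
        return a if pmin_a < pmin_b else b
    return a if max(a)[0] > max(b)[0] else b
```

On an Allais-style pair of choice problems, this rule stops at step 1 for the first problem (taking the sure option) and falls through to step 3 for the second (taking the larger prize), reproducing the paradoxical choice pattern.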

  17. Medical mobile technologies – what is needed for a sustainable and scalable implementation on a global scale?

    PubMed Central

    Lundin, Johan; Dumont, Guy

    2017-01-01

    Current advances within medical technology show great potential from a global health perspective. Inexpensive, effective solutions to common problems within diagnostics, medical procedures and access to medical information are emerging within almost all fields of medicine. The innovations can benefit health care both in resource-limited and in resource-rich settings. However, there is a big gap between the proof-of-concept stage and implementation. This article will give examples of promising solutions, with special focus on mobile image- and sensor-based diagnostics. We also discuss how technology and frugal innovations could be made sustainable and widely available. Finally, a list of critical factors for success is presented, based on both our own experiences and the literature. PMID:28838308

  18. Traffic protection in MPLS networks using an off-line flow optimization model

    NASA Astrophysics Data System (ADS)

    Krzesinski, Anthony E.; Muller, Karen E.

    2002-07-01

    MPLS-based recovery is intended to effect rapid and complete restoration of traffic affected by a fault in an MPLS network. Two MPLS-based recovery models have been proposed: IP re-routing, which establishes recovery paths on demand, and protection switching, which works with pre-established recovery paths. IP re-routing is robust and frugal since no resources are pre-committed, but it is inherently slower than protection switching, which is intended to offer high reliability to premium services where fault recovery takes place at the 100 ms time scale. We present a model of protection switching in MPLS networks. A variant of the flow deviation method is used to find and capacitate a set of optimal label switched paths. The traffic is routed over a set of working LSPs. Global repair is implemented by reserving a set of pre-established recovery LSPs. An analytic model is used to evaluate the MPLS-based recovery mechanisms in response to bi-directional link failures. A simulation model is used to evaluate the MPLS recovery cycle in terms of the time needed to restore the traffic after a uni-directional link failure. The models are applied to evaluate the effectiveness of protection switching in networks consisting of between 20 and 100 nodes.

  19. Affordable passive solar homes - low-cost, compact designs. [Glossary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowther, R.L.

    1984-01-01

    The designs and plans of this book present total, integrative energy design. They carefully integrate site, architecture, and interior for various population segments while meeting a frugal budget. The book is divided into two sections. The first part gives data concerning design, construction, site, climatic factors, materials, interiors, financing, and other home ownership factors that enhance affordability. Basic information on the design assumptions and considerations incorporated into the homes is presented, along with descriptions of the passive solar systems. The second part presents designs and plans with a brief review of considerations that serve defined human living needs, as well as single-family, attached, or multiple residential configurations. The plans are based on a dimensional grid using 4-foot and 2-foot (1.2-meter and 0.61-meter) increments compatible with economical standard lumber and materials sizes.

  20. Saving can save from death anxiety: mortality salience and financial decision-making.

    PubMed

    Zaleskiewicz, Tomasz; Gasiorowska, Agata; Kesebir, Pelin

    2013-01-01

    Four studies tested the idea that saving money can buffer death anxiety and constitute a more effective buffer than spending money. Saving can relieve future-related anxiety and provide people with a sense of control over their fate, thereby rendering death thoughts less threatening. Study 1 found that participants primed with both saving and spending reported lower death fear than controls. Saving primes, however, were associated with significantly lower death fear than spending primes. Study 2 demonstrated that mortality primes increase the attractiveness of more frugal behaviors in save-or-spend dilemmas. Studies 3 and 4 found, in two different cultures (Polish and American), that the activation of death thoughts prompts people to allocate money to saving as opposed to spending. Overall, these studies provided evidence that saving protects from existential anxiety, and probably more so than spending.

  1. Medical Humanities: The Rx for Uncertainty?

    PubMed

    Ofri, Danielle

    2017-12-01

    While medical students often fear the avalanche of knowledge they are required to learn during training, it is learning to translate that knowledge into wisdom that is the greatest challenge of becoming a doctor. Part of that challenge is learning to tolerate ambiguity and uncertainty, a difficult feat for doctors who are taught to question anything that is not evidence based or peer reviewed. The medical humanities specialize in this ambiguity and uncertainty, which are hallmarks of actual clinical practice but rarely addressed in medical education. The humanities also force reflection and contemplation-skills that are crucial to thoughtful decision making and to personal wellness. Beyond that, the humanities add a dose of joy and beauty to a training process that is notoriously frugal in these departments. Well integrated, the humanities can be the key to transforming medical knowledge into clinical wisdom.

  2. Reduce, reuse, and recycle: developmental evolution of trait diversification.

    PubMed

    Preston, Jill C; Hileman, Lena C; Cubas, Pilar

    2011-03-01

    A major focus of evolutionary developmental (evo-devo) studies is to determine the genetic basis of variation in organismal form and function, both of which are fundamental to biological diversification. Pioneering work on metazoan and flowering plant systems has revealed conserved sets of genes that underlie the bauplan of organisms derived from a common ancestor. However, the extent to which variation in the developmental genetic toolkit mirrors variation at the phenotypic level is an active area of research. Here we explore evidence from the angiosperm evo-devo literature supporting the frugal use of genes and genetic pathways in the evolution of developmental patterning. In particular, these examples highlight the importance of genetic pleiotropy in different developmental modules, thus reducing the number of genes required in growth and development, and the reuse of particular genes in the parallel evolution of ecologically important traits.

  3. Beauty and simplicity: the power of fine art and moral teaching on education in seventeenth-century Holland.

    PubMed

    Dekker, J J H

    2009-04-01

    Seventeenth-century Dutch genre painting played a major role in the promotion of family and educational virtues. Packing moralistic messages into fine paintings was considered a very effective communication policy in a culture in which such moralising paintings and drawings on education and domestic virtues circulated widely, contributing to the reconciliation of the existing tensions, or, in the words of Simon Schama, the embarrassment between beauty and the promoted virtues of frugality and simplicity. A broad middle class created its own private surroundings in which morality and enjoying the beauty of moralising on the family and parenting went together, as the analysis of a series of representative images makes clear. Dutch parents, moralists, and painters knew the power of beauty in moralising on the family.

  4. Biochar-based water treatment systems as a potential low-cost and sustainable technology for clean water provision.

    PubMed

    Gwenzi, Willis; Chaukura, Nhamo; Noubactep, Chicgoua; Mukome, Fungai N D

    2017-07-15

    Approximately 600 million people lack access to safe drinking water, hence achieving Sustainable Development Goal 6 (Ensure availability and sustainable management of water and sanitation for all by 2030) calls for rapid translation of recent research into practical and frugal solutions within the remaining 13 years. Biochars, with excellent capacity to remove several contaminants from aqueous solutions, constitute an untapped technology for drinking water treatment. Biochar water treatment has several potential merits compared to existing low-cost methods (i.e., sand filtration, boiling, solar disinfection, chlorination): (1) biochar is a low-cost and renewable adsorbent made using readily available biomaterials and skills, making it appropriate for low-income communities; (2) existing methods predominantly remove pathogens, but biochars remove chemical, biological and physical contaminants; (3) biochars maintain organoleptic properties of water, while existing methods generate carcinogenic by-products (e.g., chlorination) and/or increase concentrations of chemical contaminants (e.g., boiling). Biochars have co-benefits including provision of clean energy for household heating and cooking, and soil application of spent biochar improves soil quality and crop yields. Integrating biochar into the water and sanitation system transforms linear material flows into looped material cycles, consistent with terra preta sanitation. Lack of design information on biochar water treatment, and environmental and public health risks constrain the biochar technology. Seven hypotheses for future research are highlighted under three themes: (1) design and optimization of biochar water treatment; (2) ecotoxicology and human health risks associated with contaminant transfer along the biochar-soil-food-human pathway, and (3) life cycle analyses of carbon and energy footprints of biochar water treatment systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Frugal Droplet Microfluidics Using Consumer Opto-Electronics.

    PubMed

    Frot, Caroline; Taccoen, Nicolas; Baroud, Charles N

    2016-01-01

    The maker movement has shown how off-the-shelf devices can be combined to perform operations that, until recently, required expensive specialized equipment. Applying this philosophy to microfluidic devices can play a fundamental role in disseminating these technologies outside specialist labs and into industrial use. Here we show how nanoliter droplets can be manipulated using a commercial DVD writer, interfaced with an Arduino electronic controller. We couple the optical setup with a droplet generation and manipulation device based on the "confinement gradients" approach. This device uses regions of different depths to generate and transport the droplets, which further simplifies the operation and reduces the need for precise flow control. The use of robust consumer electronics, combined with open source hardware, leads to a great reduction in the price of the device, as well as its footprint, without reducing its performance compared with the laboratory setup.

  6. When language comprehension goes wrong for the right reasons: Good-enough, underspecified, or shallow language processing.

    PubMed

    Christianson, Kiel

    2016-01-01

    This paper contains an overview of language processing that can be described as "good enough", "underspecified", or "shallow". The central idea is that a nontrivial proportion of misunderstanding, misinterpretation, and miscommunication can be attributed not to random error, but instead to processing preferences of the human language processing system. In other words, the very architecture of the language processor favours certain types of processing errors because in a majority of instances, this "fast and frugal", less effortful processing is good enough to support communication. By way of historical background, connections are made between this relatively recent facet of psycholinguistic study, other recent language processing models, and related concepts in other areas of cognitive science. Finally, the nine papers included in this special issue are introduced as representative of novel explorations of good-enough, or underspecified, language processing.

  7. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  8. Heuristic thinking and human intelligence: a commentary on Marewski, Gaissmaier and Gigerenzer.

    PubMed

    Evans, Jonathan St B T; Over, David E

    2010-05-01

    Marewski, Gaissmaier and Gigerenzer (2009) present a review of research on fast and frugal heuristics, arguing that complex problems are best solved by simple heuristics, rather than the application of knowledge and logical reasoning. We argue that the case for such heuristics is overrated. First, we point out that heuristics can often lead to biases as well as effective responding. Second, we show that the application of logical reasoning can be both necessary and relatively simple. Finally, we argue that the evidence for a logical reasoning system that co-exists with simpler heuristic forms of thinking is overwhelming. Not only is it implausible a priori that we would have evolved such a system that is of no use to us, but extensive evidence from the literature on dual processing in reasoning and judgement shows that many problems can only be solved when this form of reasoning is used to inhibit and override heuristic thinking.

  9. Fluency heuristic: a model of how the mind exploits a by-product of information retrieval.

    PubMed

    Hertwig, Ralph; Herzog, Stefan M; Schooler, Lael J; Reimer, Torsten

    2008-09-01

    Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the most of an automatic by-product of retrieval from memory, namely, retrieval fluency. In 4 experiments, the authors show that retrieval fluency can be a proxy for real-world quantities, that people can discriminate between two objects' retrieval fluencies, and that people's inferences are in line with the fluency heuristic (in particular fast inferences) and with experimentally manipulated fluency. The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world. (c) 2008 APA, all rights reserved.
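    The single-cue logic described in this abstract can be illustrated as a tie-break on the recognition heuristic: recognition decides when only one object is recognized, and retrieval fluency decides when both are. The function name and arguments below are hypothetical simplifications, not the authors' experimental procedure:

```python
def recognition_then_fluency(rec_a, rec_b, rt_a, rt_b):
    """Sketch of recognition heuristic with a fluency tie-break.
    rec_*: whether object A/B is recognized; rt_*: retrieval time in seconds."""
    if rec_a != rec_b:
        # recognition heuristic applies: pick the recognized object
        return "A" if rec_a else "B"
    if rec_a and rec_b:
        # both recognized: fluency heuristic picks the faster retrieval
        return "A" if rt_a < rt_b else "B"
    return None  # neither recognized: guess
```

Note that fluency only comes into play when recognition alone does not discriminate, which is why the paper treats it as exploiting a by-product of memory retrieval rather than a separate search process.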

  10. The priority heuristic: making choices without trade-offs.

    PubMed

    Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph

    2006-04-01

    Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (a) the Allais paradox, (b) risk aversion for gains if probabilities are high, (c) risk seeking for gains if probabilities are low (e.g., lottery tickets), (d) risk aversion for losses if probabilities are low (e.g., buying insurance), (e) risk seeking for losses if probabilities are high, (f) the certainty effect, (g) the possibility effect, and (h) intransitivities. The authors test how accurately the heuristic predicts people's choices, compared with previously proposed heuristics and 3 modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. ((c) 2006 APA, all rights reserved).
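    For simple two-outcome gain gambles, the priority heuristic's lexicographic steps can be sketched as below. This is an illustrative reading of the published heuristic (examine minimum gains, then their probabilities, then maximum gains, with aspiration levels of one tenth of the maximum gain and 0.1 on the probability scale); the function name and tuple encoding are assumptions, not the authors' code:

```python
def priority_heuristic(gamble_a, gamble_b):
    """Sketch of the priority heuristic for two-outcome gain gambles.
    Each gamble is (min_gain, p_of_min_gain, max_gain). Reasons are
    examined in a fixed order; the first that exceeds its aspiration
    level stops search and decides (no trade-offs are computed)."""
    min_a, pmin_a, max_a = gamble_a
    min_b, pmin_b, max_b = gamble_b
    aspiration = 0.1 * max(max_a, max_b)  # one tenth of the maximum gain
    # 1) compare minimum gains
    if abs(min_a - min_b) >= aspiration:
        return "A" if min_a > min_b else "B"
    # 2) compare probabilities of the minimum gains (aspiration: 0.1)
    if abs(pmin_a - pmin_b) >= 0.1:
        return "A" if pmin_a < pmin_b else "B"
    # 3) compare maximum gains
    return "A" if max_a > max_b else "B"
```

For example, facing a risky 4000-or-nothing gamble versus a sure 3000, the difference in minimum gains (3000) exceeds the aspiration level (400), so the sure option is chosen at step 1, reproducing the certainty-effect pattern without any utility calculation.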

  11. Frugal Droplet Microfluidics Using Consumer Opto-Electronics

    PubMed Central

    Frot, Caroline; Taccoen, Nicolas; Baroud, Charles N.

    2016-01-01

    The maker movement has shown how off-the-shelf devices can be combined to perform operations that, until recently, required expensive specialized equipment. Applying this philosophy to microfluidic devices can play a fundamental role in disseminating these technologies outside specialist labs and into industrial use. Here we show how nanoliter droplets can be manipulated using a commercial DVD writer, interfaced with an Arduino electronic controller. We couple the optical setup with a droplet generation and manipulation device based on the “confinement gradients” approach. This device uses regions of different depths to generate and transport the droplets, which further simplifies the operation and reduces the need for precise flow control. The use of robust consumer electronics, combined with open source hardware, leads to a great reduction in the price of the device, as well as its footprint, without reducing its performance compared with the laboratory setup. PMID:27560139

  12. Saving Can Save from Death Anxiety: Mortality Salience and Financial Decision-Making

    PubMed Central

    Zaleskiewicz, Tomasz; Gasiorowska, Agata; Kesebir, Pelin

    2013-01-01

    Four studies tested the idea that saving money can buffer death anxiety and constitute a more effective buffer than spending money. Saving can relieve future-related anxiety and provide people with a sense of control over their fate, thereby rendering death thoughts less threatening. Study 1 found that participants primed with both saving and spending reported lower death fear than controls. Saving primes, however, were associated with significantly lower death fear than spending primes. Study 2 demonstrated that mortality primes increase the attractiveness of more frugal behaviors in save-or-spend dilemmas. Studies 3 and 4 found, in two different cultures (Polish and American), that the activation of death thoughts prompts people to allocate money to saving as opposed to spending. Overall, these studies provided evidence that saving protects from existential anxiety, and probably more so than spending. PMID:24244497

  13. NASA/JPL Solar System Educators Program: Twelve Years of Success and Looking Forward

    NASA Astrophysics Data System (ADS)

    Ferrari, K.; NASA/JPL Solar System Educators Program

    2011-12-01

    Since 1999, the NASA/JPL Solar System Educators Program (SSEP) has been the model of a successful master teacher volunteer program. Integrating nationwide volunteers in this professional development program helped optimize agency funding set aside for education. Through the efforts of these volunteers, teachers across the country became familiarized with NASA's STEM (Science, Technology, Engineering and Mathematics) educational materials, schools added these products to their curriculum and students benefitted. The years since 1999 have brought about many changes. There have been advancements in technology that allow more opportunities for teleconference- and web-based learning methods. Along with those advancements have also come significant challenges. With NASA budgets for education shrinking, this already frugal program has become more spartan. Teachers face their own hardships with school budget cuts, limited classroom time and little support for professional development. In order for SSEP to remain viable in the face of these challenges, the program management, mission funders and volunteers themselves are working together to find ways of maintaining the quality that made the program a success and at the same time incorporate new, cost-effective methods of delivery. The group will also seek new partnerships to provide enhancements that will aid educators in advancing their careers at the same time as they receive professional development. By working together and utilizing the talent and experience of these master teachers, the Solar System Educators Program can enjoy a revitalization that will meet the needs of today's educators at the same time as renewing the enthusiasm of the volunteers.

  14. Downscaling the Local Weather Above Glaciers in Complex Topography

    NASA Astrophysics Data System (ADS)

    Horak, Johannes; Hofer, Marlis; Gutmann, Ethan; Gohm, Alexander; Rotach, Mathias

    2017-04-01

    Glaciers have experienced a substantial ice-volume loss during the 20th century. To study their response to climate change, process-based glacier mass-balance models (PBGMs) are employed, which require a faithful representation of the state of the atmosphere above the glacier at high spatial and temporal resolution. Glaciers are usually located in complex topography where weather stations are scarce or nonexistent due to the remoteness of such sites and the associated high cost of maintenance. Furthermore, the effective resolution of global circulation models is too large to adequately capture the local topography and represent local weather, which is a prerequisite for the atmospheric input used by PBGMs. Dynamical downscaling is a physically consistent but computationally expensive approach to bridge the scale gap between GCM output and input needed by PBGMs, while statistical downscaling is faster but requires measurements for training. Both methods have their merits; however, a computationally frugal approach that does not rely on measurements is desirable, especially for long-term studies of glacier response to future climate. In this study the intermediate complexity atmospheric research model (ICAR) is employed (Gutmann et al., 2016). It simplifies the wind field physics by relying on analytical solutions derived with linear theory. ICAR then advects atmospheric quantities within this wind field. This allows for computationally fast downscaling and yields a physically consistent set of atmospheric variables. First results obtained from downscaling air temperature, precipitation amount, relative humidity and wind speed to 4 × 4 km² are presented. As a preliminary step, ICAR is applied for a six-month simulation period in each of five years and evaluated for three domains located in very distinct climates, namely the Southern Alps of New Zealand, the Cordillera Blanca in Peru and the European Alps, using ERA-Interim reanalysis data (ERAI) as the forcing data set. 
The evaluation is based on determining the added value of the ICAR simulations - with ERAI output as a reference - in representing the local-scale weather measured at several automatic weather stations. For precipitation amount in particular, data by the Global Precipitation Measurement project are used in a fuzzy verification approach. The results indicate that ICAR provides added value for the Southern Alps of New Zealand in the case of precipitation and relative humidity, for the Cordillera Blanca and the European Alps for wind speed and, at certain locations in the European Alps, for precipitation. In order to more comprehensively investigate the physical plausibility of skill obtained for specific weather situations, the spatio-temporal evolution of the wind field resulting from the ICAR dynamics is analysed for individual case studies. To the authors' knowledge this is the first study that specifically investigates the multi-variable consistency of ICAR for different climates, an important prerequisite for all applications which require multi-variable or multi-site input. References: Gutmann, E., Barstad, I., Clark, M., Arnold, J., and Rasmussen, R. (2016). The Intermediate Complexity Atmospheric Research Model (ICAR). Journal of Hydrometeorology, 17(3), 957-973.

  15. Active learning for solving the incomplete data problem in facial age classification by the furthest nearest-neighbor criterion.

    PubMed

    Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun

    2011-07-01

    Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and obtain the ground truth. The frugal selection of the unlabeled data for labeling to quickly reach high classification performance with minimal labeling efforts is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to increasingly improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes the margin-based uncertainty to the multiclass case and which is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set for age categorization problems, show that the proposed approach can achieve results comparable to, or even better than, those of a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It can also achieve comparable results with active SVM but is much faster than active SVM in terms of training because kernel methods are not needed. 
The results on the face recognition database and the palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed the IB2DLDA-FNN, the FNN being our novel idea, as a generic on-line or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
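    The furthest nearest-neighbor selection rule described in this abstract can be sketched in a few lines: query the label of the unlabeled sample whose nearest labeled neighbor is furthest away. This is a simplified, distance-based reading; the function name and setup are illustrative, not the authors' IB2DLDA implementation:

```python
import math

def fnn_select(unlabeled, labeled):
    """Furthest nearest-neighbor (FNN) selection sketch: return the index
    of the unlabeled sample whose closest labeled sample is furthest away,
    i.e. the most 'novel' point, which is then sent for labeling."""
    best_idx, best_gap = None, -1.0
    for i, u in enumerate(unlabeled):
        # distance to the closest labeled sample
        nearest = min(math.dist(u, l) for l in labeled)
        if nearest > best_gap:
            best_idx, best_gap = i, nearest
    return best_idx
```

The appeal noted in the abstract is that this criterion needs only distance computations, with no per-class probability estimates, so it scales to many classes.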

  16. Behavior analysis in consumer affairs: Retail and consumer response to publicizing food price information

    PubMed Central

    Greene, Brandon F.; Rouse, Mark; Green, Richard B.; Clay, Connie

    1984-01-01

    A popular program among consumer action groups involves publicizing comparative food price information (CFPI) gathered from retail stores. Its significance is based on the assumption that publishing CFPI maximizes retail competition (i.e., moderates price levels or price increases) and occasions more frugal store selections among consumers. We tested these assumptions during a 2-year analysis. Specifically, we monitored the prices of two distinct market baskets in the supermarkets of two midwestern cities (target and contrast cities). Following a lengthy baseline, we published the prices of only one of the market baskets at stores in the target city in the local newspaper on five different occasions. The results suggested that reductions in price inflation occurred for both market baskets at the independently operated target stores. The corporate chain stores were not similarly affected. In addition, surveys indicated that many consumers used the CFPI as a basis for store selection. Finally, the analysis included a discussion of the politics, economics, and future of CFPI programs. PMID:16795672

  17. Severe and enduring anorexia nervosa (SEED-AN): a qualitative study of patients with 20+ years of anorexia nervosa.

    PubMed

    Robinson, Paul H; Kukucska, Roza; Guidetti, Giulia; Leavey, Gerard

    2015-07-01

    Little is known about how patients with long-term eating disorders manage their clinical problems. We carried out a preliminary qualitative study (using Thematic Analysis) of patients with severe and enduring anorexia nervosa (SEED-AN) in which we undertook recorded interviews in eight participants whose conditions had lasted 20-40 years. We found 15 principal features in physical, psychological, social, family, occupational and treatment realms. Psychological and social realms were most affected. Severe physical problems were reported. They described feelings of unworthiness, frugality regarding money and obsessive time-keeping. Persisting with negligible social networks, participants described depression and hopelessness, while somehow achieving a sense of pride at their endurance and survival in spite of the eating disorder. They emphasized the importance of professional help in managing their care. The severe and enduring description, often reserved for people with psychotic illness, is appropriately applied to SEED-AN, which has major impacts in all realms. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  18. Convenience and the hierarchy of meal preparation. Cooking and domestic education in the Netherlands, 1910-1930.

    PubMed

    Verriet, Jon

    2015-11-01

    The concept of convenience in food products and meal preparation has changed rapidly during the twentieth century. However, there is little investigation into the way attitudes towards this concept have changed, which curbs our understanding of the importance of, and need for, convenience today. This paper uses the magazine of the Dutch schools of domestic education to examine their stance on convenience in meal preparation during the 1910s and 1920s. Recipes and articles are quantitatively and qualitatively analysed to estimate the importance of convenience in food preparation and consumption. The results of this analysis show that there was a hierarchy of values with regard to food choice: convenience was definitely valued, but matters of frugality and nutrition generally dominated. This provides not just a nuanced image of the role of domestic education (demanding yet flexible), but it also gives insight into the mechanics of food choice, which may at least partly still apply today. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Fluent, fast, and frugal? A formal model evaluation of the interplay between memory, fluency, and comparative judgments.

    PubMed

    Hilbig, Benjamin E; Erdfelder, Edgar; Pohl, Rüdiger F

    2011-07-01

    A new process model of the interplay between memory and judgment processes was recently suggested, assuming that retrieval fluency (that is, the speed with which objects are recognized) will determine inferences concerning such objects in a single-cue fashion. This aspect of the fluency heuristic, an extension of the recognition heuristic, has remained largely untested due to methodological difficulties. To overcome the latter, we propose a measurement model from the class of multinomial processing tree models that can estimate true single-cue reliance on recognition and retrieval fluency. We applied this model to aggregate and individual data from a probabilistic inference experiment and considered both goodness of fit and model complexity to evaluate different hypotheses. The results were relatively clear-cut, revealing that the fluency heuristic is an unlikely candidate for describing comparative judgments concerning recognized objects. These findings are discussed in light of a broader theoretical view on the interplay of memory and judgment processes.

  20. After the Liverpool Care Pathway—development of heuristics to guide end of life care for people with dementia: protocol of the ALCP study

    PubMed Central

    Davies, N; Manthorpe, J; Sampson, E L; Iliffe, S

    2015-01-01

    Introduction: End of life care guidance for people with dementia is lacking and this has been made more problematic in England with the removal of one of the main end of life care guidelines which offered some structure, the Liverpool Care Pathway. This guidance gap may be eased with the development of heuristics (rules of thumb) which offer a fast and frugal form of decision-making. Objective: To develop a toolkit of heuristics (rules of thumb) for practitioners to use when caring for people with dementia at the end of life. Method and analysis: A mixed-method study using a co-design approach to develop heuristics in three phases. In phase 1, we will conduct at least six focus groups with family carers, health and social care practitioners from both hospital and community care services, using the ‘think-aloud’ method to understand decision-making processes and to develop a set of heuristics. The focus group topic guide will be developed from the findings of a previous study of 46 interviews of family carers about quality end-of-life care for people with dementia and a review of the literature. A multidisciplinary development team of health and social care practitioners will synthesise the findings from the focus groups to devise and refine a toolkit of heuristics. Phase 2 will test the use of heuristics in practice in five sites: one general practice, one community nursing team, one hospital ward and two palliative care teams working in the community. Phase 3 will evaluate and further refine the toolkit of heuristics through group interviews, online questionnaires and semistructured interviews. Ethics and dissemination: This study has received ethical approval from a local NHS research ethics committee (Rec ref: 15/LO/0156). The findings of this study will be presented in peer-reviewed publications and national and international conferences. PMID:26338688

  1. [The Essenes and medicine].

    PubMed

    Kottek, Samuel

    2011-01-01

    The Essenes were a Jewish sect, which flourished around the first century. We have limited our study to hygienic and medical aspects, as documented in the works of Josephus Flavius, Philo of Alexandria, and Pliny the Elder; Josephus and Philo were personally in contact with these sectarian Jews. We have described the regimen of life of these communities, who lived in strictly organised fashion, their meals taken in common, their bathing in cold water, their clothing, the Sabbath rest, the lavatories, and more. Most Essenes remained single; however, they adopted small children and educated them in accordance with their principles. There was no private property, but old people and sick residents were taken care of by the community. The Essenes, as well as the Therapeuts described by Philo, were knowledgeable in medical lore; they treasured old books and studied the virtues of medicinal plants. There is no clear-cut consensus whether the Essenes, the Therapeuts, and the Qumran residents were one and the same sect, or whether they were similar sub-sects. The calm, strictly regulated and frugal way of life of the Essenes enabled them to attain old age, often beyond 100 years.

  2. A Blind Spot in Research on Foreign Language Effects in Judgment and Decision-Making.

    PubMed

    Polonioli, Andrea

    2018-01-01

    One of the most fascinating topics of current investigation in the literature on judgment and decision-making concerns the exploration of foreign language effects (henceforth, FLE). Specifically, recent research suggests that presenting information in a foreign language helps reasoners make better choices. However, this piece aims at making scholars aware of a blind spot in this stream of research. In particular, research on FLE has imported only one view of judgment and decision-making, in which the heuristics that people use are seen as conducive to biases and, in turn, to costly mistakes. But heuristics are not necessarily a liability, and this article indicates two routes to push forward research on FLE in judgment and decision-making. First, research on FLE should be expanded to explore also classes of fast and frugal heuristics, which have been shown to lead to accurate predictions in several contexts characterized by uncertainty. Second, research on FLE should be open to challenge the interpretations given to previous FLE findings, since alternative accounts are plausible and not ruled out by evidence.

  3. Cancer care, money, and the value of life: whose justice? Which rationality?

    PubMed

    Sulmasy, Daniel P

    2007-01-10

    Cost-containment in oncology is a moral issue. While economists use the word "rationing" to describe all limitations on resource utilization that result from human choice, the ordinary language distinction between allocation and rationing is morally meaningful and can help oncologists to determine their proper moral role in cost-containment. It is argued that oncologists should not be required to ration at the bedside, nor should they be given financial incentives to practice frugally, nor should they be subjected to a variety of bureaucratic mechanisms to control costs indirectly. In addition, it is argued that the fact that treatments have a price does not logically imply that patients have a price. Cost-effectiveness analysis is often suggested as a means of deciding how best to allocate resources, but some of its many ethical limitations are discussed. The alternative is an open, public, participatory process about how to ration care, abandoning the formulaic pretenses of cost-effectiveness analysis, but with a commitment to reason, good will, and common sense. Oncologists would then be free to advocate for their patients within the constraints imposed by this public process.

  4. Frugal cannibals: how consuming conspecific tissues can provide conditional benefits to wood frog tadpoles ( Lithobates sylvaticus)

    NASA Astrophysics Data System (ADS)

    Jefferson, Dale M.; Hobson, Keith A.; Demuth, Brandon S.; Ferrari, Maud C. O.; Chivers, Douglas P.

    2014-04-01

    Tadpoles show considerable behavioral plasticity. When population densities become high, tadpoles often become cannibalistic, likely in response to intense competition. Conspecific tissues are potentially an ideal diet by composition and should greatly improve growth and development. However, the potential release of alarm cues from the tissues of injured conspecifics may act to deter potential cannibals from feeding. We conducted multiple feeding experiments to test the relative effects that a diet of conspecifics has on tadpole growth and development. Results indicate that while conspecific tissues represent a better alternative to starvation and provide some benefits over low-protein diets, such a diet can have detrimental effects on tadpole growth and/or development relative to diets of similar protein content. Additionally, tadpoles raised individually appear to avoid consuming conspecific tissues and may continue to do so until they suffer from the effects of starvation. However, tadpoles readily fed upon conspecific tissues immediately when raised with competitors. These results suggest that cannibalism may occur as a result of competition rather than the specific quality of available diets, unless such diets lead to starvation.

  5. A Blind Spot in Research on Foreign Language Effects in Judgment and Decision-Making

    PubMed Central

    Polonioli, Andrea

    2018-01-01

    One of the most fascinating topics of current investigation in the literature on judgment and decision-making concerns the exploration of foreign language effects (henceforth, FLE). Specifically, recent research suggests that presenting information in a foreign language helps reasoners make better choices. However, this piece aims at making scholars aware of a blind spot in this stream of research. In particular, research on FLE has imported only one view of judgment and decision-making, in which the heuristics that people use are seen as conducive to biases and, in turn, to costly mistakes. But heuristics are not necessarily a liability, and this article indicates two routes to push forward research on FLE in judgment and decision-making. First, research on FLE should be expanded to explore also classes of fast and frugal heuristics, which have been shown to lead to accurate predictions in several contexts characterized by uncertainty. Second, research on FLE should be open to challenge the interpretations given to previous FLE findings, since alternative accounts are plausible and not ruled out by evidence. PMID:29662457

  6. Using flatbed scanners in the undergraduate optics laboratory—An example of frugal science

    NASA Astrophysics Data System (ADS)

    Koopman, Thomas; Gopal, Venkatesh

    2017-05-01

    We describe the use of a low-cost commercial flatbed scanner in the undergraduate teaching laboratory to image large (~25 cm) interference and diffraction patterns in two dimensions. Such scanners usually have an 8-bit linear photosensor array that can scan large areas (~28 cm × 22 cm) at very high spatial resolutions (≥100 Megapixels), which makes them versatile large-format imaging devices. We describe how the scanner can be used to image interference and diffraction from rectangular single-slit, double-slit, and circular apertures. The experiments are very simple to set up and require no specialized components besides a small laser and a flatbed scanner. Due to the presence of Automatic Gain Control in the scanner, which we were not able to override, we were unable to get an excellent fit to the data. Interestingly, we found that the less-than-ideal data were actually pedagogically superior as it forced the students to think about the process of data acquisition in much greater detail instead of simply performing the experiment mechanically.

  7. Mutual learning and reverse innovation--where next?

    PubMed

    Crisp, Nigel

    2014-03-28

    There is a clear and evident need for mutual learning in global health systems. It is increasingly recognized that innovation needs to be sourced globally and that we need to think in terms of co-development as ideas are developed and spread from richer to poorer countries and vice versa. The Globalization and Health journal's ongoing thematic series, "Reverse innovation in global health systems: learning from low-income countries" illustrates how mutual learning and ideas about so-called "reverse innovation" or "frugal innovation" are being developed and utilized by researchers and practitioners around the world. The knowledge emerging from the series is already catalyzing change and challenging the status quo in global health. The path to truly "global innovation flow", although not fully established, is now well under way. Mobilization of knowledge and resources through continuous communication and awareness raising can help sustain this movement. Global health learning laboratories, where partners can support each other in generating and sharing lessons, have the potential to construct solutions for the world. At the heart of this dialogue is a focus on creating practical local solutions and, simultaneously, drawing out the lessons for the whole world.

  8. Evidence accumulation in decision making: unifying the "take the best" and the "rational" models.

    PubMed

    Lee, Michael D; Cummins, Tarrant D R

    2004-04-01

    An evidence accumulation model of forced-choice decision making is proposed to unify the fast and frugal take the best (TTB) model and the alternative rational (RAT) model with which it is usually contrasted. The basic idea is to treat the TTB model as a sequential-sampling process that terminates as soon as any evidence in favor of a decision is found and the rational approach as a sequential-sampling process that terminates only when all available information has been assessed. The unified TTB and RAT models were tested in an experiment in which participants learned to make correct judgments for a set of real-world stimuli on the basis of feedback, and were then asked to make additional judgments without feedback for cases in which the TTB and the rational models made different predictions. The results show that, in both experiments, there was strong intraparticipant consistency in the use of either the TTB or the rational model but large interparticipant differences in which model was used. The unified model is shown to be able to capture the differences in decision making across participants in an interpretable way and is preferred by the minimum description length model selection criterion.
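    The two decision policies contrasted in this abstract can be sketched as different stopping rules over the same cue evidence: TTB stops at the first discriminating cue, while the rational benchmark integrates all cues before deciding. The functions below are illustrative simplifications, not the authors' unified sequential-sampling model:

```python
def take_the_best(cues_a, cues_b, validity_order):
    """TTB sketch: examine cues in descending validity order; the first
    cue that discriminates between options A and B decides, and search
    stops immediately (fast and frugal)."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return None  # no cue discriminates: guess

def rational_choice(cues_a, cues_b, weights):
    """Rational (RAT) sketch: integrate all available cues as a
    weighted evidence sum before deciding."""
    score = sum(w * (a - b) for w, a, b in zip(weights, cues_a, cues_b))
    if score == 0:
        return None  # evidence balanced: guess
    return "A" if score > 0 else "B"
```

On the same cue profile the two policies can disagree (e.g. when the most valid cue points one way but the remaining cues jointly point the other), which is what allows the experiment described above to diagnose which policy a participant is using.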

  9. Frugal Chemoprevention: Targeting Nrf2 with Foods Rich in Sulforaphane

    PubMed Central

    Yang, Li; Palliyaguru, Dushani L.; Kensler, Thomas W.

    2015-01-01

    With the properties of efficacy, safety, tolerability, practicability and low cost, foods containing bioactive phytochemicals are gaining significant attention as elements of chemoprevention strategies against cancer. Sulforaphane [1-isothiocyanato-4-(methylsulfinyl)butane], a naturally occurring isothiocyanate produced by cruciferous vegetables such as broccoli, is found to be a highly promising chemoprevention agent against not only a variety of cancers such as breast, prostate, colon, skin, lung, stomach or bladder carcinogenesis, but also cardiovascular disease, neurodegenerative diseases, and diabetes. For reasons of experimental exigency, pre-clinical studies have focused principally on sulforaphane itself, while clinical studies have relied on broccoli sprout preparations rich in either sulforaphane or its biogenic precursor, glucoraphanin. Substantive subsequent evaluation of sulforaphane pharmacokinetics and pharmacodynamics has been undertaken using either pure compound or food matrices. Sulforaphane affects multiple targets in cells. One key molecular mechanism of action for sulforaphane entails activation of the Nrf2-Keap1 signaling pathway although other actions contribute to the broad spectrum of efficacy in different animal models. This review summarizes the current status of pre-clinical chemoprevention studies with sulforaphane and highlights the progress and challenges for the application of foods rich in sulforaphane and/or glucoraphanin in the arena of clinical chemoprevention. PMID:26970133

  10. Carbon dioxide emissions, economic growth, energy use, and urbanization in Saudi Arabia: evidence from the ARDL approach and impulse saturation break tests.

    PubMed

    Raggad, Bechir

    2018-05-01

    This study investigates the existence of a long-run relationship between CO2 emissions, economic growth, energy use, and urbanization in Saudi Arabia over the period 1971-2014. The autoregressive distributed lag (ARDL) approach with structural breaks, where the breaks are identified with recently developed impulse saturation break tests, is applied to conduct the analysis. The bounds test result supports the existence of a long-run relationship among the variables. The environmental Kuznets curve (EKC) hypothesis has also been tested. The results reveal the non-validity of the EKC hypothesis for Saudi Arabia, as the relationship between GDP and pollution is positive in both the short and the long run. Moreover, energy use increases pollution in both the short and the long run in the country. On the contrary, the results show a negative and significant impact of urbanization on carbon emissions in Saudi Arabia, which means that urban development is not an obstacle to the improvement of environmental quality. Consequently, policy-makers in Saudi Arabia should pursue efficiency enhancement and frugality in energy consumption, and especially increase the share of renewable energies in the total energy mix.
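
The EKC test mentioned above amounts to fitting a quadratic of (log) emissions in (log) GDP and checking for a negative quadratic term with an in-sample turning point. A minimal sketch with purely synthetic, hypothetical data (not the study's actual series or its ARDL estimator):

```python
import numpy as np

# Illustrative only: the EKC hypothesis posits an inverted-U relationship,
# log(CO2) = a + b*log(GDP) + c*log(GDP)^2 with c < 0. The study rejects
# this for Saudi Arabia (GDP-pollution link stays positive); the synthetic
# data below are built to be monotone, so the check should fail too.
rng = np.random.default_rng(0)
log_gdp = np.linspace(8, 11, 44)                    # 44 annual observations
log_co2 = 1.0 + 0.5 * log_gdp + 0.02 * log_gdp**2   # monotone: no EKC
log_co2 += rng.normal(0, 0.01, size=log_gdp.size)   # small measurement noise

c, b, a = np.polyfit(log_gdp, log_co2, 2)           # quadratic fit, highest degree first
has_ekc = c < 0 and -b / (2 * c) < log_gdp.max()    # turning point inside sample?
```

A full replication would instead estimate an ARDL error-correction form with break dummies, but the inverted-U check itself reduces to this sign-and-turning-point test.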

  11. The rapid reproducers paradox: population control and individual procreative rights.

    PubMed

    Wissenburg, M

    1998-01-01

    This article argues that population policies need to be evaluated from macro and micro perspectives and to consider individual rights. Ecological arguments that are stringent conditions of liberal democracy are assessed against a moral standard. The moral standard is applied to a series of reasons for limiting procreative rights in the cause of sustainability. The focus is directly on legally enforced antinatalist measures and not on indirect policies with incentives and disincentives. The explicit assumption is that population policy violates fairness to individuals for societal gain and that population policies are incompatible with stringent conditions of liberal democracy. The author identifies the individual-societal tradeoff as the "rapid reproducers paradox." The perfect sustainable population level is either not possible or is a repugnant alternative. Twelve ecological arguments are presented, and none is found compatible with notions of a liberal democracy. Three alternative antinatalist options are the acceptance of less rigid but still coercive policies, amendments to the conception of liberal democracy, or loss of hope and the choice of noncoercive solutions to sustainability, none of which is found viable. If voluntary abstinence and distributive solutions fail, then frugal demand options and technological supply options will both be necessary.

  12. ParkinsonNet: A Low-Cost Health Care Innovation With A Systems Approach From The Netherlands.

    PubMed

    Bloem, Bas R; Rompen, Lonneke; Vries, Nienke M de; Klink, Ab; Munneke, Marten; Jeurissen, Patrick

    2017-11-01

    ParkinsonNet, a low-cost innovation to optimize care for patients with Parkinson disease, was developed in 2004 as a network of physical therapists in several regions in the Netherlands. Since that time, the network has achieved full national reach, with 70 regional networks and around 3,000 specifically trained professionals from 12 disciplines. Key elements include the empowerment of professionals who are highly trained and specialized in Parkinson disease, the empowerment of patients by education and consultation, and the empowerment of integrated multidisciplinary teams to better address and manage the disease. Studies have found that the ParkinsonNet approach leads to outcomes that are at least as good as, if not better than, outcomes from usual care. One study found a 50 percent reduction in hip fractures and fewer inpatient admissions. Other studies suggest that ParkinsonNet leads to modest but important cost savings (at least US$439 per patient annually). These cost savings outweigh the costs of building and maintaining the network. Because of ParkinsonNet's success, the program has now spread to several other countries and serves as a model of a successful and scalable frugal innovation.

  13. Streamlining On-Demand Access to Joint Polar Satellite System (JPSS) Data Products for Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Tislin, D.

    2017-12-01

    Observations from the Joint Polar Satellite System (JPSS) support National Weather Service (NWS) forecasters, whose Advanced Weather Interactive Processing System (AWIPS) Data Delivery (DD) will access JPSS data products on demand from the National Environmental Satellite, Data, and Information Service (NESDIS) Product Distribution and Access (PDA) service. Based on the Open Geospatial Consortium (OGC) Web Coverage Service, this on-demand service promises broad interoperability and frugal use of data networks by serving only the data that a user needs. But the volume, velocity, and variety of JPSS data products impose several challenges on such a service. It must be efficient, to handle large volumes of complex, frequently updated data and to fulfill many concurrent requests. It must offer flexible data handling and delivery, to work with a diverse and changing collection of data and to tailor its outputs into products that users need, with minimal coordination between provider and user communities. It must support 24x7 operation, with no pauses in incoming data or user demand; and it must scale to rapid changes in data volume, variety, and demand as new satellites launch, more products come online, and users rely increasingly on the service. We are addressing these challenges in order to build an efficient and effective on-demand JPSS data service. For example, on-demand subsetting by many users at once may overload a server's processing capacity or its disk bandwidth - unless alleviated by spatial indexing, geolocation transforms, or pre-tiling and caching. Filtering by variable (band/layer) may also alleviate network loads and provide fine-grained variable selection; to that end we are investigating how best to provide random access into the variety of spatiotemporal JPSS data products. Finally, producing tailored products (derivatives, aggregations) can boost flexibility for end users, but some tailoring operations may impose significant server loads. Operating this service in a cloud computing environment allows cost-effective scaling during the development and early deployment phases - and perhaps beyond. We will discuss how NESDIS and NWS are assessing and addressing these challenges to provide timely and effective access to JPSS data products for weather forecasters throughout the country.
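
To make the pre-tiling-and-caching idea concrete, here is a small sketch, with a hypothetical tile size and grid rather than the NESDIS implementation, in which a large 2D swath is cut into cached tiles once and an on-demand bounding-box subset is then assembled by touching only the tiles it overlaps:

```python
import numpy as np

TILE = 256  # hypothetical tile edge length, in grid cells
swath = np.arange(1024 * 1024, dtype=np.float32).reshape(1024, 1024)

# "Pre-tile": cache contiguous tiles keyed by (row_block, col_block), so a
# later subset request never scans the full swath.
tiles = {(r, c): swath[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE].copy()
         for r in range(swath.shape[0] // TILE)
         for c in range(swath.shape[1] // TILE)}

def subset(r0, r1, c0, c1):
    """Assemble rows r0:r1, cols c0:c1 from cached tiles only."""
    out = np.empty((r1 - r0, c1 - c0), dtype=swath.dtype)
    for r in range(r0 // TILE, (r1 - 1) // TILE + 1):
        for c in range(c0 // TILE, (c1 - 1) // TILE + 1):
            t = tiles[(r, c)]
            rr0, rr1 = max(r0, r * TILE), min(r1, (r + 1) * TILE)
            cc0, cc1 = max(c0, c * TILE), min(c1, (c + 1) * TILE)
            out[rr0-r0:rr1-r0, cc0-c0:cc1-c0] = t[rr0-r*TILE:rr1-r*TILE,
                                                  cc0-c*TILE:cc1-c*TILE]
    return out
```

A request covering a 300 x 400 window touches at most six tiles here; the same scheme is what lets many concurrent subset requests avoid saturating disk bandwidth.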

  14. After the Liverpool Care Pathway--development of heuristics to guide end of life care for people with dementia: protocol of the ALCP study.

    PubMed

    Davies, N; Manthorpe, J; Sampson, E L; Iliffe, S

    2015-09-02

    End of life care guidance for people with dementia is lacking and this has been made more problematic in England with the removal of one of the main end of life care guidelines which offered some structure, the Liverpool Care Pathway. This guidance gap may be eased with the development of heuristics (rules of thumb) which offer a fast and frugal form of decision-making. To develop a toolkit of heuristics (rules of thumb) for practitioners to use when caring for people with dementia at the end of life. A mixed-method study using a co-design approach to develop heuristics in three phases. In phase 1, we will conduct at least six focus groups with family carers, health and social care practitioners from both hospital and community care services, using the 'think-aloud' method to understand decision-making processes and to develop a set of heuristics. The focus group topic guide will be developed from the findings of a previous study of 46 interviews of family carers about quality end-of-life care for people with dementia and a review of the literature. A multidisciplinary development team of health and social care practitioners will synthesise the findings from the focus groups to devise and refine a toolkit of heuristics. Phase 2 will test the use of heuristics in practice in five sites: one general practice, one community nursing team, one hospital ward and two palliative care teams working in the community. Phase 3 will evaluate and further refine the toolkit of heuristics through group interviews, online questionnaires and semistructured interviews. This study has received ethical approval from a local NHS research ethics committee (Rec ref: 15/LO/0156). The findings of this study will be presented in peer-reviewed publications and national and international conferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  15. Marshall Space Flight Center Materials and Processes Laboratory

    NASA Technical Reports Server (NTRS)

    Tramel, Terri L.

    2012-01-01

    Marshall's Materials and Processes Laboratory has been a core capability for NASA for over fifty years. MSFC has a proven heritage and recognized expertise in materials and manufacturing that are essential to enable and sustain space exploration. Marshall provides a "systems-wise" capability for applied research, flight hardware development, and sustaining engineering. Our history of leadership and achievements in materials, manufacturing, and flight experiments includes Apollo, Skylab, Mir, Spacelab, Shuttle (Space Shuttle Main Engine, External Tank, Reusable Solid Rocket Motor, and Solid Rocket Booster), Hubble, Chandra, and the International Space Station. MSFC's National Center for Advanced Manufacturing (NCAM) facilitates major M&P advanced manufacturing partnership activities with academia, industry, and other local, state, and federal government agencies. The Materials and Processes Laboratory's principal competencies in metals, composites, ceramics, additive manufacturing, materials and process modeling and simulation, space environmental effects, non-destructive evaluation, and fracture and failure analysis provide products ranging from materials research in space to fully integrated solutions for large, complex systems challenges. Marshall's materials research, development, and manufacturing capabilities assure that NASA and national missions have access to cutting-edge, cost-effective engineering design and production options that are frugal in using design margins and are verified as safe and reliable. These are all critical factors in both future mission success and affordability.

  16. Two contrasted future scenarios for the French agro-food system.

    PubMed

    Billen, Gilles; Le Noë, Julia; Garnier, Josette

    2018-10-01

    Narratives of two prospective scenarios for the future of French agriculture were elaborated by pushing several trends already acting on the dynamics of the current system to their logical end. The first one pursues the opening and specialization characterizing the long-term evolution of the last 50 years of most French agricultural regions, while the second assumes a shift, already perceptible through weak signals, towards more autonomy at the farm and regional scales, a reconnection of crop and livestock farming and a more frugal human diet. A procedure is proposed to translate these qualitative narratives into a quantitative description of the corresponding nutrient fluxes using the GRAFS (Generalized Representation of Agro-Food Systems) methodology, thus allowing a comprehensive exploration of the agronomical and environmental performance of these two scenarios. The results show that the pursuit of the opening and specialization of French agriculture, even complying with regulations regarding reasoned fertilization, would result in considerable environmental burdens, namely in terms of water pollution. The scenario generalizing organic farming practices, reconnection of crop and livestock farming systems and a demitarian human diet makes it possible to meet the future national food demand while still exporting significant amounts of cereals to the international market and ensuring better groundwater quality in most French regions. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Price-Minimizing Behaviors in a Cohort of Smokers before and after a Cigarette Tax Increase.

    PubMed

    Betzner, Anne; Boyle, Raymond G; St Claire, Ann W

    2016-06-17

    Cigarette tax increases result in a reduced demand for cigarettes and increased efforts by smokers to reduce their cost of smoking. Less is known about how smokers think about their expenditures for cigarettes and the possible mechanisms that underlie price-minimizing behaviors. In-depth longitudinal interviews were conducted with Minnesota smokers to explore the factors that influence smokers' decisions one month prior to a $1.75 cigarette tax increase and again one and three months after the increase. A total of 42 smokers were sampled, with 35 completing interviews at all three time points, resulting in 106 interviews across all participants and time points. A qualitative descriptive approach examined smoking and buying habits, as well as reasons behind these decisions. A hierarchy of ways to save money on cigarettes included saving the most money by changing to roll your own pipe tobacco, changing to a cheaper brand, cutting down or quitting, changing to cigarillos, and buying online. Using coupons, shopping around, buying by the carton, changing the style of cigarette, and stocking up prior to the tax increase were described as less effective. Five factors emerged as impacting smokers' efforts to save money on cigarettes after the tax: brand loyalty, frugality, addiction, stress, and acclimation.

  18. Frugal chemoprevention: targeting Nrf2 with foods rich in sulforaphane.

    PubMed

    Yang, Li; Palliyaguru, Dushani L; Kensler, Thomas W

    2016-02-01

    With the properties of efficacy, safety, tolerability, practicability and low cost, foods containing bioactive phytochemicals are gaining significant attention as elements of chemoprevention strategies against cancer. Sulforaphane [1-isothiocyanato-4-(methylsulfinyl)butane], a naturally occurring isothiocyanate produced by cruciferous vegetables such as broccoli, is found to be a highly promising chemoprevention agent against not only a variety of cancers such as breast, prostate, colon, skin, lung, stomach or bladder, but also cardiovascular disease, neurodegenerative diseases, and diabetes. For reasons of experimental exigency, preclinical studies have focused principally on sulforaphane itself, while clinical studies have relied on broccoli sprout preparations rich in either sulforaphane or its biogenic precursor, glucoraphanin. Substantive subsequent evaluation of sulforaphane pharmacokinetics and pharmacodynamics has been undertaken using either pure compound or food matrices. Sulforaphane affects multiple targets in cells. One key molecular mechanism of action for sulforaphane entails activation of the Nrf2-Keap1 signaling pathway although other actions contribute to the broad spectrum of efficacy in different animal models. This review summarizes the current status of pre-clinical chemoprevention studies with sulforaphane and highlights the progress and challenges for the application of foods rich in sulforaphane and/or glucoraphanin in the arena of clinical chemoprevention. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Long-term and short-term action-effect links and their impact on effect monitoring.

    PubMed

    Wirth, Robert; Steinhauser, Robert; Janczyk, Markus; Steinhauser, Marco; Kunde, Wilfried

    2018-04-23

    People aim to produce effects in the environment, and according to ideomotor theory, actions are selected and executed via anticipations of their effects. Further, to ensure that an action has been successful and an effect has been realized, we must be able to monitor the consequences of our actions. However, action-effect links might vary between situations: some might apply in a majority of situations, while others might only apply on special occasions. With a combination of behavioral and electrophysiological markers, we show that monitoring of self-produced action effects interferes with other tasks, and that the length of effect monitoring is determined both by long-term action-effect links that hold for most situations and by short-term action-effect links that emerge from a current setting. Effect monitoring is fast and frugal when these action-effect links allow for valid anticipation of action effects, but otherwise effect monitoring takes longer and delays a subsequent task. Specific influences of long-term and short-term links on the P1/N1 and P3a further allow us to dissect the temporal dynamics of when these links interact for the purpose of effect monitoring. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Control of amphibious weed ipomoea (Ipomoea carnea) by utilizing it for the extraction of volatile fatty acids as energy precursors.

    PubMed

    Rafiq Kumar, M; Tauseef, S M; Abbasi, Tasneem; Abbasi, S A

    2015-01-01

    Volatile fatty acids (VFAs), comprising mainly acetic acid and lesser quantities of propionic and butyric acids, are generated when zoomass or phytomass is acted upon by acidogenic and acetogenic microorganisms. VFAs can be utilized by methanogens under anaerobic conditions to generate flammable methane-carbon dioxide mixtures known as 'biogas'. Acting on the premise that this manner of VFA utilization for generating relatively clean energy can be easily accomplished in a controlled fashion in conventional biogas plants as well as higher-rate anaerobic digesters, we have carried out studies aimed at generating VFAs from the pernicious weed ipomoea (Ipomoea carnea). The VFA extraction was accomplished by a simple yet effective technology, appropriate for use even by laypersons. For this, acid-phase reactors were set up, to which measured quantities of ipomoea leaves were charged along with water inoculated with cow dung. The reactors were stirred intermittently. It was found that VFA production started within hours of the mixing of the reactants and peaked by the 10th or 11th day in all the reactors, effecting a conversion of over 10% of the biomass into VFAs. The reactor performance had good reproducibility and the process appeared easily controllable, frugal and robust.

  1. Control of amphibious weed ipomoea (Ipomoea carnea) by utilizing it for the extraction of volatile fatty acids as energy precursors

    PubMed Central

    Rafiq Kumar, M.; Tauseef, S.M.; Abbasi, Tasneem; Abbasi, S.A.

    2014-01-01

    Volatile fatty acids (VFAs), comprising mainly acetic acid and lesser quantities of propionic and butyric acids, are generated when zoomass or phytomass is acted upon by acidogenic and acetogenic microorganisms. VFAs can be utilized by methanogens under anaerobic conditions to generate flammable methane–carbon dioxide mixtures known as ‘biogas’. Acting on the premise that this manner of VFA utilization for generating relatively clean energy can be easily accomplished in a controlled fashion in conventional biogas plants as well as higher-rate anaerobic digesters, we have carried out studies aimed at generating VFAs from the pernicious weed ipomoea (Ipomoea carnea). The VFA extraction was accomplished by a simple yet effective technology, appropriate for use even by laypersons. For this, acid-phase reactors were set up, to which measured quantities of ipomoea leaves were charged along with water inoculated with cow dung. The reactors were stirred intermittently. It was found that VFA production started within hours of the mixing of the reactants and peaked by the 10th or 11th day in all the reactors, effecting a conversion of over 10% of the biomass into VFAs. The reactor performance had good reproducibility and the process appeared easily controllable, frugal and robust. PMID:25685545

  2. The Role of Sister Cities' Staff Exchanges in Developing "Learning Cities": Exploring Necessary and Sufficient Conditions in Social Capital Development Utilizing Proportional Odds Modeling.

    PubMed

    Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy

    2015-06-24

    In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level where such activity is seen as a key component in building "learning cities" through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking while further assuming that the existence of such demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Modeling with a Cumulative Logit Link approach to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met.
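
The proportional-odds (cumulative logit) model named in the abstract can be sketched in a few lines: the probability of falling at or below each ordinal category is a logistic function of a category-specific cutpoint minus a single shared slope. The thresholds and slope below are hypothetical illustrations, not estimates from the study.

```python
import math

def category_probs(x, thetas, beta):
    """Return P(Y = j) for ordinal categories j = 1..len(thetas)+1,
    under the proportional-odds model P(Y <= j | x) = logistic(theta_j - beta*x),
    where the slope beta is shared across all cutpoints."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - beta * x) for t in thetas] + [1.0]   # cumulative probs
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Four ordered response categories => three cutpoints (hypothetical values).
probs = category_probs(x=0.5, thetas=[-1.0, 0.0, 1.5], beta=2.0)
```

Because the slope is common to every cutpoint, increasing x shifts probability mass uniformly toward higher categories, which is the "common latent structure" the model is used to demonstrate.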

  3. The Role of Sister Cities’ Staff Exchanges in Developing “Learning Cities”: Exploring Necessary and Sufficient Conditions in Social Capital Development Utilizing Proportional Odds Modeling

    PubMed Central

    Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy

    2015-01-01

    In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level where such activity is seen as a key component in building “learning cities” through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking while further assuming that the existence of such demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Modeling with a Cumulative Logit Link approach to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met. PMID:26114245

  4. Price-Minimizing Behaviors in a Cohort of Smokers before and after a Cigarette Tax Increase

    PubMed Central

    Betzner, Anne; Boyle, Raymond G.; St. Claire, Ann W.

    2016-01-01

    Cigarette tax increases result in a reduced demand for cigarettes and increased efforts by smokers to reduce their cost of smoking. Less is known about how smokers think about their expenditures for cigarettes and the possible mechanisms that underlie price-minimizing behaviors. In-depth longitudinal interviews were conducted with Minnesota smokers to explore the factors that influence smokers’ decisions one month prior to a $1.75 cigarette tax increase and again one and three months after the increase. A total of 42 smokers were sampled, with 35 completing interviews at all three time points, resulting in 106 interviews across all participants and time points. A qualitative descriptive approach examined smoking and buying habits, as well as reasons behind these decisions. A hierarchy of ways to save money on cigarettes included saving the most money by changing to roll your own pipe tobacco, changing to a cheaper brand, cutting down or quitting, changing to cigarillos, and buying online. Using coupons, shopping around, buying by the carton, changing the style of cigarette, and stocking up prior to the tax increase were described as less effective. Five factors emerged as impacting smokers’ efforts to save money on cigarettes after the tax: brand loyalty, frugality, addiction, stress, and acclimation. PMID:27322301

  5. Cutaneous water loss and the development of the stratum corneum of nestling house sparrows (Passer domesticus) from desert and mesic environments.

    PubMed

    Muñoz-Garcia, Agustí; Williams, Joseph B

    2011-01-01

    Evaporation through the skin contributes to more than half of the total water loss in birds. Therefore, we expect the regulation of cutaneous water loss (CWL) to be crucial for birds, especially those that live in deserts, to maintain a normal state of hydration. Previous studies in adult birds showed that modifications of the lipid composition of the stratum corneum (SC), the outer layer of the epidermis, were associated with changes in rates of CWL. However, few studies have examined the ontogeny of CWL and the lipids of the SC in nestling birds. In this study, we measured CWL and the lipid composition of the SC during development of nestlings from two populations of house sparrows, one from the deserts of Saudi Arabia and the other from mesic Ohio. We found that desert and mesic nestlings followed different developmental trajectories for CWL. Desert nestlings seemed to make more frugal use of water than did mesic nestlings. To regulate CWL, nestlings appeared to modify the lipid composition of the SC during ontogeny. Our results also suggest a tighter regulation of CWL in desert nestlings, presumably as a result of the stronger selection pressures to which nestlings are exposed in deserts.

  6. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES: To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS: We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thickness. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS: Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The relative error of the number of segments method was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 that of semi-automatic computer-aided diagnosis. CONCLUSIONS: A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. Because semi-automatic and automatic computer-aided diagnosis were complementary, in clinical use it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
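
The core volumetric measurement described above (lobar volume and the emphysematous sub-volume below −950 HU) reduces to masked thresholding of a CT array. The following sketch uses a tiny synthetic volume and an invented lobe mask, not the study's segmentation pipeline:

```python
import numpy as np

# Hypothetical miniature CT volume in Hounsfield units and a fake lobe mask.
rng = np.random.default_rng(1)
hu = rng.integers(-1000, 100, size=(4, 8, 8))       # fake HU values per voxel
lobe_mask = np.zeros(hu.shape, dtype=bool)
lobe_mask[:, :4, :] = True                          # pretend this half is one lobe

voxel_ml = (1.0 * 1.0 * 1.0) / 1000.0               # 1 mm^3 voxels, expressed in mL

# Lobar volume = voxels in the lobe; emphysematous volume = lobe voxels < -950 HU.
lobe_volume = lobe_mask.sum() * voxel_ml
emph_volume = ((hu < -950) & lobe_mask).sum() * voxel_ml
emph_index = emph_volume / lobe_volume              # low-attenuation volume fraction
```

In a real pipeline the lobe mask is the hard part (that is what the semi-automatic and automatic computer-aided diagnosis methods differ on); once the mask exists, the volumetry itself is this simple count-and-scale.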

  7. The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    NASA Technical Reports Server (NTRS)

    Chesler, L.; Pierce, S.

    1971-01-01

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
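
A multistep method of the general kind evaluated in the report reuses previously computed derivative values instead of re-evaluating the right-hand side several times per step. The sketch below shows a two-step Adams-Bashforth integrator on a toy circular-orbit analogue x'' = -x; the step size and test problem are illustrative, not taken from the report.

```python
def ab2(f, y0, h, n):
    """Two-step Adams-Bashforth: y_{k+1} = y_k + h*(3/2 f_k - 1/2 f_{k-1}).

    The first step is bootstrapped with a single Euler step, since a
    two-step method needs two starting values.
    """
    ys = [list(y0)]
    f_prev = f(ys[0])
    ys.append([y + h * fy for y, fy in zip(ys[0], f_prev)])  # Euler bootstrap
    for _ in range(n - 1):
        f_curr = f(ys[-1])
        ys.append([y + h * (1.5 * fc - 0.5 * fp)
                   for y, fc, fp in zip(ys[-1], f_curr, f_prev)])
        f_prev = f_curr
    return ys

# State y = (x, v) with x' = v, v' = -x; exact solution x(t) = cos(t).
orbit = ab2(lambda y: (y[1], -y[0]), y0=(1.0, 0.0), h=0.01, n=100)
```

After 100 steps (t = 1.0) the computed x stays close to cos(1.0) ≈ 0.5403; generalized and cyclic variants change the coefficient sets and their sequencing, but the update structure is the same.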

  8. How should "hot" players in basketball be defended? The use of fast-and-frugal heuristics by basketball coaches and players in response to streakiness.

    PubMed

    Csapo, Peter; Avugos, Simcha; Raab, Markus; Bar-Eli, Michael

    2015-01-01

    Previous research has shown that changes in shot difficulty may have rendered the hot-hand effect in basketball unobservable and are potentially a result of defensive adjustments. However, it has not been directly analysed whether strategic changes indeed take place in response to streakiness and whether they are effective with respect to winning games. The current work consists of an experimental study with 18 professional coaches and 20 players based on video sequences from National Basketball Association games, where the shown player displayed a streaky performance in half of the sequences. While coaches were asked to devise a defensive strategy after each viewed sequence, players had to assume the role of the shown player and decide whether to shoot or pass the ball. We find that coaches tended to increase the defensive pressure significantly more often on presumably hot players and thus make use of the hot-hand heuristic. Meanwhile, players chose to shoot more frequently in low-pressure and streaky situations but selected "pass" regardless of the previous performance when they faced increased defensive pressure. Assuming that a streaky player's performance is indeed elevated during hot phases, hot-hand behaviour can be considered adaptive in certain situations as it led hot players to pass instead of shoot.

  9. Does unilateral insular resection disturb personality? A study with epileptic patients.

    PubMed

    Hébert-Seropian, Benjamin; Boucher, Olivier; Sénéchal, Carole; Rouleau, Isabelle; Bouthillier, Alain; Lepore, Franco; Nguyen, Dang Khoa

    2017-09-01

    The insula is now regarded as a potential site of epileptogenesis in drug-resistant epilepsy, and the advent of microsurgical techniques has allowed insular cortectomy to become a treatment of choice when the insular cortex is involved in the seizure focus. However, considering the evidence of an insular role in socio-emotional processing, it remains unknown whether these cortical resections disturb personality and social behavior as experienced in daily life. We examined such changes in a group of patients (n=19) who underwent epilepsy surgery involving partial or complete resection of the insula, and compared them to a group of patients who underwent standard temporal lobe epilepsy (TLE) surgery (n=19) as a lesion-control group. Participants were assessed on the Iowa Scales of Personality Change (ISPC), completed by a close relative at least six months after surgery. While postoperative changes did not significantly differ between groups on any of the ISPC items, insular resections were associated with mild but significant postoperative increases in irritability, emotional lability, anxiety, and frugality, which, apart from anxiety, were not significant among TLE patients. Our results are congruent with the idea that the insula contributes to emotion processing. To our knowledge, this study is the first to systematically assess personality changes in a consecutive sample of patients with insular resections. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Phytoremediation of Alberta oil sand tailings using native plants and fungal endophytes

    NASA Astrophysics Data System (ADS)

    Repas, T.; Germida, J.; Kaminskyj, S.

    2012-04-01

    Fungal endophytes colonize host plants without causing disease. Some endophytes confer plant tolerance to harsh environments. One such endophyte, Trichoderma harzianum strain TSTh20-1, was isolated from a plant growing on Athabasca oil sand tailings. Tailing sands are a high volume waste product from oil sand extraction that the industry is required to remediate. Tailing sands are low in organic carbon and mineral nutrients, and are hydrophobic due to residual polyaromatic hydrocarbons. Typically, tailing sands are remediated by planting young trees in large quantities of mulch plus mineral fertilizer, which is costly and labour intensive. In greenhouse trials, TSTh20-1 supports growth of tomato seedlings on tailing sands without fertilizer. The potential use of TSTh20-1 in combination with native grasses and forbs to remediate under field conditions is being assessed. Twenty-three commercially available plant species are being screened for seed germination and growth on tailing sands in the presence of TSTh20-1. The best candidates from this group will be used in greenhouse and small scale field trials. Potential mechanisms that contribute to endophyte-induced plant growth promotion, such as plant hormone production, stress tolerance, mineral solubilization, and uptake are also being assessed. As well, TSTh20-1 appears to be remarkably frugal in its nutrient requirements and the possibility that this attribute is characteristic of other plant-fungal endophytes from harsh environments is under study.

  11. Précis of Simple heuristics that make us smart.

    PubMed

    Todd, P M; Gigerenzer, G

    2000-10-01

    How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
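    The one-reason decision making described above can be illustrated with a minimal sketch of the take-the-best heuristic from the fast-and-frugal toolbox: cues are searched in order of validity, and the first cue that discriminates between the options stops the search and decides alone. The cue names and toy city data below are invented for illustration, not drawn from the book.

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Return 'A', 'B', or 'guess' by one-reason decision making.

    cues_a, cues_b: dicts mapping cue name -> 1 (positive) or 0 (negative).
    cue_order: cue names sorted from most to least valid.
    """
    for cue in cue_order:
        a, b = cues_a[cue], cues_b[cue]
        if a != b:                      # first discriminating cue: stop search
            return 'A' if a > b else 'B'
    return 'guess'                      # no cue discriminates

# Toy example (hypothetical cues): which of two cities is larger?
order = ['has_team', 'is_capital', 'has_university']
city_a = {'has_team': 1, 'is_capital': 0, 'has_university': 1}
city_b = {'has_team': 1, 'is_capital': 1, 'has_university': 1}
print(take_the_best(city_a, city_b, order))  # -> B
```

    Note how search, stopping, and decision are separate building blocks, exactly the decomposition the précis describes; only one cue ever enters the final decision.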

  12. Sampling and assessment accuracy in mate choice: a random-walk model of information processing in mating decision.

    PubMed

    Castellano, Sergio; Cermelli, Paolo

    2011-04-07

    Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
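    The model's core mechanism, integrating noisy quality information over time until a stopping-rule criterion is reached, can be sketched as a simple random-walk (drift-diffusion) simulation. All parameter values below are illustrative assumptions, not the authors' fitted values.

```python
import random

def sequential_sample(drift, threshold, noise=1.0, dt=0.01, max_steps=100_000):
    """Accumulate noisy evidence until it crosses +threshold (accept the
    mate) or -threshold (reject); returns (choice, decision_time)."""
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += drift * dt + random.gauss(0.0, noise * dt ** 0.5)
        t += dt
        if abs(x) >= threshold:
            return (1 if x > 0 else -1), t
    return 0, t  # no decision reached within the time limit

random.seed(1)
choice, t = sequential_sample(drift=0.8, threshold=1.0)
```

    Lowering the threshold yields faster but less accurate assessments of each male, which frees time to sample more males: the individual-assessment versus population-sampling tradeoff the model analyses.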

  13. Genomes in Turmoil: Frugality Drives Microbial Community Structure in Extremely Acidic Environments

    NASA Astrophysics Data System (ADS)

    Holmes, D. S.

    2016-12-01

    Extremely acidic environments […] To gain insight into these issues, we have conducted deep bioinformatic analyses, including metabolic reconstruction of key assimilatory pathways, phylogenomics, and network scrutiny of >160 genomes of acidophiles, including representatives from Archaea, Bacteria and Eukarya, and at least ten metagenomes of acidic environments [Cardenas JP, et al., pp. 179-197 in Acidophiles, eds R. Quatrini and D. B. Johnson, Caister Academic Press, UK (2016)]. Results yielded valuable insights into cellular processes, including carbon and nitrogen management and energy production, linking biogeochemical processes to organismal physiology. They also provided insight into the evolutionary forces that shape the genomic structure of members of acidophile communities. Niche partitioning can explain diversity patterns in rapidly changing acidic environments such as bioleaching heaps. However, in spatially and temporally homogeneous acidic environments, genome flux appears to provide deeper insight into the composition and evolution of acidic consortia. Acidophiles have undergone genome streamlining by gene loss, promoting mutual coexistence of species that exploit complementary use of scarce resources, consistent with the Black Queen hypothesis [Morris JJ et al., mBio 3: e00036-12 (2012)]. Acidophiles also have a large pool of accessory genes (the microbial super-genome) that can be accessed by horizontal gene transfer. This further promotes dependency relationships as drivers of community structure and the evolution of keystone species. Acknowledgements: Fondecyt 1130683; Basal CCTE PFB16

  14. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.

  15. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    USGS Publications Warehouse

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.

  16. 29 CFR 548.500 - Methods of computation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation of Overtime Pay § 548.500 Methods of computation. The methods of computing overtime pay on the basic rates for piece... pay at the regular rate. Example 1. Under an employment agreement the basic rate to be used in...

  17. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Method of computing coverage. 80.771 Section 80... STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna...

  18. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  19. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  20. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  1. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.

  2. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Tax computed by the exact throwback method. 1... Taxable Years Beginning Before January 1, 1969 § 1.669(a)-3 Tax computed by the exact throwback method. (a... compute the tax, on amounts deemed distributed under section 666, by the exact throwback method provided...

  3. Control mechanism of double-rotator-structure ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, SONG; Liping, YAN

    2017-03-01

    Double-rotator-structure ternary optical processor (DRSTOP) has two characteristics, namely giant data-bits parallel computing and a reconfigurable processor: it can handle thousands of data bits in parallel and can run much faster than conventional computers and other optical computer systems developed so far. In order to put DRSTOP into practical application, this paper established a series of methods: a task classification method, a data-bits allocation method, a control information generation method, a control information formatting and sending method, and a decoded-results obtaining method. Together these methods form the control mechanism of DRSTOP, which makes DRSTOP an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the contradiction between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designed a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.

  4. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on tetrahedral decomposition of the chest wall, as integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet-breathing data collected from each of them. Results show better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = 0.94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
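    The idea of summing prism volumes over a grid of markers can be conveyed with a simplified sketch. The paper's actual model defines 82 prisms from 89 chest markers; this toy version merely assumes a regular rectangular grid of marker heights.

```python
def prism_volume(heights, dx, dy):
    """Approximate the volume under a surface sampled on a regular grid:
    each grid cell contributes a prism of base dx*dy whose height is the
    mean of its four corner markers."""
    rows, cols = len(heights), len(heights[0])
    volume = 0.0
    for i in range(rows - 1):
        for j in range(cols - 1):
            corners = (heights[i][j], heights[i][j + 1],
                       heights[i + 1][j], heights[i + 1][j + 1])
            volume += dx * dy * sum(corners) / 4.0
    return volume

# Flat surface of height 2.0 over a 1 x 1 base -> volume 2.0
print(prism_volume([[2.0, 2.0], [2.0, 2.0]], 1.0, 1.0))  # -> 2.0
```

    Tracking this volume frame by frame over the marker trajectories would give a breathing-volume curve, from which tidal volume and similar parameters can be read off.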

  5. Discovering Synergistic Drug Combination from a Computational Perspective.

    PubMed

    Ding, Pingjian; Luo, Jiawei; Liang, Cheng; Xiao, Qiu; Cao, Buwen; Li, Guanghui

    2018-03-30

    Synergistic drug combinations play an important role in the treatment of complex diseases. The identification of effective drug combinations is vital to further reduce side effects and improve therapeutic efficiency. In previous years, the in vitro method has been the main route to discovering synergistic drug combinations. However, the in vitro method carries many limitations in time and resource consumption. Therefore, with the rapid development of computational models and the explosive growth of large-scale phenotypic data, computational methods for discovering synergistic drug combinations are an efficient and promising tool and contribute to precision medicine. How the computational model is constructed is the key question for these methods; different computational strategies yield different performance. In this review, recent advancements in computational methods for predicting effective drug combinations are summarized from multiple aspects. First, various datasets utilized to discover synergistic drug combinations are summarized. Second, we discuss feature-based approaches, partitioned into two classes: feature-based methods in terms of similarity measures, and feature-based methods in terms of machine learning. Third, we discuss network-based approaches for uncovering synergistic drug combinations. Finally, we analyze the prospects of computational methods for predicting effective drug combinations. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  6. Numerical computation of diffusion on a surface.

    PubMed

    Schwartz, Peter; Adalsteinsson, David; Colella, Phillip; Arkin, Adam Paul; Onsum, Matthew

    2005-08-09

    We present a numerical method for computing diffusive transport on a surface derived from image data. Our underlying discretization method uses a Cartesian grid embedded boundary method for computing the volume transport in a region consisting of all points a small distance from the surface. We obtain a representation of this region from image data by using a front propagation computation based on level set methods for solving the Hamilton-Jacobi and eikonal equations. We demonstrate that the method is second-order accurate in space and time and is capable of computing solutions on complex surface geometries obtained from image data of cells.

  7. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
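    The bootstrap procedure described above, resampling with replacement and recomputing a statistic many times, can be sketched as a percentile confidence interval for a scalar parameter. The data values below are invented for illustration.

```python
import random

def bootstrap_ci(sample, stat, n_boot=5000, alpha=0.05, rng=random):
    """Percentile bootstrap: resample WITH replacement, recompute the
    statistic on each replicate, and take empirical quantiles."""
    n = len(sample)
    reps = sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

random.seed(0)
data = [4.1, 5.2, 6.3, 4.8, 5.9, 5.5, 4.4, 6.1, 5.0, 5.7]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean)
# the interval brackets the sample mean of 5.3
```

    A randomization test follows the same highly repetitive pattern but resamples without replacement under the null hypothesis, which is why both fall under the "computer-intensive" umbrella.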

  8. A novel patient-specific model to compute coronary fractional flow reserve.

    PubMed

    Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo

    2014-09-01

    The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.

  9. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods that form the basis for the various digital-computer and numerical methods are presented. The physical models and mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. Situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.

  10. Vectorization on the star computer of several numerical methods for a fluid flow problem

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous imcompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  11. The Effect of Computer Assisted and Computer Based Teaching Methods on Computer Course Success and Computer Using Attitudes of Students

    ERIC Educational Resources Information Center

    Tosun, Nilgün; Suçsuz, Nursen; Yigit, Birol

    2006-01-01

    The purpose of this research was to investigate the effects of the computer-assisted and computer-based instructional methods on students achievement at computer classes and on their attitudes towards using computers. The study, which was completed in 6 weeks, were carried out with 94 sophomores studying in formal education program of Primary…

  12. Waveform inversion with source encoding for breast sound speed reconstruction in ultrasound computed tomography.

    PubMed

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
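    The source-encoding idea, combining all sources into one randomly weighted residual so that each gradient estimate costs a single forward simulation, can be sketched on a toy scalar least-squares problem. The per-source operators, data, and step size below are invented assumptions; this is a sketch of the encoding principle, not the WISE implementation itself.

```python
import random

def encoded_gradient(m, a, d, rng):
    """One stochastic gradient estimate for f(m) = sum_s (a_s*m - d_s)^2.

    A random Rademacher encoding vector w folds all sources into ONE
    encoded residual; in expectation over w the estimate equals the
    full gradient, so only one 'simulation' is needed per step."""
    w = [rng.choice((-1.0, 1.0)) for _ in a]          # random encoding vector
    r = sum(ws * (as_ * m - ds) for ws, as_, ds in zip(w, a, d))
    return 2.0 * sum(ws * as_ for ws, as_ in zip(w, a)) * r

rng = random.Random(0)
a = [1.0, 2.0, 3.0]                 # toy per-source forward operators
m_true = 0.7                        # parameter to recover
d = [as_ * m_true for as_ in a]     # noiseless synthetic data
m, step = 0.0, 0.01
for _ in range(2000):               # stochastic gradient descent
    m -= step * encoded_gradient(m, a, d, rng)
# m converges near m_true = 0.7
```

    In the full waveform-inversion setting, each a_s becomes a wave-equation solve for one transducer firing, which is exactly the cost the encoding amortizes.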

  13. Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure

    DTIC Science & Technology

    2017-01-05

    AFRL-AFOSR-JP-TR-2017-0002: Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure (Manabu...). Distribution unlimited: public release. Abstract: This report for the project titled 'Advanced Computational Methods for Optimization of

  14. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  15. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems. It has altered watershed hydrological processes in their distribution over time and space, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize over the space and time dimensions, calculating natural features in order (by grid unit or sub-basin) from upstream to downstream based on the distributed hydrological model. This article proposes a high-performance computing method for hydrological process simulation with high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing-power units. The method has strong adaptability and extensibility: it makes full use of computing and storage resources even when those resources are limited, and computing efficiency improves linearly as computing resources increase. This method can satisfy the parallel-computing requirements of hydrological process simulation in small, medium and large rivers.

  16. Cooking up a culinary identity for Belgium. Gastrolinguistics in two Belgian cookbooks (19th century).

    PubMed

    Parys, Nathalie

    2013-12-01

    The notion of cookbooks as socio-historic markers in a society is generally accepted within food studies. As both representations and prescriptions of food practices, perceived habits and attitudes towards food, they represent a certain identity for their readers. This paper investigates the nature of the identity that Belgian cookbooks constructed through their rhetoric. An important part of this study is to explore how and to what extent explicit reference to Belgium was made. To this end recipe titles/labels and recipe comments used in two leading bourgeois cookbooks from nineteenth-century Belgium were subjected to a quantitative and qualitative content analysis. The analysis showed that clear attention was paid to national culinary preferences. In terms of a domestic culinary corpus, it became apparent that both the Dutch and French editions of these cookbooks promoted dishes that were ascribed a Belgian origin. Internationality, however, was also an important building block of Belgian culinary identity. It was part of the desire of Belgian bourgeoisie to connect with an international elite. It fit into the 'search for sophistication', which was also expressed through the high representation of the more costly meats and sweet dishes. In addition, other references associated with bourgeois norms and values, such as family, convenience and frugality, were additional building blocks of Belgian culinary identity. Other issues such as tradition, innovation and health, were also matters of concerns to these Belgian cookbooks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Students aggress against professors in reaction to receiving poor grades: an effect moderated by student narcissism and self-esteem.

    PubMed

    Vaillancourt, Tracy

    2013-01-01

    Laboratory evidence about whether students' evaluations of teaching (SETs) are valid is lacking. Results from three independent studies strongly confirm that "professors" who were generous with their grades were rewarded for their favor with higher SETs, while professors who were frugal were punished with lower SETs (Study 1, d = 1.51; Study 2, d = 1.59; Study 3, partial η(2) = .26). This result was found even when the feedback was manipulated to be more or less insulting (Study 3). Consistent with laboratory findings on direct aggression, results also indicated that, when participants were given poorer feedback, higher self-esteem (Study 1 and Study 2) and higher narcissism (Study 1) were associated with their giving lower (more aggressive) evaluations of the "professor." Moreover, consistent with findings on self-serving biases, participants higher in self-esteem who were in the positive grade/feedback condition exhibited a self-enhancing bias by giving their "professor" higher evaluations (Study 1 and Study 2). The aforementioned relationships were not moderated by the professor's sex or rank (teaching assistant vs. professor). Results provide evidence that (1) students do aggress against professors through poor teaching evaluations, (2) threatened egotism among individuals with high self-esteem is associated with more aggression, especially when coupled with high narcissism, and (3) self-enhancing biases are robust among those with high self-esteem. © 2012 Wiley Periodicals, Inc.

  18. Mediterranean diet pyramid today. Science and cultural updates.

    PubMed

    Bach-Faig, Anna; Berry, Elliot M; Lairon, Denis; Reguant, Joan; Trichopoulou, Antonia; Dernini, Sandro; Medina, F Xavier; Battino, Maurizio; Belahsen, Rekia; Miranda, Gemma; Serra-Majem, Lluís

    2011-12-01

    To present the Mediterranean diet (MD) pyramid: a lifestyle for today. A new graphic representation has been conceived as a simplified main frame to be adapted to the different nutritional and socio-economic contexts of the Mediterranean region. This review gathers updated recommendations considering the lifestyle, dietary, sociocultural, environmental and health challenges that the current Mediterranean populations are facing. Mediterranean region and its populations. Many innovations have arisen since previous graphical representations of the MD. First, the concept of composition of the 'main meals' is introduced to reinforce the plant-based core of the dietary pattern. Second, frugality and moderation are emphasised because of the major public health challenge of obesity. Third, qualitative cultural and lifestyle elements are taken into account, such as conviviality, culinary activities, physical activity and adequate rest, along with proportion and frequency recommendations of food consumption. These innovations are made without omitting other items associated with the production, selection, processing and consumption of foods, such as seasonality, biodiversity, and traditional, local and eco-friendly products. Adopting a healthy lifestyle and preserving cultural elements should be considered in order to acquire all the benefits from the MD and preserve this cultural heritage. Considering the acknowledgment of the MD as an Intangible Cultural Heritage of Humanity by UNESCO (2010), and taking into account its contribution to health and general well-being, we hope to contribute to a much better adherence to this healthy dietary pattern and its way of life with this new graphic representation.

  19. The development of adaptive decision making: Recognition-based inference in children and adolescents.

    PubMed

    Horn, Sebastian S; Ruggeri, Azzurra; Pachur, Thorsten

    2016-09-01

    Judgments about objects in the world are often based on probabilistic information (or cues). A frugal judgment strategy that utilizes memory (i.e., the ability to discriminate between known and unknown objects) as a cue for inference is the recognition heuristic (RH). The usefulness of the RH depends on the structure of the environment, particularly the predictive power (validity) of recognition. Little is known about developmental differences in use of the RH. In this study, the authors examined (a) to what extent children and adolescents recruit the RH when making judgments, and (b) around what age adaptive use of the RH emerges. Primary schoolchildren (M = 9 years), younger adolescents (M = 12 years), and older adolescents (M = 17 years) made comparative judgments in task environments with either high or low recognition validity. Reliance on the RH was measured with a hierarchical multinomial model. Results indicated that primary schoolchildren already made systematic use of the RH. However, only older adolescents adaptively adjusted their strategy use between environments and were better able to discriminate between situations in which the RH led to correct versus incorrect inferences. These findings suggest that the use of simple heuristics does not progress unidirectionally across development but strongly depends on the task environment, in line with the perspective of ecological rationality. Moreover, adaptive heuristic inference seems to require experience and a developed base of domain knowledge. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
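The decision rule of the recognition heuristic described in this abstract can be sketched in a few lines (a toy illustration; the function name and the city examples are hypothetical, not taken from the study):

```python
import random

def recognition_heuristic(a, b, recognized):
    """Toy recognition heuristic (RH): in a two-alternative comparison,
    infer that the recognized object scores higher on the criterion.
    If both or neither object is recognized, the RH does not apply
    and the chooser simply guesses."""
    known_a, known_b = a in recognized, b in recognized
    if known_a and not known_b:
        return a
    if known_b and not known_a:
        return b
    return random.choice([a, b])  # RH not applicable: guess

# Hypothetical city-size comparison with a partial recognition set
recognized = {"Berlin", "Munich"}
print(recognition_heuristic("Berlin", "Chemnitz", recognized))  # Berlin
```

The usefulness of the rule then depends on recognition validity, i.e. how often the recognized object really does have the higher criterion value in the task environment.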

  20. [Living in abundance in the ancient and modern worlds from a medical and cultural-historical point of view].

    PubMed

    Mertz, D P

    2014-06-01

    Comparative investigations centre on attitudes of demand and consumption in ethnic groups living in affluence, beginning with the first pre-Christian century in the Roman Empire on the one hand and in Western countries in the post-industrial age of hight-tech in times of far advanced globalization on the other. In this context medical, psycho-social and socio-economical aspects will be treated considering ideal and cultural breaks. Renowned Roman and Greek historians, physicians and philosophers are vouching as witnesses of the times for developments in the antique world with their literary works, in excerpts and verbatim. Obviously general moral decay is a side effect of any affluence. Even in the antiquitiy the "ideology of renewal" proclaimed by the Emperor Augustus died away mostly in emptiness just as do the appeals for improving one's state of health for surviving directed to all citizens in our time. With the rise of Rome as a world power general relative affluence was widespread to such an extent that diseases caused by affluence have occured as mass phenomena. The old Roman virtues of temperance and frugality turned into greed and addiction to pleasure. In this way the Roman people under the banner of affluence degenerated into a society of leisure time, consumption, fun and throwaway mentality. The decline of the Empire was predetermined. The promise of affluence which modern Europe is addicted to is demanding its price following the principle of causality. "How the pictures resemble each other!"

  1. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. The method is based on our two previously proposed information transmission measures derived from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI) and the computer independent information transmission rate (RCI). The method uses the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments with three different systems on a graphical information input task confirm that the method offers an efficient way of determining computer usability.
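The two rates defined in the abstract are simple ratios, RDI = DI/T and RCI = CI/T. A minimal sketch (the numeric values are hypothetical measurements, and bits/seconds are assumed units):

```python
def information_rates(DI, CI, T):
    """Compute the two usability rates defined in the abstract:
    device independent rate RDI = DI/T and computer independent
    rate RCI = CI/T, where T is the task completion time."""
    if T <= 0:
        raise ValueError("task completion time must be positive")
    return DI / T, CI / T

# Hypothetical measurements: 120 bits (software level), 90 bits
# (task content level), transmitted over a 60-second task
rdi, rci = information_rates(DI=120.0, CI=90.0, T=60.0)
print(rdi, rci)  # 2.0 1.5
```

Comparing RDI across systems isolates device-operation overhead, while RCI reflects the task content level, which is the comparison the method exploits.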

  2. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
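The transmit/retransmit logic can be sketched as follows. This is an illustrative reduction, not the patent's implementation: the abstract does not say how errors are detected, so this sketch assumes the original data is available for comparison, and all names are made up.

```python
def transfer_with_check(channel, data):
    """Sketch of the described scheme: transmit the data, then
    retransmit it, and check each received copy for errors.
    The warning condition holds only if errors were introduced on
    BOTH the first transmission and the retransmission."""
    first = channel(data)    # transmit, then receive
    second = channel(data)   # retransmit, then re-receive
    error_first = first != data
    error_second = second != data
    return error_first and error_second  # trigger for the warning device

clean = lambda payload: payload            # error-free channel
corrupt = lambda payload: payload + b"!"   # channel that always corrupts
print(transfer_with_check(clean, b"payload"))    # False: no warning
print(transfer_with_check(corrupt, b"payload"))  # True: warning emitted
```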

  3. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    PubMed

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)

  5. Computer program CDCID: an automated quality control program using CDC update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, G.L.; Aguilar, F.

    1984-04-01

    A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.

  6. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    PubMed

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs). Several methods have been developed to assess computer work risk factors related to MSDs. This review aims to give an overview of current pen-and-paper-based observational techniques for assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods were focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors, reliability and validity of pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by the current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were proven to be reliable and were rated as moderate to good. For validity, only four of the seven methods were tested, and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and the office environment, at office workstations and computer work. Although the most important factor in developing a tool is proper validation of the exposure assessment technique, not all of the existing observational methods have been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  7. Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.

    PubMed

    Handels, H; Ehrhardt, J

    2009-01-01

    Medical image computing has become one of the most challenging fields in medical informatics. In image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters to characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the grade of automation, accuracy, reproducibility and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods of different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer assisted image interpretation, modeling and simulation as well as visualization and virtual reality. Especially, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients and will gain importance in the diagnostics and therapy of the future. From a methodical point of view the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals and advanced methods for 4D medical image computing.
The development of image analysis systems for diagnostic support or operation planning is a complex interdisciplinary process. Image computing methods enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.

  8. Methodical and technological aspects of creation of interactive computer learning systems

    NASA Astrophysics Data System (ADS)

    Vishtak, N. M.; Frolov, D. A.

    2017-01-01

    The article presents a methodology for the development of an interactive computer training system for training power plant personnel. The methods used in the work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for personnel preparation in educational and training centers is demonstrated. Development stages of computer training systems are identified, and factors in the efficient use of an interactive computer training system are analysed. An algorithm of the work performed at each development stage of the interactive computer training system is offered, which makes it possible to optimize the time, financial and labor expenditure on creating the system.

  9. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and a vast memory requirement. To solve these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used for calculating the holograms of points with different depth positions instead of layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.

  10. Seminar on Understanding Digital Control and Analysis in Vibration Test Systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The advantages of the digital methods over the analog vibration methods are demonstrated. The following topics are covered: (1) methods of computer-controlled random vibration and reverberation acoustic testing, (2) methods of computer-controlled sinewave vibration testing, and (3) methods of computer-controlled shock testing. General algorithms are described in the form of block diagrams and flow diagrams.

  11. Breast ultrasound computed tomography using waveform inversion with source encoding

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
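The source-encoding idea behind WISE can be illustrated with a toy linear surrogate in place of the wave-equation solver (an assumption: small random matrices stand in for the per-emitter simulations, and the sizes, seed, and step size are made up). Each iteration combines all sources with a random ±1 encoding vector, so one "simulation" of the encoded source yields a stochastic gradient for all the data at once:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear surrogate for the forward model: one matrix per "source"
# (the real method solves the acoustic wave equation instead).
n_src, n = 8, 16
models = [rng.standard_normal((4, n)) for _ in range(n_src)]
x_true = rng.standard_normal(n)            # "speed-of-sound" unknowns
data = [M @ x_true for M in models]        # simulated measurements

x = np.zeros(n)
for _ in range(5000):
    w = rng.choice([-1.0, 1.0], size=n_src)            # encoding vector
    M_enc = sum(wi * M for wi, M in zip(w, models))    # encoded source
    d_enc = sum(wi * d for wi, d in zip(w, data))      # encoded data
    grad = M_enc.T @ (M_enc @ x - d_enc)   # one encoded "simulation"
    x -= 1e-3 * grad                       # stochastic gradient step

print(np.linalg.norm(x - x_true))          # misfit shrinks toward zero
```

Because E[w_i w_j] is the identity, the encoded gradient is an unbiased estimate of the full-data gradient, which is why the stochastic iteration converges while costing one forward simulation per step instead of one per source.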

  12. Space-time VMS computation of wind-turbine rotor and tower aerodynamics

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey

    2014-01-01

    We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. 
We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  13. Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Spenser W.

    This thesis is on the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. 
We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  14. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performances of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data the best when calculating gene order. In addition, the following features were revealed: different distance formulas generated a different quality of gene order, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by GA and ACO methods. PMID:23369541
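The three distance formulas compared in the study are standard and easy to state in code (the toy expression profiles below are made up for illustration):

```python
import numpy as np

def pearson_distance(u, v):
    """1 - Pearson correlation: small when profiles co-vary."""
    return 1.0 - np.corrcoef(u, v)[0, 1]

def euclidean_distance(u, v):
    return float(np.linalg.norm(np.asarray(u, float) - np.asarray(v, float)))

def squared_euclidean_distance(u, v):
    """The formula the study found gave the best gene orders with GA/ACO."""
    d = np.asarray(u, float) - np.asarray(v, float)
    return float(d @ d)

g1, g2 = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]  # toy profiles
print(pearson_distance(g1, g2))            # ~0: perfectly correlated
print(euclidean_distance(g1, g2))          # sqrt(30) ~ 5.477
print(squared_euclidean_distance(g1, g2))  # 30.0
```

Note how the two profiles are "identical" under Pearson distance (same shape) yet far apart in Euclidean terms (different magnitude), which is why the choice of formula changes the resulting gene order.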

  15. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow that requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.

  16. Methods for Improving the User-Computer Interface. Technical Report.

    ERIC Educational Resources Information Center

    McCann, Patrick H.

    This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…

  17. Report of the Task Force on Computer Charging.

    ERIC Educational Resources Information Center

    Computer Co-ordination Group, Ottawa (Ontario).

    The objectives of the Task Force on Computer Charging as approved by the Committee of Presidents of Universities of Ontario were: (1) to identify alternative methods of costing computing services; (2) to identify alternative methods of pricing computing services; (3) to develop guidelines for the pricing of computing services; (4) to identify…

  18. The computational complexity of elliptic curve integer sub-decomposition (ISD) method

    NASA Astrophysics Data System (ADS)

    Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza

    2014-07-01

    The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of values k1 and k2 that are not bounded by ±C√n gives new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, which is determined by computing the cost of operations. These operations include elliptic curve operations and finite field operations.

  19. Human Expertise Helps Computer Classify Images

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Two-domain method of computational classification of images requires less computation than other methods for computational recognition, matching, or classification of images or patterns. Does not require explicit computational matching of features, and incorporates human expertise without requiring translation of mental processes of classification into language comprehensible to computer. Conceived to "train" computer to analyze photomicrographs of microscope-slide specimens of leucocytes from human peripheral blood to distinguish between specimens from healthy and specimens from traumatized patients.

  20. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  1. Computation of backwater and discharge at width constrictions of heavily vegetated flood plains

    USGS Publications Warehouse

    Schneider, V.R.; Board, J.W.; Colson, B.E.; Lee, F.N.; Druffel, Leroy

    1977-01-01

    The U.S. Geological Survey cooperated with the Federal Highway Administration and the State Highway Departments of Mississippi, Alabama, and Louisiana to develop a proposed method for computing backwater and discharge at width constrictions of heavily vegetated flood plains. Data were collected at 20 single-opening sites for 31 floods. Flood-plain width varied from 4 to 14 times the bridge-opening width. The recurrence intervals of peak discharge ranged from a 2-year flood to greater than a 100-year flood, with a median interval of 6 years. Measured backwater ranged from 0.39 to 3.16 feet. Backwater computed by the present standard Geological Survey method averaged 29 percent less than measured, and that computed by the currently used Federal Highway Administration method averaged 47 percent less than measured. Discharge computed by the Survey method averaged 21 percent more than measured. Analysis of the data showed that the flood-plain widths and the Manning's roughness coefficients are larger than those used to develop the standard methods. A method to compute backwater and discharge more accurately was developed. Backwater is defined as the difference between the contracted and natural water-surface profiles computed using standard step-backwater procedures. The energy-loss term in the step-backwater procedure, derived from potential flow theory, is computed as the product of the geometric mean of the energy slopes and the flow distance in the reach. The mean error was 1 percent when using the proposed method for computing backwater and 3 percent for computing discharge. (Woodard-USGS)
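    The energy-loss term described above can be sketched directly. A minimal Python illustration, assuming Manning's equation in U.S. customary units; the discharge, roughness, and section geometry values in the usage below are hypothetical, not the study's data.

```python
def manning_slope(Q, n, A, R):
    """Friction (energy) slope from Manning's equation in U.S. units:
    Q = (1.486/n) * A * R**(2/3) * S**(1/2), solved for S."""
    return (Q * n / (1.486 * A * R ** (2.0 / 3.0))) ** 2

def reach_energy_loss(S_up, S_down, L):
    """Energy loss over a reach: the geometric mean of the two section
    energy slopes times the flow distance, per the proposed method."""
    return (S_up * S_down) ** 0.5 * L
```

    The geometric mean always lies between the two section slopes and never exceeds their arithmetic mean, so this loss term is slightly smaller than the conventional average-slope loss.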

  2. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  3. Comparison of RCS prediction techniques, computations and measurements

    NASA Astrophysics Data System (ADS)

    Brand, M. G. E.; Vanewijk, L. J.; Klinker, F.; Schippers, H.

    1992-07-01

    Three calculation methods to predict radar cross sections (RCS) of three dimensional objects are evaluated by computing the radar cross sections of a generic wing inlet configuration. The following methods are applied: a three dimensional high frequency method, a three dimensional boundary element method, and a two dimensional finite difference time domain method. The results of the computations are compared with the data of measurements.

  4. An efficient method to compute spurious end point contributions in PO solutions. [Physical Optics

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.; Pistorius, Carl W. I.

    1987-01-01

    A method is given to compute the spurious endpoint contributions in the physical optics solution for electromagnetic scattering from conducting bodies. The method is applicable to general three-dimensional structures. The only information required to use the method is the radius of curvature of the body at the shadow boundary. Thus, the method is very efficient for numerical computations. As an illustration, the method is applied to several bodies of revolution to compute the endpoint contributions for backscattering in the case of axial incidence. It is shown that in high-frequency situations, the endpoint contributions obtained using the method are equal to the true endpoint contributions.

  5. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than the requirements for the CFD solution.

  6. A Computational Method to Determine Glucose Infusion Rates for Isoglycemic Intravenous Glucose Infusion Study.

    PubMed

    Choi, Karam; Lee, Jung Chan; Oh, Tae Jung; Kim, Myeungseon; Kim, Hee Chan; Cho, Young Min; Kim, Sungwan

    2016-01-01

    The results of the isoglycemic intravenous glucose infusion (IIGI) study need to mimic the dynamic glucose profiles of the oral glucose tolerance test (OGTT) to accurately calculate the incretin effect. The glucose infusion rates during IIGI studies have historically been determined by experienced research personnel using a manual ad-hoc method. In this study, a computational method was developed to automatically determine the infusion rates for the IIGI study based on a glucose-dynamics model. To evaluate the computational method, 18 subjects with normal glucose tolerance underwent a 75 g OGTT. One week later, Group 1 (n = 9) and Group 2 (n = 9) underwent IIGI studies using the ad-hoc method and the computational method, respectively. Both methods were evaluated using the correlation coefficient, mean absolute relative difference (MARD), and root mean square error (RMSE) between the glucose profiles from the OGTT and the IIGI study. The computational method exhibited significantly higher correlation (0.95 ± 0.03 versus 0.86 ± 0.10, P = 0.019), lower MARD (8.72 ± 1.83% versus 13.11 ± 3.66%, P = 0.002), and lower RMSE (10.33 ± 1.99 mg/dL versus 16.84 ± 4.43 mg/dL, P = 0.002) than the ad-hoc method. The computational method can facilitate the IIGI study and enhance its accuracy and stability. Using this computational method, a high-quality IIGI study can be accomplished without the need for experienced personnel.

  7. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  8. Comparison of algorithms for computing the two-dimensional discrete Hartley transform

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Burton, John C.; Miller, Keith W.

    1989-01-01

    Three methods are described for computing the two-dimensional discrete Hartley transform. Two employ a separable transform; the third, the vector-radix algorithm, does not require separability. In-place computation of the vector-radix method is described. Operation counts and execution times indicate that the vector-radix method is fastest.
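    The one-dimensional building block of the separable approaches can be sketched via the FFT: since cas(t) = cos(t) + sin(t), the Hartley transform is H = Re(F) - Im(F). A row-column pass of this 1-D transform, followed by Bracewell's cross-term correction (not shown here), yields the true two-dimensional DHT. A minimal numpy sketch, checked against the direct O(N²) cas-kernel sum:

```python
import numpy as np

def dht(x):
    """1-D discrete Hartley transform computed from the FFT:
    H[k] = Re(F[k]) - Im(F[k]) because cas(t) = cos(t) + sin(t)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def dht_direct(x):
    """Direct O(N^2) evaluation of H[k] = sum_n x[n]*cas(2*pi*n*k/N)."""
    N = len(x)
    n = np.arange(N)
    arg = 2.0 * np.pi * np.outer(n, n) / N
    return (np.cos(arg) + np.sin(arg)) @ x
```

    A useful check is the involution property of the DHT: applying the transform twice returns N times the input.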

  9. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each time step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, which raised the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.

  10. Solution of partial differential equations on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1985-01-01

    The present status of numerical methods for partial differential equations on vector and parallel computers is reviewed. The relevant aspects of these computers are discussed, and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations, as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods, as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.

  11. A vectorized Lanczos eigensolver for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1990-01-01

    The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
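    The core Lanczos recurrence that such eigensolvers vectorize can be sketched briefly. A minimal numpy version with full reorthogonalization (a simplification; production solvers use selective reorthogonalization, blocking, and shift-invert), illustrating that extreme Ritz values converge long before the iteration count reaches the matrix dimension. The test matrix is an illustrative assumption, not from the paper.

```python
import numpy as np

def lanczos(A, k, seed=0):
    """Run k steps of the Lanczos method with full reorthogonalization.
    Returns the k-by-k tridiagonal matrix T whose eigenvalues (Ritz
    values) approximate the extreme eigenvalues of the symmetric A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k))
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

    The matrix-vector product A @ q and the reorthogonalization are the dominant, highly vectorizable steps the abstract refers to.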

  12. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.

  13. New Computational Methods for the Prediction and Analysis of Helicopter Noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.

  14. Re-Computation of Numerical Results Contained in NACA Report No. 496

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a MATLAB(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  15. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.

  16. An efficient method for computing unsteady transonic aerodynamics of swept wings with control surfaces

    NASA Technical Reports Server (NTRS)

    Liu, D. D.; Kao, Y. F.; Fung, K. Y.

    1989-01-01

    A transonic equivalent strip (TES) method was further developed for unsteady flow computations on arbitrary wing planforms. The TES method consists of two consecutive correction steps applied to a given nonlinear code such as LTRAN2: a chordwise mean flow correction and a spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data; otherwise, it does not require the airfoil shape or grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including wings with control surfaces, are selected as computational examples. Overall trends in unsteady pressures are established by comparison with results obtained from the XTRAN3S code, Isogai's full potential code, and measurements by NLR and RAE. In comparison with these methods, TES achieves considerable savings in computer time with reasonable accuracy, which suggests immediate industrial applications.

  17. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is It Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: The Past, Today, and Future?

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
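    The iteration being accelerated can be sketched in plain dense form. The following Python sketch solves (J^T J + lam*I) dp = -J^T r with adaptive damping; the paper's contribution replaces exactly this repeated dense solve with a Krylov-subspace projection recycled across damping parameters. The exponential test model in the usage below is a hypothetical illustration, not the paper's groundwater problem.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, max_iter=100, tol=1e-20):
    """Basic dense Levenberg-Marquardt: at each step solve
    (J^T J + lam*I) dp = -J^T r, adapting the damping parameter lam."""
    p = np.asarray(p0, dtype=float)
    lam = 1e-3
    r = residual(p)
    cost = r @ r
    for _ in range(max_iter):
        J = jacobian(p)
        g = J.T @ r
        H = J.T @ J
        while True:
            dp = np.linalg.solve(H + lam * np.eye(len(p)), -g)
            r_new = residual(p + dp)
            cost_new = r_new @ r_new
            if cost_new < cost:  # accept the step and relax the damping
                p, r, cost, lam = p + dp, r_new, cost_new, lam / 10
                break
            lam *= 10            # reject the step and increase the damping
            if lam > 1e12:
                return p
        if cost < tol:
            break
    return p
```

    Each trial damping parameter requires a fresh linear solve with the same J, which is why recycling a single Krylov subspace across damping values pays off at scale.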

  19. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Method of computing coverage. 80.771 Section 80.771 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method...

  20. Computer-aided drug discovery.

    PubMed

    Bajorath, Jürgen

    2015-01-01

    Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.

  1. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim was to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method, which uses Lanczos bidiagonalization, is known to be computationally efficient for performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherently expensive nature of the MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.

  2. Ways of achieving continuous service from computers

    NASA Technical Reports Server (NTRS)

    Quinn, M. J., Jr.

    1974-01-01

    This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.

  3. Estimating costs and performance of systems for machine processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ballard, R. J.; Eastwood, L. F., Jr.

    1977-01-01

    This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.
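    The flavor of such an estimate can be sketched with a toy calculation. The operation count per pixel, instruction rate, overhead fraction, and billing rate below are hypothetical placeholders, not the paper's figures.

```python
def processing_estimate(n_pixels, ops_per_pixel, mips, dollars_per_hour,
                        overhead=0.3):
    """Estimate CPU time (seconds) and cost (dollars) to process an image
    on a serial machine, inflating pure computation by a fractional
    overhead for I/O and system services."""
    seconds = n_pixels * ops_per_pixel / (mips * 1e6) * (1.0 + overhead)
    return seconds, seconds / 3600.0 * dollars_per_hour
```

    Comparing such estimates against available CPU hours is how one concludes, as the paper does, that a given workload overloads some systems but is feasible on others.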

  4. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of a parallel Jacobi (PJ) method is examined relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take excessive processing time to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
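    The two sequential iterations being compared can be sketched in 2-D (the paper's problem is 3-D and implemented in MATLAB; this numpy sketch is an illustrative analogue). Jacobi updates all points from the previous sweep, so it vectorizes and parallelizes naturally; Gauss-Seidel uses freshly updated neighbors within a sweep and typically converges in roughly half as many iterations.

```python
import numpy as np

def solve_poisson_2d(N, f, method="jacobi", tol=1e-8, max_iter=100000):
    """Iteratively solve the 5-point FDM discretization of
    -Laplace(u) = f on the unit square with u = 0 on the boundary.
    Returns the solution array and the number of iterations used."""
    h = 1.0 / (N + 1)
    u = np.zeros((N + 2, N + 2))
    for it in range(1, max_iter + 1):
        if method == "jacobi":
            u_new = u.copy()
            u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                        + u[1:-1, :-2] + u[1:-1, 2:]
                                        + h * h * f[1:-1, 1:-1])
            diff = np.max(np.abs(u_new - u))
            u = u_new
        else:  # gauss-seidel: sweep in place, using updated neighbors
            diff = 0.0
            for i in range(1, N + 1):
                for j in range(1, N + 1):
                    new = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1]
                                  + h * h * f[i, j])
                    diff = max(diff, abs(new - u[i, j]))
                    u[i, j] = new
        if diff < tol:
            return u, it
    return u, max_iter
```

    Both sweeps converge to the same discrete solution; the iteration counts illustrate the sequential advantage of Gauss-Seidel that the parallel Jacobi variant tries to offset.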

  5. Computational and mathematical methods in brain atlasing.

    PubMed

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  6. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25 Section 227.25 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in paragraph...

  7. Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4

    DTIC Science & Technology

    2017-12-20

    Excited states of nanomaterials are computed by first-principles methods in the software package ACES using large parallel computers, scaling toward the exascale. Subject terms: computer modeling, excited states, optical properties, structure, stability, activation barriers, first-principles methods, parallel computing. Reported progress includes new density functional methods.

  8. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  9. Computational composite mechanics for aerospace propulsion structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating (1) complex composite structural behavior in general and (2) specific aerospace propulsion structural components in particular.

  10. Computational composite mechanics for aerospace propulsion structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1987-01-01

    Specialty methods are presented for the computational simulation of specific composite behavior. These methods encompass all aspects of composite mechanics, impact, progressive fracture and component-specific simulation. Some of these methods are structured to computationally simulate, in parallel, the composite behavior and history from the initial fabrication through several missions and even to fracture. Select methods and typical results obtained from such simulations are described in detail in order to demonstrate the effectiveness of computationally simulating: (1) complex composite structural behavior in general, and (2) specific aerospace propulsion structural components in particular.

  11. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
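One well-known remedy for the inaccurate matrix entries the abstract describes is the "negative sum trick": instead of using the analytic formula for the diagonal of the differentiation matrix, fill it so that each row sums to zero (a constant function has zero derivative). A minimal NumPy sketch of this idea, not code from the paper (the helper name `cheb_diff_matrix` is ours):

```python
import numpy as np

def cheb_diff_matrix(N):
    """Chebyshev collocation differentiation matrix on the
    Gauss-Lobatto points x_j = cos(pi*j/N).  The diagonal is filled
    by the 'negative sum trick' (each row sums to zero), one common
    remedy for the roundoff problems described above."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    C = np.outer(c, 1.0 / c) * (-1.0) ** np.add.outer(j, j)
    dX = x[:, None] - x[None, :] + np.eye(N + 1)  # dummy 1s on the diagonal
    D = C / dX
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))           # negative sum trick
    return D, x

# Differentiating f(x) = exp(x) by matrix-vector multiplication:
D, x = cheb_diff_matrix(32)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))   # spectrally accurate
```

For smooth functions and moderate N, the derivative computed this way agrees with the exact one to near machine precision, which is the behavior the abstract reports once the matrix entries are computed accurately.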

  12. Computational methods for 2D materials: discovery, property characterization, and application design.

    PubMed

    Paul, J T; Singh, A K; Dong, Z; Zhuang, H; Revard, B C; Rijal, B; Ashton, M; Linscheid, A; Blonsky, M; Gluhovic, D; Guo, J; Hennig, R G

    2017-11-29

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber infrastructures. We then move on to the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials' electronic, optical, magnetic, and superconducting properties and the response of the properties under applied mechanical strain and electrical fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  13. Computational methods for 2D materials: discovery, property characterization, and application design

    NASA Astrophysics Data System (ADS)

    Paul, J. T.; Singh, A. K.; Dong, Z.; Zhuang, H.; Revard, B. C.; Rijal, B.; Ashton, M.; Linscheid, A.; Blonsky, M.; Gluhovic, D.; Guo, J.; Hennig, R. G.

    2017-11-01

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber infrastructures. We then move on to the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials’ electronic, optical, magnetic, and superconducting properties and the response of the properties under applied mechanical strain and electrical fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  14. Fractal Analysis of Rock Joint Profiles

    NASA Astrophysics Data System (ADS)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    Surface reliefs of rock joints are analyzed in geotechnics when shear strength of rocky slopes is estimated. The rock joint profiles actually are self-affine fractal curves and computations of their fractal dimensions require special methods. Many papers devoted to the fractal properties of these profiles were published in the past but only a few of those papers employed a convenient computational method that would have guaranteed a sound value of that dimension. As a consequence, anomalously low dimensions were presented. This contribution deals with two computational modifications that lead to sound fractal dimensions of the self-affine rock joint profiles. These are the modified box-counting method and the modified yard-stick method sometimes called the compass method. Both these methods are frequently applied to self-similar fractal curves but the self-affine profile curves due to their self-affine nature require modified computational procedures implemented in computer programs.
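For contrast with the modified methods the abstract advocates, here is the plain (unmodified) box-counting estimator: count the grid boxes a sampled curve occupies at several box sizes and fit the slope of log N(s) against log(1/s). This is exactly the naive estimator that, applied directly to self-affine profiles, can yield the anomalously low dimensions mentioned above; the helper name and the straight-line sanity check are ours, not from the paper:

```python
import numpy as np

def box_count_dimension(x, y, sizes):
    """Plain box-counting estimate of the fractal dimension of a
    sampled curve (x, y): count occupied grid boxes N(s) at several
    box sizes s and fit the slope of log N(s) versus log(1/s).
    NOTE: this is the *unmodified* estimator; for self-affine rock
    joint profiles the abstract argues the procedure must be modified
    (e.g. by rescaling the axes) to obtain a sound dimension."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    rx = x.max() - x.min()
    ry = y.max() - y.min()
    x = (x - x.min()) / (rx if rx > 0 else 1.0)
    y = (y - y.min()) / (ry if ry > 0 else 1.0)
    counts = []
    for s in sizes:
        boxes = {(int(xi / s), int(yi / s)) for xi, yi in zip(x, y)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a straight line should come out close to dimension 1.
xs = np.linspace(0.0, 1.0, 10001)
dim_line = box_count_dimension(xs, xs, sizes=[0.1, 0.05, 0.02, 0.01])
```

For a genuinely self-affine profile the two axes scale differently, so the occupied-box count depends on the (arbitrary) aspect ratio of the grid, which is why the modified box-counting and modified yard-stick procedures are needed.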

  15. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.

  16. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.

  17. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    PubMed

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
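The core idea of replacing a direct QR/SVD solve of the damped Levenberg-Marquardt system with a Krylov method can be sketched with SciPy's LSQR, which solves min ||J d + r||² + λ||d||² directly via its `damp` argument. This is only an illustration of the projection idea; the paper's subspace recycling across damping parameters and its parallel implementation are omitted, and the helper name `lm_krylov` and the toy exponential-fit problem are ours:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_krylov(residual, jacobian, p, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt iteration whose damped step is
    obtained with a Krylov solver (LSQR) instead of a direct QR/SVD
    factorization.  Subspace recycling and parallelism, the paper's
    main contributions, are deliberately left out of this sketch."""
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        # step d minimizes ||J d + r||^2 + lam * ||d||^2
        d = lsqr(J, -r, damp=np.sqrt(lam))[0]
        if np.linalg.norm(residual(p + d)) < np.linalg.norm(r):
            p, lam = p + d, lam * 0.5    # accept step, trust model more
        else:
            lam *= 10.0                  # reject step, damp harder
    return p

# Toy problem: fit y = a*exp(b*t) to noiseless data with a=2, b=-1.
t = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_hat = lm_krylov(res, jac, np.array([1.0, 0.0]))
```

Because LSQR only needs matrix-vector products with J, the same loop works unchanged when J is sparse or available only as a `LinearOperator`, which is where the Krylov formulation pays off for highly parameterized models.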

  19. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  20. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhaoyuan Liu; Kord Smith; Benoit Forget

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.

  1. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  2. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  3. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject, and the average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
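Two of the three approaches named in the abstract can be sketched compactly. The delta method propagates the parameter covariance V through the gradient of g; Krinsky-Robb instead simulates parameter draws from N(theta, V) and takes the standard deviation of g across draws. The paper's code is for Stata and LIMDEP; the Python below, including the toy function g and its assumed covariance matrix, is purely illustrative:

```python
import numpy as np

def delta_method_se(g, theta, V, h=1e-6):
    """Delta-method standard error of a scalar function g(theta):
    se = sqrt(grad g . V . grad g), with the gradient taken by
    central finite differences."""
    k = len(theta)
    grad = np.empty(k)
    for i in range(k):
        e = np.zeros(k)
        e[i] = h
        grad[i] = (g(theta + e) - g(theta - e)) / (2.0 * h)
    return float(np.sqrt(grad @ V @ grad))

def krinsky_robb_se(g, theta, V, draws=100_000, seed=0):
    """Krinsky-Robb: draw parameter vectors from N(theta, V) and take
    the standard deviation of g across the draws."""
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(theta, V, size=draws)
    return float(np.std([g(s) for s in sims]))

# Toy example: g(b) = exp(b0 + b1) with an assumed covariance matrix.
theta = np.array([0.5, -0.2])
V = np.array([[0.010, 0.002],
              [0.002, 0.040]])
g = lambda b: float(np.exp(b[0] + b[1]))
se_delta = delta_method_se(g, theta, V)   # ~0.314
se_kr = krinsky_robb_se(g, theta, V)      # close, but not identical
```

For a nonlinear g the two answers differ slightly (here by a few percent), which is the paper's point that the choice is usually, but not always, a matter of convenience.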

  4. Medical students’ attitudes and perspectives regarding novel computer-based practical spot tests compared to traditional practical spot tests

    PubMed Central

    Wijerathne, Buddhika; Rathnayake, Geetha

    2013-01-01

    Background Most universities currently practice traditional practical spot tests to evaluate students. However, traditional methods have several disadvantages. Computer-based examination techniques are becoming more popular among medical educators worldwide. Therefore incorporating the computer interface in practical spot testing is a novel concept that may minimize the shortcomings of traditional methods. Assessing students’ attitudes and perspectives is vital in understanding how students perceive the novel method. Methods One hundred and sixty medical students were randomly allocated to either a computer-based spot test (n=80) or a traditional spot test (n=80). The students rated their attitudes and perspectives regarding the spot test method soon after the test. The results were described comparatively. Results Students had higher positive attitudes towards the computer-based practical spot test compared to the traditional spot test. Their recommendations to introduce the novel practical spot test method for future exams and to other universities were statistically significantly higher. Conclusions The computer-based practical spot test is viewed as more acceptable to students than the traditional spot test. PMID:26451213

  5. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods, and a computer program, for computing the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based on the eigenvalues and on Routh-Hurwitz test functions, are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
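The eigenvalue instability criterion, and the probabilistic question built on top of it, can be illustrated with a single second-order mode. The sketch below uses plain Monte Carlo purely for clarity; the paper's contribution is precisely the much cheaper fast-probability-integration and adaptive importance-sampling estimators, and the damping distribution below is an assumption invented for this example:

```python
import numpy as np

def unstable(m, c, k):
    """Instability test for m*q'' + c*q' + k*q = 0 using the
    eigenvalue criterion: the system is unstable when any eigenvalue
    of the first-order state matrix has a positive real part."""
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    return bool(np.max(np.linalg.eigvals(A).real) > 0.0)

# Plain Monte Carlo estimate of the probability of instability for a
# single mode whose damping is uncertain and may go negative.  (The
# normal distribution of c is an assumption for illustration only.)
rng = np.random.default_rng(1)
c_draws = rng.normal(0.05, 0.05, 50_000)
p_unstable = np.mean([unstable(1.0, c, 4.0) for c in c_draws])
# For this system instability is exactly the event c < 0, so the
# estimate should be near Phi(-1) ~ 0.159.
```

Plain Monte Carlo needs many samples to resolve small instability probabilities, which is why importance sampling concentrated near the stability boundary is attractive for rotordynamics applications.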

  6. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  7. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  8. A finite element method to compute three-dimensional equilibrium configurations of fluid membranes: Optimal parameterization, variational formulation and applications

    NASA Astrophysics Data System (ADS)

    Rangarajan, Ramsharan; Gao, Huajian

    2015-09-01

    We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.

  9. Sub-domain methods for collaborative electromagnetic computations

    NASA Astrophysics Data System (ADS)

    Soudais, Paul; Barka, André

    2006-06-01

    In this article, we describe a sub-domain method for electromagnetic computations based on boundary element method. The benefits of the sub-domain method are that the computation can be split between several companies for collaborative studies; also the computation time can be reduced by one or more orders of magnitude especially in the context of parametric studies. The accuracy and efficiency of this technique is assessed by RCS computations on an aircraft air intake with duct and rotating engine mock-up called CHANNEL. Collaborative results, obtained by combining two sets of sub-domains computed by two companies, are compared with measurements on the CHANNEL mock-up. The comparisons are made for several angular positions of the engine to show the benefits of the method for parametric studies. We also discuss the accuracy of two formulations of the sub-domain connecting scheme using edge based or modal field expansion. To cite this article: P. Soudais, A. Barka, C. R. Physique 7 (2006).

  10. Application of theoretical methods to increase succinate production in engineered strains.

    PubMed

    Valderrama-Gomez, M A; Kreitmayer, D; Wolf, S; Marin-Sanguino, A; Kremling, A

    2017-04-01

    Computational methods have enabled the discovery of non-intuitive strategies to enhance the production of a variety of target molecules. In the case of succinate production, reviews covering the topic have not yet analyzed the impact and future potential that such methods may have. In this work, we review the application of computational methods to the production of succinic acid. We found that while a total of 26 theoretical studies were published between 2002 and 2016, only 10 studies reported the successful experimental implementation of any kind of theoretical knowledge. None of the experimental studies reported an exact application of the computational predictions. However, the combination of computational analysis with complementary strategies, such as directed evolution and comparative genome analysis, serves as a proof of concept and demonstrates that successful metabolic engineering can be guided by rational computational methods.

  11. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  12. Trapping of a micro-bubble by non-paraxial Gaussian beam: computation using the FDTD method.

    PubMed

    Sung, Seung-Yong; Lee, Yong-Gu

    2008-03-03

    Optical forces on a micro-bubble were computed using the finite-difference time-domain (FDTD) method. A non-paraxial Gaussian beam equation was used to represent the incident laser with high numerical aperture, common in optical tweezers. The electromagnetic field distribution around the micro-bubble was computed by the FDTD method, and the electromagnetic stress tensor on the surface of the micro-bubble was used to compute the optical forces. Analysis of the computational results revealed interesting relations between the radius of the circular trapping ring and the corresponding stability of the trap.

  13. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations whose solution is difficult to find, so an approximate solution is needed. There are two popular numerical approaches: Newton's method and quasi-Newton (QN) methods. Newton's method requires considerable computation time since it involves derivative (Jacobian) computations. QN methods overcome this drawback by replacing the derivative computation with direct function evaluations. The QN approach uses a Hessian matrix approximation, such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that shares the DFP formula's property of maintaining a positive definite Hessian approximation. The BFGS method requires large memory when executing the program, so an algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and L-BFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
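The memory saving that distinguishes L-BFGS from BFGS, keeping only the last m curvature pairs instead of a dense n-by-n Hessian approximation, can be demonstrated with SciPy's `L-BFGS-B` implementation. As a stand-in for the geographically weighted ordinal likelihood, the sketch below maximizes a plain binary logistic likelihood on synthetic data; the data-generating setup is ours, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, X, y):
    """Negative log-likelihood of a plain binary logistic model -- a
    simple stand-in for the geographically weighted ordinal logistic
    (GWOLR) likelihood, whose estimation problem has the same shape."""
    z = X @ beta
    # log(1 + exp(z)) computed stably via logaddexp
    return float(np.sum(np.logaddexp(0.0, z) - y * z))

# Synthetic data with known coefficients.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=(2000, 2))])
beta_true = np.array([0.5, -1.0, 2.0])
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

# L-BFGS keeps only the last few curvature pairs (maxcor, here 10)
# rather than a dense Hessian approximation -- the memory saving
# that motivates choosing L-BFGS over plain BFGS.
fit = minimize(neg_log_lik, np.zeros(3), args=(X, y),
               method="L-BFGS-B", options={"maxcor": 10})
```

With only 3 parameters the memory saving is of course irrelevant; the point is that the same call scales to the high-dimensional settings where storing a dense Hessian approximation becomes the bottleneck.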

  14. Introduction to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    Computational aeroacoustics (CAA) is introduced by presenting its definition, advantages, applications, and initial challenges. The effects of Mach number and Reynolds number on CAA are considered. The CAA method combines the methods of aeroacoustics and computational fluid dynamics.

  15. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236

  16. [Measurement of intracranial hematoma volume by personal computer].

    PubMed

    DU, Wanping; Tan, Lihua; Zhai, Ning; Zhou, Shunke; Wang, Rui; Xue, Gongshi; Xiao, An

    2011-01-01

    To explore a method for intracranial hematoma volume measurement on a personal computer. Forty cases of various intracranial hematomas were measured by computed tomography with quantitative software and by a personal computer with Photoshop CS3 software, respectively. The data from the two methods were analyzed and compared. There was no difference between the data from the computed tomography and the personal computer (P>0.05). A personal computer with Photoshop CS3 software can measure the volume of various intracranial hematomas precisely, rapidly, and simply. It should be recommended in clinical medicolegal identification.
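    The principle behind the Photoshop-based measurement, thresholded pixel counting slice by slice, can be sketched in a few lines. The attenuation window, pixel spacing, and synthetic data below are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    def hematoma_volume_ml(slices, threshold_hu, pixel_mm, slice_thickness_mm):
        """Estimate hematoma volume by pixel counting.
        slices: list of 2-D arrays of CT attenuation values (HU)
        threshold_hu: (low, high) attenuation window selecting the hematoma
        Returns volume in millilitres (1 mL = 1000 mm^3)."""
        low, high = threshold_hu
        total_mm3 = 0.0
        for img in slices:
            mask = (img >= low) & (img <= high)     # pixels inside the window
            area_mm2 = mask.sum() * pixel_mm ** 2   # hematoma area on this slice
            total_mm3 += area_mm2 * slice_thickness_mm
        return total_mm3 / 1000.0

    # Synthetic example: a 4 x 5 pixel "hematoma" on two 5-mm slices.
    img = np.full((10, 10), 30.0)
    img[2:6, 3:8] = 70.0
    volume = hematoma_volume_ml([img, img], (50.0, 90.0),
                                pixel_mm=1.0, slice_thickness_mm=5.0)
    ```

    Summing area times slice thickness over all slices is the same quantity the quantitative CT software reports, which is why the two methods agree when the segmentation matches.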

  17. Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen

    Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.

  18. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems, and other complex, often expensive computer systems. Access to such resources is often limited, unstable, and accompanied by various administrative problems. In addition, the variety of software and settings across computing clusters sometimes does not allow researchers to use unified program code; the code must be adapted for each configuration of the computing complex. The authors' practical experience has shown that a special control system for computations, with the possibility of remote use, can greatly simplify simulations and increase the performance of scientific research. In the current paper we present the principal idea of such a system and justify its efficiency.

  19. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision making.

  20. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods - is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  1. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  2. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Astrophysics Data System (ADS)

    Chamis, C. C.; Singhal, S. N.

    1993-02-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  3. Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results for a User Study

    DTIC Science & Technology

    2016-11-01

    Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results for a User Study. By Christopher J Garneau and Robert F Erbacher, US Army Research Laboratory, November 2016; period covered: January 2013–September 2015. Approved for public release.

  4. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1992-01-01

    Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three dimensional wings in transonic and subsonic flow. In addition, the method computes for comparison purposes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.

  5. Water demand forecasting: review of soft computing methods.

    PubMed

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. While ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has much more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  6. A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method

    NASA Astrophysics Data System (ADS)

    Barbieri, Ettore; Meo, Michele

    2012-05-01

    Novel numerical methods, known as meshless or meshfree methods and, in a wider perspective, partition of unity methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries, and crack propagation. However, meshfree methods still have significant limitations that prevent their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix, and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of computational times: the use of kd-trees for the neighbour search, sparse indexing of the node-point connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without loops or numerical routines.
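    The paper's key speed-up, vectorized inversion of many small moment matrices without explicit loops, can be illustrated with NumPy's batched linear algebra (a stand-in for the authors' Matlab code; the random symmetric positive definite matrices below are placeholders for true MLS/RKPM moment matrices):

    ```python
    import numpy as np

    # For each of N evaluation points, MLS/RKPM builds a small k x k moment
    # matrix. Stacking them as a (N, k, k) array lets numpy invert all of
    # them in a single vectorized call, with no Python-level loop.
    rng = np.random.default_rng(0)
    N, k = 1000, 3
    B = rng.standard_normal((N, k, k))
    M = B @ B.transpose(0, 2, 1) + 0.1 * np.eye(k)   # SPD stand-in matrices
    M_inv = np.linalg.inv(M)                          # batched inversion
    ```

    The same batching idea applies to assembling shape-function values: one vectorized operation over all evaluation points replaces a loop whose per-iteration overhead dominates in interpreted environments like Matlab.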

  7. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  8. A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

    PubMed Central

    Steinhauser, Martin O.; Hiermaier, Stefan

    2009-01-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiple scales is discussed. Then, computational methods used on different scales are briefly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey, in a tutorial-like fashion, some key issues including several MD optimization techniques. Thereafter, computational examples of the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid that are based on two different modeling approaches, and we discuss their respective assets and drawbacks with a view to their application on multiple scales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures, including star-polymers, biomacromolecules such as polyelectrolytes, and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering, where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467

  9. The use of National Weather Service Data to Compute the Dose to the MEOI.

    PubMed

    Vickers, Linda

    2018-05-01

    The Turner method is the benchmark method for computing the atmospheric stability class that is used to compute X/Q (s m⁻³). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent, because all of the data, calculations, and results are visible for verification and validation against the published literature.
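    For context, the stability class obtained from the Turner method selects the Pasquill-Gifford dispersion coefficients that enter the standard ground-level Gaussian-plume form of X/Q. A hedged sketch of that textbook formula (not the paper's worked example; the function name and arguments are assumptions):

    ```python
    import math

    def chi_over_q(u, sigma_y, sigma_z, y=0.0, H=0.0):
        """Ground-level Gaussian-plume relative concentration X/Q (s/m^3).
        u: wind speed (m/s); sigma_y, sigma_z: dispersion coefficients (m),
        in practice read from Pasquill-Gifford curves for the stability
        class given by the Turner method; y: crosswind offset (m);
        H: effective release height (m). Total ground reflection assumed."""
        return (math.exp(-y**2 / (2.0 * sigma_y**2))
                * math.exp(-H**2 / (2.0 * sigma_z**2))
                / (math.pi * sigma_y * sigma_z * u))

    # Centerline, ground-level release: X/Q = 1 / (pi * sigma_y * sigma_z * u)
    centerline = chi_over_q(u=2.0, sigma_y=10.0, sigma_z=5.0)
    ```

    Because sigma_y and sigma_z depend strongly on the stability class, an error in the class assignment propagates directly into X/Q, which is why the paper treats the Turner method as the benchmark.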

  10. Efficient method for computing the electronic transport properties of a multiterminal system

    NASA Astrophysics Data System (ADS)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compare very well with other state-of-the-art schemes.

  11. Interactive Computer Lessons for Introductory Economics: Guided Inquiry-From Supply and Demand to Women in the Economy.

    ERIC Educational Resources Information Center

    Miller, John; Weil, Gordon

    1986-01-01

    The interactive feature of computers is used to incorporate a guided inquiry method of learning introductory economics, extending the Computer Assisted Instruction (CAI) method beyond drills. (Author/JDH)

  12. A general method for calculating three-dimensional compressible laminar and turbulent boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J. A.

    1977-01-01

    The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary-layer method is general, however, and can also be used with a theoretically computed external velocity distribution. Several test cases were computed by this method, and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.

  13. Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    2014-01-01

    Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.

  14. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    PubMed

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method based on the Rainwater-Friend theory, were used to predict the value of thermal conductivity over all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low densities. However, its efficiency is considerably reduced in the mid-range of density, meaning that the model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction over all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, which gives rise to a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.

  15. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network

    PubMed Central

    Ghaderi, Forouzan; Ghaderi, Amir H.; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method based on the Rainwater-Friend theory, were used to predict the value of thermal conductivity over all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low densities. However, its efficiency is considerably reduced in the mid-range of density, meaning that the model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction over all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, which gives rise to a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose. PMID:29188217
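    The kind of one-hidden-layer ANN regression the paper applies can be sketched in plain NumPy. The synthetic density-conductivity curve, network size, and training schedule below are illustrative assumptions, not the paper's refrigerant data or architecture:

    ```python
    import numpy as np

    # Fit a small tanh network to a synthetic nonlinear curve standing in
    # for "thermal conductivity vs scaled density".
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 64)[:, None]          # scaled density
    y = 0.2 + 0.5 * x + 0.3 * np.sin(3.0 * x)       # synthetic target

    h = 8                                            # hidden units
    W1 = rng.standard_normal((1, h)); b1 = np.zeros(h)
    W2 = rng.standard_normal((h, 1)); b2 = np.zeros(1)

    lr = 0.1
    for _ in range(5000):
        a = np.tanh(x @ W1 + b1)                     # hidden activations
        pred = a @ W2 + b2
        err = pred - y
        # Backpropagation for mean-squared error (factor 2 absorbed in lr)
        gW2 = a.T @ err / len(x); gb2 = err.mean(0)
        da = (err @ W2.T) * (1.0 - a**2)             # tanh derivative
        gW1 = x.T @ da / len(x); gb1 = da.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
    ```

    Increasing h enlarges the family of nonlinear functions the network can represent, which is the paper's observation that accuracy improves with more hidden units.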

  16. On the minimum orbital intersection distance computation: a new effective method

    NASA Astrophysics Data System (ADS)

    Hedo, José M.; Ruíz, Manuel; Peláez, Jesús

    2018-06-01

    The computation of the Minimum Orbital Intersection Distance (MOID) is an old but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two requirements is presented.
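    Before any refined iterative scheme, a MOID can be bracketed by brute force: sample both ellipses over their true anomalies and take the minimum pairwise distance. The sketch below does exactly that (a coarse grid search, not the authors' method; the element ordering (a, e, i, raan, argp) is an assumption):

    ```python
    import numpy as np

    def orbit_points(a, e, i, raan, argp, n=720):
        """Sample points on a Keplerian ellipse (focus at origin) in 3-D."""
        nu = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # true anomaly
        r = a * (1.0 - e**2) / (1.0 + e * np.cos(nu))
        x, y = r * np.cos(nu), r * np.sin(nu)                   # perifocal frame
        def Rz(t):
            return np.array([[np.cos(t), -np.sin(t), 0.0],
                             [np.sin(t),  np.cos(t), 0.0],
                             [0.0, 0.0, 1.0]])
        def Rx(t):
            return np.array([[1.0, 0.0, 0.0],
                             [0.0, np.cos(t), -np.sin(t)],
                             [0.0, np.sin(t),  np.cos(t)]])
        R = Rz(raan) @ Rx(i) @ Rz(argp)                         # perifocal -> inertial
        return (R @ np.vstack([x, y, np.zeros_like(x)])).T

    def moid_brute(el1, el2, n=720):
        """Coarse MOID estimate: minimum distance over an n x n grid of
        true-anomaly pairs (a local refinement step would follow in practice)."""
        p1, p2 = orbit_points(*el1, n=n), orbit_points(*el2, n=n)
        d2 = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
        return float(np.sqrt(d2.min()))
    ```

    The O(n²) grid makes the need for fast dedicated methods obvious: screening a large catalogue pairwise at this cost is impractical, which is the motivation stated in the abstract.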

  17. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  18. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  19. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  20. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution, and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms; statistical methods for analysis of astrophysical data via optimal partitioning methods; electronic structure calculations on water-nucleic acid complexes; incorporation of structural information into genomic sequence analysis methods; and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  1. Funding growth in an age of austerity.

    PubMed

    Hamel, Gary; Getz, Gary

    2004-01-01

    Everyone knows that corporate growth--true growth, not just agglomeration--springs from innovation. And the common wisdom is that companies must spend lavishly on R&D if they are to innovate at all. But in these fiscally cautious times, where every line item of every budget in every company is under intense scrutiny, many organizations are doing just the opposite. They tighten their belts, subject nascent product-development programs to rigorous screening, and train R&D staffers to think in business terms so the researchers will be better able to decide whether an idea for a product or service is worth pursuing in the first place. Such efficiency measures are commendable, say authors Gary Hamel and Gary Getz. But frugality is not a growth strategy, they point out, and, in truth, there is very little correlation between corporate performance and the amount spent on innovation. Companies like Southwest, Cemex, and Shell Chemicals have shown that businesses don't have to spend a fortune on R&D to reap the benefits of innovation. To produce more growth per dollar invested, companies must produce more innovation per dollar invested. Hamel and Getz explain how businesses can dramatically improve their innovation yields. They offer these five imperatives: Increase the number of innovators among existing employees (whatever their job titles) by involving them in innovation processes and events. Focus on developing truly radical ideas--ones that change customers' expectations and behaviors and industry economics--not just incremental ideas. Look for innovation sources outside the organization, as well as inside. Increase the learning from small, low-risk experiments. And commit to long-term, consistent development efforts.

  2. Information: fuel for change.

    PubMed

    Tinker, J

    1992-01-01

    Fundamental changes in the attitude and mentality of humanity are necessary to change the reckless course of human progress and achieve sustainable development. The UN Conference on Environment and Development has been unable to develop the requisite educational information for this purpose. The Northern environment ministers know that the pursuit of endless consumption is escalating energy use and placing unsustainable demands on the planet's resources. The ministers from the South know their forests and fisheries are declining, that much of their soil is becoming less fertile every year, and that their copper, timber, fish, and bananas are being sold at a fraction of their real value. Environmentalists have been advocating more public awareness since before the Stockholm Conference on the Human Environment in 1972. Saving the rainforest requires understanding the causes of deforestation: avaricious logging companies and poor peasants searching for land. The rich in the North need a new frugalism to replace mindless consumerism, while the poor in the South need new alternatives to escape from the poverty that forces them to destroy their environment. These changes in attitude could come from nongovernmental organizations (NGOs) and civil society. Governments cannot decree socially and politically sustainable development. Political pluralism, indispensable within society, means NGOs, community groups, and responsive governments. Pluralism of information means a wider range of newspapers and magazines and a wider spectrum of views in the media. Since 1989 there has been a vigorous acceleration toward pluralism in the world, powered by NGOs. The many voices of NGOs have generated the dramatic political changes of recent years. Alternative news agencies and research institutes must propagate information to elucidate sustainable development.

  3. [Medical education for laity: Plutarch's contribution].

    PubMed

    Jori, Alberto

    2009-01-01

    In his treatise De tuenda sanitate praecepta (Ygieina paraggelmata: Prescriptions for Health), the Greek philosopher Plutarch of Chaeronea (b. about 45 A.D., d. about 125 A.D.) pursues two aims, which have a deep pedagogical character and are closely connected. To begin with, he would like to provide both his colleagues, the "philosophers" (the equivalent of today's "intellectuals") and politicians with some sanitary/medical suggestions, so that they may adopt a healthy life-style, and consequently avoid disease to the best of their ability. Plutarch thus proposes that "philosophers" be made aware of the opportunity, or better yet, of the necessity of learning some medical notions: in their general education (paideia), his colleagues should allow medicine its adequate space, at least in regard to the practical side which relates to a "life-regimen". At the same time, Plutarch wishes to impart a moral teaching: in order to remain in good health we must distance ourselves from irrational impulses and social conventions which induce us to practice detrimental behaviours. In this context, the author stresses the need to respect the principles of moderation--both medical and ethical: those of frugality, self-control, and naturalness. His advice is still valid and effective today. Within the background of Plutarch's treatise there is yet a third, implicit aim: to urge the physicians not to imprison themselves in their professional specialization, but rather to also acquire a philosophical education. Such education would indeed allow them to achieve a whole, "holistic" picture of man, who is at the same time soul and body.

  4. Vaunting the independent amateur: Scientific American and the representation of lay scientists.

    PubMed

    Johnston, Sean F

    2018-04-01

    This paper traces how media representations encouraged enthusiasts, youth and skilled volunteers to participate actively in science and technology during the twentieth century. It assesses how distinctive discourses about scientific amateurs positioned them with respect to professionals in shifting political and cultural environments. In particular, the account assesses the seminal role of a periodical, Scientific American magazine, in shaping and championing an enduring vision of autonomous scientific enthusiasms. Between the 1920s and 1970s, editors Albert G. Ingalls and Clair L. Stong shepherded generations of adult 'amateur scientists'. Their columns and books popularized a vision of independent non-professional research that celebrated the frugal ingenuity and skills of inveterate tinkerers. Some of these attributes have found more recent expression in present-day 'maker culture'. The topic consequently is relevant to the historiography of scientific practice, science popularization and science education. Its focus on independent non-professionals highlights political dimensions of agency and autonomy that have often been implicit for such historical (and contemporary) actors. The paper argues that the Scientific American template of adult scientific amateurism contrasted with other representations: those promoted by earlier periodicals and by a science education organization, Science Service, and by the national demands for recruiting scientific labour during and after the Second World War. The evidence indicates that advocates of the alternative models had distinctive goals and adapted their narrative tactics to reach their intended audiences, which typically were conceived as young persons requiring instruction or mentoring. By contrast, the monthly Scientific American columns established a long-lived and stable image of the independent lay scientist.

  5. A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Uccellini, L. W.

    1983-01-01

    In connection with the sigma coordinates introduced by Phillips (1957), problems can arise in computing the pressure gradient force accurately by finite differences. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves taking the difference between two large terms of opposite sign, which results in large truncation error. Several finite-difference methods have been designed and implemented to reduce this error. The present investigation provides another method of computing the sigma-coordinate pressure gradient force: Phillips' method of eliminating a hydrostatic component is applied to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.

  6. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    PubMed

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Computer-based image analysis methods for chest computed tomography (CT) in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists in correlating with pulmonary function and predicting mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  7. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    PubMed

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
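The key observation behind a PLSM-style fit is that, for a linear delay equation, the residual is linear in the polynomial coefficients, so the fit reduces to ordinary linear least squares over collocation points. The sketch below illustrates this on a pantograph-type equation y'(t) = -y(t) + 0.5*y(t/2), y(0) = 1; the specific equation, degree, and collocation grid are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

def plsm_pantograph(deg=8, npts=50):
    """Polynomial least-squares solution of the pantograph-type DDE
    y'(t) = -y(t) + 0.5*y(t/2), y(0) = 1, on [0, 1].  The residual is
    linear in the polynomial coefficients, so the fit is an ordinary
    linear least-squares problem over collocation points."""
    t = np.linspace(0.0, 1.0, npts)
    cols = []
    for k in range(deg + 1):
        dcol = k * t**(k - 1) if k > 0 else np.zeros_like(t)  # (t^k)'
        cols.append(dcol + t**k - 0.5 * (0.5 * t)**k)  # residual of basis t^k
    A = np.stack(cols, axis=1)
    # impose y(0) = 1 by fixing c0 = 1 and moving its column to the RHS
    c_rest, *_ = np.linalg.lstsq(A[:, 1:], -A[:, 0], rcond=None)
    return np.concatenate(([1.0], c_rest))

coeffs = plsm_pantograph()
```

The fitted polynomial then satisfies the equation to small residual over [0, 1], which is how accuracy would be checked in the absence of a closed-form solution.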

  8. Technology, Pedagogy, and Epistemology: Opportunities and Challenges of Using Computer Modeling and Simulation Tools in Elementary Science Methods

    ERIC Educational Resources Information Center

    Schwarz, Christina V.; Meyer, Jason; Sharma, Ajay

    2007-01-01

    This study infused computer modeling and simulation tools in a 1-semester undergraduate elementary science methods course to advance preservice teachers' understandings of computer software use in science teaching and to help them learn important aspects of pedagogy and epistemology. Preservice teachers used computer modeling and simulation tools…

  9. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle the billions of particles modeled on large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in each column to differ from row to row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual, and load balance is achieved by minimizing this residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Because it is computationally cheaper than non-iterative methods in terms of communication and memory costs, our iterative method is suitable for frequently adjusting the sub-domains by monitoring the performance of each computational process. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
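The core idea, treating per-process load imbalance as a residual to be iteratively relaxed, can be sketched in one dimension. This is a deliberate simplification of the paper's multi-dimensional orthogonal decomposition and multigrid smoother; all function names and parameters here are hypothetical.

```python
import numpy as np

def loads_per_domain(b, x, c):
    """Total particle cost in each 1-D sub-domain [b[i], b[i+1])."""
    return np.array([c[(x >= b[i]) & (x < b[i + 1])].sum()
                     for i in range(len(b) - 1)])

def rebalance_1d(b, x, c, iters=200, relax=0.4):
    """Iteratively relax interior sub-domain boundaries, treating the
    cost difference across each boundary as a residual to be smoothed."""
    b = b.astype(float).copy()
    for _ in range(iters):
        loads = loads_per_domain(b, x, c)
        for i in range(1, len(b) - 1):
            r = loads[i - 1] - loads[i]          # imbalance residual
            gap = 0.5 * (b[i + 1] - b[i - 1])
            # shift toward the heavier side, never past a neighbour
            b[i] -= relax * r / (loads[i - 1] + loads[i] + 1e-30) * gap
    return b

# clustered particles: the left half of the domain is 4x denser
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 0.5, 8000), rng.uniform(0.5, 1.0, 2000)])
c = np.ones_like(x)                        # unit cost per particle
b0 = np.linspace(0.0, 1.0, 5)              # 4 equal-width sub-domains
b1 = rebalance_1d(b0, x, c)
```

After relaxation the per-domain loads are close to uniform even though the initial equal-width decomposition was badly imbalanced; monitoring measured execution times instead of particle counts gives the in-situ variant described in the abstract.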

  10. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High performance computing of the Meshless Time Domain Method (MTDM) on a multi-GPU cluster using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba is investigated. Generally, the finite difference time domain (FDTD) method is adopted for numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems in complex domains. On the other hand, MTDM is easily adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance on the cluster. To reduce the computation time, the communication between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.

  11. Influence of computational domain size on the pattern formation of the phase field crystals

    NASA Astrophysics Data System (ADS)

    Starodumov, Ilya; Galenko, Peter; Alexandrov, Dmitri; Kropotin, Nikolai

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method represents one of the important directions of modern computational materials science. The method makes it possible to study the formation of stable or metastable crystal structures. In this paper, we study the effect of computational domain size on the crystal pattern formation obtained by computer simulation with the PFC method. We show that if the size of the computational domain is changed, the result of modeling may be a structure in a metastable phase instead of the pure stable state. The authors present a possible theoretical justification for the observed effect and explain how the PFC method might be modified to account for this phenomenon.

  12. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
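The restrict-solve-prolong pipeline at the heart of CGP can be sketched on a 1-D Poisson model problem. This is an illustrative reconstruction, not the authors' solver: it uses injection for restriction, a dense direct solve as the "black-box" Poisson solver, and linear interpolation for prolongation.

```python
import numpy as np

def poisson_solve(f, h):
    """Direct second-order FD solve of -u'' = f, u = 0 at both ends."""
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

def cgp_poisson(f_fine, x_fine, h):
    """One coarse-grid projection step: restrict the source term by
    injection, solve the Poisson problem on the 2h grid, and prolong
    the solution back to the fine grid by linear interpolation."""
    x_coarse, f_coarse = x_fine[1::2], f_fine[1::2]
    u_coarse = poisson_solve(f_coarse, 2.0 * h)
    # pad with the homogeneous Dirichlet boundary values before interpolating
    xs = np.concatenate(([x_fine[0] - h], x_coarse, [x_fine[-1] + h]))
    us = np.concatenate(([0.0], u_coarse, [0.0]))
    return np.interp(x_fine, xs, us)

N = 64; h = 1.0 / N
x = np.arange(1, N) * h                    # fine-grid interior nodes
f = np.pi**2 * np.sin(np.pi * x)           # exact solution: sin(pi*x)
u_cgp = cgp_poisson(f, x, h)
```

The coarse solve involves roughly half the unknowns (an eighth in 3-D) yet the interpolated solution remains second-order accurate, which is the source of the reported speedups when the Poisson solve dominates the time step.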

  13. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Method of computing annual volume of sales. 794.123 Section... of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during the... gross volume of sales in excess of the amount specified in the statute, it is plain that its annual...

  14. A computational algorithm addressing how vessel length might depend on vessel diameter

    Treesearch

    Jing Cai; Shuoxin Zhang; Melvin T. Tyree

    2010-01-01

    The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...

  15. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  16. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.

  17. In-situ trainable intrusion detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Symons, Christopher T.; Beaver, Justin M.; Gillen, Rob

    A computer implemented method detects intrusions using a computer by analyzing network traffic. The method includes a semi-supervised learning module connected to a network node. The learning module uses labeled and unlabeled data to train a semi-supervised machine learning sensor. The method records events that include a feature set made up of unauthorized intrusions and benign computer requests. The method identifies at least some of the benign computer requests that occur during the recording of the events while treating the remainder of the data as unlabeled. The method trains the semi-supervised learning module at the network node in-situ, such that the module may identify malicious traffic without relying on specific rules, signatures, or anomaly detection.

  18. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl component of the magnetic field intensity is computed by a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  19. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  20. Robust and High Order Computational Method for Parachute and Air Delivery and MAV System

    DTIC Science & Technology

    2017-11-01

    Robust and High Order Computational Method for Parachute and Air Delivery and MAV System. The report describes a model coupled with an incompressible fluid solver through the impulse method; the approach to simulating the parachute system is based on the front

  1. A new computational strategy for identifying essential proteins based on network topological properties and biological information.

    PubMed

    Qin, Chao; Sun, Yongqi; Dong, Yadong

    2017-01-01

    Essential proteins are the proteins that are indispensable to the survival and development of an organism. Deleting a single essential protein will cause lethality or infertility. Identifying and analysing essential proteins are key to understanding the molecular mechanisms of living cells. There are two types of methods for predicting essential proteins: experimental methods, which require considerable time and resources, and computational methods, which overcome the shortcomings of experimental methods. However, the prediction accuracy of computational methods for essential proteins requires further improvement. In this paper, we propose a new computational strategy named CoTB for identifying essential proteins based on a combination of topological properties, subcellular localization information and orthologous protein information. First, we introduce several topological properties of the protein-protein interaction (PPI) network. Second, we propose new methods for measuring orthologous information and subcellular localization and a new computational strategy that uses a random forest prediction model to obtain a probability score for the proteins being essential. Finally, we conduct experiments on four different Saccharomyces cerevisiae datasets. The experimental results demonstrate that our strategy for identifying essential proteins outperforms traditional computational methods and the most recently developed method, SON. In particular, our strategy improves the prediction accuracy to 89, 78, 79, and 85 percent on the YDIP, YMIPS, YMBD and YHQ datasets at the top 100 level, respectively.

  2. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Modern graphics processing units (GPUs) offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop per second. They are no longer merely powerful graphics engines but highly parallel programmable processors with far greater computing capability and memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid growth in GPU programmability and capability has attracted researchers dealing with complex problems that require intensive calculation, giving rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet cheap and affordable hardware, and have therefore become an alternative to conventional processors: graphics chips that were once fixed-function hardware have been transformed into modern, powerful, programmable processors. The main difficulty is that GPUs use programming models that differ from conventional methods; efficient GPU programming requires re-coding the existing algorithm to respect the limitations and structure of the graphics hardware, and multi-core processors cannot be programmed with traditional event-procedure techniques. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, performing such computations quickly and accurately, whereas CPUs, which execute one computation at a time under flow control, are slower. This study covers how the general-purpose parallel programming and computational power of GPUs can be used in photogrammetric applications, particularly direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were evaluated against the program's results. The GPGPU method is especially useful for repeating the same computations on highly dense data, reaching the solution quickly.

  3. Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey

    2012-01-01

    Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
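The Krylov approach amounts to approximating a matrix-function product S^{1/2} z without forming S^{1/2} and without eigenvalue estimates. A minimal Lanczos-based sketch for a symmetric positive definite matrix (a generic illustration, not the paper's implementation, which targets hydrodynamic mobility matrices at much larger scale) is:

```python
import numpy as np

def lanczos_sqrt_mv(A, z, m=30):
    """Approximate A^{1/2} z with m Lanczos steps (A symmetric positive
    definite).  Builds an orthonormal Krylov basis V and tridiagonal T,
    then uses ||z|| * V @ T^{1/2} e_1; no eigenvalue estimates of A are
    needed, unlike Chebyshev polynomial approaches."""
    n = len(z)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = z / np.linalg.norm(z)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)              # small m-by-m problem
    sqrtT_e1 = evecs @ (np.sqrt(evals) * evecs[0])
    return np.linalg.norm(z) * (V @ sqrtT_e1)

# well-conditioned SPD test matrix and a random "noise" vector
rng = np.random.default_rng(1)
B = rng.standard_normal((100, 100))
A = B @ B.T / 100 + np.eye(100)
z = rng.standard_normal(100)
approx = lanczos_sqrt_mv(A, z, m=30)
```

Each iteration costs one matrix-vector product, which is why the overall cost scales with the cost of applying A rather than with a dense Cholesky factorization.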

  4. Computer-Based Radiographic Quantification of Joint Space Narrowing Progression Using Sequential Hand Radiographs: Validation Study in Rheumatoid Arthritis Patients from Multiple Institutions.

    PubMed

    Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide

    2017-10-01

    We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with the joint space narrowing progression index (JSNPI) by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of the computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by the computer-based method, with visual scoring systems as a standard of reference. A JSNPI above the smallest detectable difference (SDD) defined JSN progression at the joint level. The sensitivity and specificity of the computer-based method for JSN progression were calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed in the computer-based method, 11 joints in the scoring systems, and 13 joints in both methods. Based on the SDD, sensitivity was lower and specificity higher, at 54.2% and 92.8%, respectively. At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity were 70.8% and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in follow-up assessment of joint damage in RA patients.

  5. Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; Hirata, Akimasa

    2012-12-01

    In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and the affected brain areas, depends on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.

  6. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  7. Hydrology Model Formulation within the Training Range Environmental Evaluation and Characterization System (TREECS)

    DTIC Science & Technology

    2014-02-01

    A hydrology model was developed to estimate precipitation, rainfall, runoff, evapotranspiration, infiltration, and the number of days with rainfall. Potential evapotranspiration (PET) is computed using the Thornthwaite method; actual evapotranspiration (ET) and infiltration are computed from a water balance.

  8. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion of the Laplace transform is used to obtain a semianalytic solution. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to price American put options through the optimal exercise price in the Laplace space. The methods are first compared on some simple functions to establish their accuracy and the parameters used in the calculation of American put options. The result obtained is the performance of each method in terms of accuracy and computational speed: the Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
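The "comparison on simple functions" step can be reproduced with a Durbin-style trapezoidal Fourier-series inversion, checked against a transform with a known inverse. This is a generic sketch of the Durbin family of methods, not the authors' code; the parameters T, a, and K are illustrative choices.

```python
import numpy as np

def laplace_invert(F, t, T=10.0, a=0.8, K=5000):
    """Durbin-style trapezoidal Fourier-series inversion of a Laplace
    transform F(s), valid for 0 < t < 2T; accuracy is governed by the
    damping parameter a*T and the number of series terms K."""
    k = np.arange(1, K + 1)
    s = a + 1j * k * np.pi / T                 # sample points on Re(s) = a
    Fk = F(s)
    series = (F(a + 0j).real / 2.0
              + np.sum(Fk.real * np.cos(k * np.pi * t / T)
                       - Fk.imag * np.sin(k * np.pi * t / T)))
    return np.exp(a * t) / T * series

# sanity check on F(s) = 1/(s+1), whose inverse transform is exp(-t)
approx = laplace_invert(lambda s: 1.0 / (s + 1.0), 2.0)
```

Benchmarking such a routine against known pairs like 1/(s+1) ↔ exp(-t) is exactly how the relative errors quoted in the abstract would be measured before applying the inversion to the option-pricing transform.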

  9. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  10. A class of hybrid finite element methods for electromagnetics: A review

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Chatterjee, A.; Gong, J.

    1993-01-01

    Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.

  11. Computer program modifications of Open-file report 82-1065; a comprehensive system for interpreting seismic-refraction and arrival-time data using interactive computer methods

    USGS Publications Warehouse

    Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.

    1983-01-01

    The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.

  12. Acoustic radiosity for computation of sound fields in diffuse environments

    NASA Astrophysics Data System (ADS)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2002-05-01

    The use of image and ray tracing methods (and variations thereof) for the computation of sound fields in rooms is relatively well developed. In their regime of validity, both methods work well for prediction in rooms with small amounts of diffraction and mostly specular reflection at the walls. While extensions to the method to include diffuse reflections and diffraction have been made, they are limited at best. In the fields of illumination and computer graphics the ray tracing and image methods are joined by another method called luminous radiative transfer or radiosity. In radiosity, an energy balance between surfaces is computed assuming diffuse reflection at the reflective surfaces. Because the interaction between surfaces is constant, much of the computation required for sound field prediction with multiple or moving source and receiver positions can be reduced. In acoustics the radiosity method has had little attention because of the problems of diffraction and specular reflection. The utility of radiosity in acoustics and an approach to a useful development of the method for acoustics will be presented. The method looks especially useful for sound level prediction in industrial and office environments. [Work supported by NSF.]

  13. SCREENING CHEMICALS FOR ESTROGEN RECEPTOR BIOACTIVITY USING A COMPUTATIONAL MODEL

    EPA Science Inventory

The U.S. Environmental Protection Agency (EPA) is considering the use of high-throughput and computational methods for regulatory applications in the Endocrine Disruptor Screening Program (EDSP). To use these new tools for regulatory decision making, computational methods must be a...

  14. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  15. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
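
The kind of tridiagonal solve at the heart of these diffusion computations can be sketched with a plain serial LDL^T factorization; the paper's vectorized odd-even cyclic-reduction variant reorders this same work to suit the Cyber 205, which this sketch does not attempt.

```python
def tridiag_ldlt_solve(d, e, b):
    """Solve an SPD tridiagonal system A x = b, with main diagonal d and
    off-diagonal e, via the factorization A = L D L^T."""
    n = len(d)
    D = [0.0] * n
    L = [0.0] * (n - 1)
    D[0] = d[0]
    for i in range(1, n):                   # factorization sweep
        L[i - 1] = e[i - 1] / D[i - 1]
        D[i] = d[i] - L[i - 1] * e[i - 1]
    y = list(b)                             # forward solve L y = b
    for i in range(1, n):
        y[i] -= L[i - 1] * y[i - 1]
    x = [y[i] / D[i] for i in range(n)]     # diagonal scale, then back solve L^T x = z
    for i in range(n - 2, -1, -1):
        x[i] -= L[i] * x[i + 1]
    return x
```

For the 1D Laplacian diag(2) with off-diagonal -1, solving against b = [0, 0, 4] recovers x = [1, 2, 3].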

  16. A New Computational Method to Fit the Weighted Euclidean Distance Model.

    ERIC Educational Resources Information Center

    De Leeuw, Jan; Pruzansky, Sandra

    1978-01-01

    A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)

  17. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the 'loop unrolling' technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
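
The column-height idea can be sketched as follows: if height[j] is the row of the first nonzero in column j, every inner product in the factorization can start at the taller of the two column profiles involved, skipping guaranteed zeros. This is a minimal dense illustration; real skyline codes also store only the profile, which this sketch does not.

```python
import math

def skyline_cholesky(A):
    """Cholesky factorization A = L L^T that skips work above each column's
    'height' (row of the first nonzero), as in variable-band/skyline storage."""
    n = len(A)
    # height[j]: row index of the first nonzero entry in column j
    height = [next(i for i in range(j + 1) if A[i][j] != 0 or i == j)
              for j in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            # inner products start at the taller of the two column heights
            k0 = max(height[i], height[j])
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(k0, j))
            if i == j:
                L[j][j] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return L
```

Because fill-in never extends above a column's height, the skipped products are exactly the zero terms the abstract says the method avoids.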

  19. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

The Newton method has shortcomings, which include the computation of the Jacobian matrix, which may be difficult or even impossible, and the need to solve the Newton system at every iteration. A common setback with some quasi-Newton methods is that they must compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome these drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse Hk of PSB is updated and its efficiency is improved, requiring only low memory storage; this is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
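
A hedged sketch of a PSB-updated quasi-Newton iteration follows. The seeding of the matrix from a finite-difference Jacobian, and the test problem, are choices of this illustration, not details from the paper (which updates the Jacobian inverse rather than the Jacobian itself).

```python
def solve_linear(M, rhs):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(rhs)
    A = [list(M[i]) + [rhs[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def psb_quasi_newton(F, x0, iters=50, h=1e-6, tol=1e-10):
    """Quasi-Newton solve of F(x) = 0 using the symmetric PSB update
    B+ = B + (r s^T + s r^T)/(s^T s) - (r^T s) s s^T / (s^T s)^2,  r = y - B s.
    B is seeded with a finite-difference Jacobian (a choice of this sketch)."""
    n = len(x0)
    x = list(x0)
    fx = F(x)
    B = [[(F([x[k] + (h if k == j else 0.0) for k in range(n)])[i] - fx[i]) / h
          for j in range(n)] for i in range(n)]
    for _ in range(iters):
        s = solve_linear(B, [-v for v in fx])        # quasi-Newton step
        xn = [x[i] + s[i] for i in range(n)]
        fn = F(xn)
        if max(abs(v) for v in fn) < tol:
            return xn
        y = [fn[i] - fx[i] for i in range(n)]
        Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
        r = [y[i] - Bs[i] for i in range(n)]
        ss = sum(v * v for v in s)
        rs = sum(r[i] * s[i] for i in range(n))
        for i in range(n):                           # PSB secant update of B
            for j in range(n):
                B[i][j] += (r[i] * s[j] + s[i] * r[j]) / ss \
                           - rs * s[i] * s[j] / (ss * ss)
        x, fx = xn, fn
    return x
```

After the seed, no further Jacobian evaluations are needed; the secant condition B+ s = y keeps the approximation current.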

  20. Method and system for redundancy management of distributed and recoverable digital control system

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2012-01-01

    A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.

  1. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute the near-field edge element coefficients associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far-field transformation (NTFFT), which is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method remains accurate even for a dielectrically coated PEC sphere at an interior resonance frequency, a common problem for computational electromagnetics codes.

  2. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  3. A computer-controlled scintiscanning system and associated computer graphic techniques for study of regional distribution of blood flow.

    NASA Technical Reports Server (NTRS)

    Coulam, C. M.; Dunnette, W. H.; Wood, E. H.

    1970-01-01

    Two methods whereby a digital computer may be used to regulate a scintiscanning process are discussed from the viewpoint of computer input-output software. The computer's function, in this case, is to govern the data acquisition and storage, and to display the results to the investigator in a meaningful manner, both during and subsequent to the scanning process. Several methods (such as three-dimensional maps, contour plots, and wall-reflection maps) have been developed by means of which the computer can graphically display the data on-line, for real-time monitoring purposes, during the scanning procedure and subsequently for detailed analysis of the data obtained. A computer-governed method for converting scintiscan data recorded over the dorsal or ventral surfaces of the thorax into fractions of pulmonary blood flow traversing the right and left lungs is presented.

  4. Integrating structure-based and ligand-based approaches for computational drug design.

    PubMed

    Wilson, Gregory L; Lill, Markus A

    2011-04-01

    Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.

  5. Comparison of Knowledge and Attitudes Using Computer-Based and Face-to-Face Personal Hygiene Training Methods in Food Processing Facilities

    ERIC Educational Resources Information Center

    Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.

    2006-01-01

    Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…

  6. [Economic efficiency of computer monitoring of health].

    PubMed

    Il'icheva, N P; Stazhadze, L L

    2001-01-01

Presents a method of computer monitoring of health based on the use of modern information technologies in public health. The method helps an outpatient clinic organize preventive activities at a high level and substantially reduces losses of time and money. The efficiency of such preventive measures and the increasing number of computer and Internet users suggest that these methods are promising and that further studies in this field are needed.

  7. CSM research: Methods and application studies

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    1989-01-01

    Computational mechanics is that discipline of applied science and engineering devoted to the study of physical phenomena by means of computational methods based on mathematical modeling and simulation, utilizing digital computers. The discipline combines theoretical and applied mechanics, approximation theory, numerical analysis, and computer science. Computational mechanics has had a major impact on engineering analysis and design. When applied to structural mechanics, the discipline is referred to herein as computational structural mechanics. Complex structures being considered by NASA for the 1990's include composite primary aircraft structures and the space station. These structures will be much more difficult to analyze than today's structures and necessitate a major upgrade in computerized structural analysis technology. NASA has initiated a research activity in structural analysis called Computational Structural Mechanics (CSM). The broad objective of the CSM activity is to develop advanced structural analysis technology that will exploit modern and emerging computers, such as those with vector and/or parallel processing capabilities. Here, the current research directions for the Methods and Application Studies Team of the Langley CSM activity are described.

  8. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  9. Aeroelasticity of wing and wing-body configurations on parallel computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup

    1995-01-01

    The objective of this research is to develop computationally efficient methods for solving aeroelasticity problems on parallel computers. Both uncoupled and coupled methods are studied in this research. For the uncoupled approach, the conventional U-g method is used to determine the flutter boundary. The generalized aerodynamic forces required are obtained by the pulse transfer-function analysis method. For the coupled approach, the fluid-structure interaction is obtained by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures. This capability will significantly impact many aerospace projects of national importance such as Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical at the transonic region. This research effort will have direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.

  10. A computing method for spatial accessibility based on grid partition

    NASA Astrophysics Data System (ADS)

    Ma, Linbing; Zhang, Xinchang

    2007-06-01

An accessibility computing method and process based on grid partition is put forward in this paper. Two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses, are integrated into the computation of the traffic cost in each grid cell. An A* algorithm is introduced to search for the optimum traffic-cost path over the grid cells; the detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its source data are easily obtained. Moreover, by changing the heuristic search information, more reasonable results can be computed. To validate the research, a software package was developed in C# under the ArcEngine 9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou was carried out.
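
The grid search step can be sketched with a textbook A* over per-cell traffic costs. The heuristic below, Manhattan distance times the cheapest cell cost, is an illustrative admissible choice and not the paper's evaluation function.

```python
import heapq

def a_star_grid(cost, start, goal):
    """A* over a grid; cost[r][c] is the cost to enter cell (r, c), None blocks."""
    rows, cols = len(cost), len(cost[0])
    min_cost = min(c for row in cost for c in row if c is not None)
    def h(p):  # admissible heuristic: Manhattan distance times cheapest cell
        return (abs(p[0] - goal[0]) + abs(p[1] - goal[1])) * min_cost
    best = {start: 0}
    frontier = [(h(start), start)]
    while frontier:
        f, node = heapq.heappop(frontier)
        if node == goal:
            return best[node]
        if f - h(node) > best.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                g = best[node] + cost[nr][nc]
                if g < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = g
                    heapq.heappush(frontier, (g + h((nr, nc)), (nr, nc)))
    return None  # goal unreachable
```

Changing h changes which cells are expanded first, which is the "heuristic search information" lever the abstract mentions.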

  11. New approach to canonical partition functions computation in Nf=2 lattice QCD at finite baryon density

    NASA Astrophysics Data System (ADS)

    Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.

    2017-05-01

We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a procedure of a few steps. We first compute numerically the quark number density for imaginary chemical potential iμ_q^I. Then we restore the grand canonical partition function for imaginary chemical potential using a fitting procedure for the quark number density. Finally, we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as the confining phases. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
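
The Fourier step can be sketched in a few lines: at imaginary chemical potential the grand canonical partition function is a Fourier series in the canonical ones, Z_GC(iθ) = Σ_n Z_n e^{inθ}, so each Z_n is recovered by a numerical Fourier integral. The test function below is a made-up Z_GC whose coefficients are known modified Bessel values, not lattice data.

```python
import cmath, math

def canonical_zn(zgc_imag, n, samples=4096):
    """Invert the fugacity expansion Z_GC(i*theta) = sum_n Z_n e^{i n theta}
    by a numerical Fourier transform:
    Z_n = (1/2pi) * integral of Z_GC(i*theta) e^{-i n theta} d(theta)."""
    acc = 0.0 + 0.0j
    for k in range(samples):
        theta = 2.0 * math.pi * k / samples
        acc += zgc_imag(theta) * cmath.exp(-1j * n * theta)
    return (acc / samples).real
```

For periodic analytic integrands the equal-spaced rule converges spectrally, which is why high precision is attainable in this step.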

  12. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to the equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

  13. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.
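
The algebraic-combination idea can be illustrated on a single reaction: for the toy equilibrium O2 ⇌ 2O, substituting the pressure balance into the equilibrium relation yields a closed-form root instead of an iterative minimization. The constant K here is hypothetical; the actual method combines seven equilibrium and four balance equations for eleven species.

```python
import math

def o2_dissociation(K, P):
    """Partial pressures for the toy equilibrium O2 <-> 2 O subject to
    p_O^2 / p_O2 = K (equilibrium) and p_O + p_O2 = P (pressure balance).
    Substitution gives the quadratic p_O^2 + K*p_O - K*P = 0, solved directly."""
    p_o = (-K + math.sqrt(K * K + 4.0 * K * P)) / 2.0
    return p_o, P - p_o
```

Eliminating unknowns algebraically before any iteration is what makes the full 11-species scheme faster than free-energy minimization.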

  14. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features, or face patterns, called face pattern words and face pattern bytes for face identification. The invention also provides pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.

  15. Computation of elementary modes: a unifying framework and the new binary approach

    PubMed Central

    Gagneur, Julien; Klamt, Steffen

    2004-01-01

Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. Recent years have seen a multiplication of algorithms dedicated to this task; a unifying point of view and continued improvement of the current methods are required. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand up to 96% without loss of speed giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
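
On a toy network, elementary modes can be enumerated by brute force as support-minimal kernel vectors of the stoichiometric matrix, mirroring the extreme-ray characterization. All reactions are assumed irreversible here, and real networks require the far more efficient algorithms the paper benchmarks; this is only a sketch of the definition.

```python
from fractions import Fraction
from itertools import combinations

def kernel(S, cols):
    """Exact RREF (rational arithmetic) of S restricted to `cols`;
    returns a basis of the kernel as vectors of length len(cols)."""
    m, k = len(S), len(cols)
    M = [[Fraction(S[r][c]) for c in cols] for r in range(m)]
    pivots, prow = [], 0
    for c in range(k):
        piv = next((r for r in range(prow, m) if M[r][c] != 0), None)
        if piv is None:
            continue
        M[prow], M[piv] = M[piv], M[prow]
        M[prow] = [v / M[prow][c] for v in M[prow]]
        for r in range(m):
            if r != prow and M[r][c] != 0:
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[prow])]
        pivots.append(c)
        prow += 1
    basis = []
    for free in (c for c in range(k) if c not in pivots):
        v = [Fraction(0)] * k
        v[free] = Fraction(1)
        for r, pc in enumerate(pivots):
            v[pc] = -M[r][free]
        basis.append(v)
    return basis

def elementary_modes(S, n_reactions):
    """Brute-force elementary modes (all reactions irreversible): one-dimensional,
    strictly signed kernels over support-minimal reaction subsets."""
    modes = []
    for size in range(1, n_reactions + 1):
        for supp in combinations(range(n_reactions), size):
            if any(set(m) <= set(supp) for m, _ in modes):
                continue  # a smaller mode fits inside: not support-minimal
            ker = kernel(S, supp)
            if len(ker) != 1:
                continue
            v = ker[0]
            if all(x > 0 for x in v) or all(x < 0 for x in v):
                modes.append((supp, [abs(x) for x in v]))
    return modes
```

The binary patterns of participating reactions are exactly the supports returned here; the rational coefficients are the post-processing step.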

  16. Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.

    1989-01-01

    The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  17. Reciprocal Questioning and Computer-based Instruction in Introductory Auditing: Student Perceptions.

    ERIC Educational Resources Information Center

    Watters, Mike

    2000-01-01

    An auditing course used reciprocal questioning (Socratic method) and computer-based instruction. Separate evaluations by 67 students revealed a strong aversion to the Socratic method; students expected professors to lecture. They showed a strong preference for the computer-based assignment. (SK)

  18. ELECTRONIC DIGITAL COMPUTER

    DOEpatents

    Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

    1957-10-01

    The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
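
The "method of successive approximations" the patent describes corresponds, in modern terminology, to the Gauss-Seidel iteration (the abstract's "Von Seidel"), which can be sketched in software as:

```python
def gauss_seidel(A, b, sweeps=100):
    """Successive-approximation solve of A x = b: each sweep updates every
    unknown in turn, immediately reusing the newest available values.
    Converges, e.g., for diagonally dominant A."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

Each unknown is recomputed from the current estimates of the others, which is the summation the patent's hardware performs.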

  19. A simplified analysis of propulsion installation losses for computerized aircraft design

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.; Nelms, W. P., Jr.; Bailey, R. O.

    1976-01-01

    A simplified method is presented for computing the installation losses of aircraft gas turbine propulsion systems. The method has been programmed for use in computer aided conceptual aircraft design studies that cover a broad range of Mach numbers and altitudes. The items computed are: inlet size, pressure recovery, additive drag, subsonic spillage drag, bleed and bypass drags, auxiliary air systems drag, boundary-layer diverter drag, nozzle boattail drag, and the interference drag on the region adjacent to multiple nozzle installations. The methods for computing each of these installation effects are described and computer codes for the calculation of these effects are furnished. The results of these methods are compared with selected data for the F-5A and other aircraft. The computer program can be used with uninstalled engine performance information which is currently supplied by a cycle analysis program. The program, including comments, is about 600 FORTRAN statements long, and uses both theoretical and empirical techniques.

  20. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  1. A Metric for Reducing False Positives in the Computer-Aided Detection of Breast Cancer from Dynamic Contrast-Enhanced Magnetic Resonance Imaging Based Screening Examinations of High-Risk Women.

    PubMed

    Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L

    2016-02-01

    Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.

  2. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a system of differential equations to an equivalent system in a given model. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  3. Computing pKa Values in Different Solvents by Electrostatic Transformation.

    PubMed

    Rossini, Emanuele; Netz, Roland R; Knapp, Ernst-Walter

    2016-07-12

We introduce a method that requires only moderate computational effort to compute pKa values of small molecules in different solvents with an average accuracy of better than 0.7 pH units. With a known pKa value in one solvent, the electrostatic transform method computes the pKa value in any other solvent if the proton solvation energy is known in both considered solvents. To apply the electrostatic transform method to a molecule, the electrostatic solvation energies of the protonated and deprotonated molecular species are computed in the two considered solvents using a dielectric continuum to describe the solvent. This is demonstrated for 30 molecules belonging to 10 different molecular families by considering 77 measured pKa values in 4 different solvents: water, acetonitrile, dimethyl sulfoxide, and methanol. The electrostatic transform method can be applied to any other solvent if the proton solvation energy is known. It is exclusively based on physicochemical principles, not using any empirical fudge factors or explicit solvent molecules, to obtain agreement with measured pKa values and is therefore ready to be generalized to other solute molecules and solvents. From the computed pKa values, we obtained relative proton solvation energies, which agree very well with the proton solvation energies computed recently by ab initio methods, and used these energies in the present study.
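
The transform step amounts to thermodynamic-cycle arithmetic: a change in the deprotonation free energy between two solvents converts directly into a pKa shift through the factor RT ln 10. The bookkeeping of terms below is an illustration of such a cycle, not the paper's exact formula, and the numbers fed to it would be hypothetical.

```python
import math

GAS_CONST = 8.314462618e-3  # kJ/(mol K)

def transform_pka(pka_s1, ddg_deprot_kj, T=298.15):
    """Shift a pKa from solvent S1 to solvent S2 given the change in the
    deprotonation free energy between the two solvents (kJ/mol), e.g.
    ddg_deprot = dG_solv(A-) - dG_solv(HA) + dG_solv(H+), each term being
    the S2-minus-S1 solvation-energy difference (illustrative bookkeeping)."""
    return pka_s1 + ddg_deprot_kj / (GAS_CONST * T * math.log(10.0))
```

A free-energy change of RT ln 10 (about 5.7 kJ/mol at 298 K) shifts the pKa by exactly one unit, which sets the scale of accuracy the solvation energies must reach.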

  4. Moving Computational Domain Method and Its Application to Flow Around a High-Speed Car Passing Through a Hairpin Curve

    NASA Astrophysics Data System (ADS)

    Watanabe, Koji; Matsuno, Kenichi

    This paper presents a new method for simulating flows driven by a body traveling with neither a restriction on its motion nor a limit on the region size. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves through physical space without any limit on region size. Since the entire grid of the computational domain moves according to the movement of the body, the flow solver must be constructed on a moving grid system, and it is important for the solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this reason, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate flows driven by any kind of body motion in a region of any size while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve is demonstrated in detail, and the results show the promising features of the method.

  5. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian elimination on parallel computers; (3) three-dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  6. The use of computers in a materials science laboratory

    NASA Technical Reports Server (NTRS)

    Neville, J. P.

    1990-01-01

    The objective is to make available a method of easily recording the microstructure of a sample by means of a computer. The method requires a minimum investment and little or no instruction on the operation of a computer. An outline of the setup involving a black and white TV camera, a digitizer control box, a metallurgical microscope and a computer screen, printer, and keyboard is shown.

  7. Renormalized stress-energy tensor for stationary black holes

    NASA Astrophysics Data System (ADS)

    Levi, Adam

    2017-01-01

    We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t-splitting variant of the method, which was first presented for ⟨ϕ²⟩_ren, to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example of regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on a Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.

  8. Simultaneous computation of jet turbulence and noise

    NASA Technical Reports Server (NTRS)

    Berman, C. H.; Ramos, J. I.

    1989-01-01

    The existing flow computation methods, wave computation techniques, and theories based on noise source models are reviewed in order to assess the capabilities of numerical techniques to compute jet turbulence noise and understand the physical mechanisms governing it over a range of subsonic and supersonic nozzle exit conditions. In particular, attention is given to (1) methods for extrapolating near field information, obtained from flow computations, to the acoustic far field and (2) the numerical solution of the time-dependent Lilley equation.

  9. Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.

    2005-01-01

    A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
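
    Bakhvalov's polynomial-evaluation scheme itself is not reproduced here, but the target quantity is easy to pin down: the DCT-I of a signal, which any candidate algorithm must reproduce. The sketch below cross-checks SciPy's routine against the textbook definition X_k = x_0 + (-1)^k x_{N-1} + 2 Σ_{n=1}^{N-2} x_n cos(πnk/(N-1)):

    ```python
    import numpy as np
    from scipy.fft import dct

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    N = len(x)

    # Library DCT-I (unnormalized).
    X_lib = dct(x, type=1)

    # Direct evaluation of the DCT-I definition.
    X_def = np.array([
        x[0] + (-1) ** k * x[-1]
        + 2 * sum(x[n] * np.cos(np.pi * n * k / (N - 1)) for n in range(1, N - 1))
        for k in range(N)
    ])

    assert np.allclose(X_lib, X_def)
    ```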

  10. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, improving computational efficiency and reducing the difficulty of analysis. In view of this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, improving the detection efficiency of ship targets in remote sensing images.

  11. A comparison of transport algorithms for premixed, laminar steady state flames

    NASA Technical Reports Server (NTRS)

    Coffee, T. P.; Heimerl, J. M.

    1980-01-01

    The effects of different methods of approximating multispecies transport phenomena in models of premixed, laminar, steady state flames were studied. Five approximation methods that span a wide range of computational complexity were developed. Identical data for individual species properties were used for each method. Each approximation method is employed in the numerical solution of a set of five H2-O2-N2 flames. For each flame the computed species and temperature profiles, as well as the computed flame speeds, are found to be very nearly independent of the approximation method used. This does not indicate that transport phenomena are unimportant, but rather that the selection of the input values for the individual species transport properties is more important than the selection of the method used to approximate the multispecies transport. Based on these results, a sixth approximation method was developed that is computationally efficient and provides results extremely close to the most sophisticated and precise method used.

  12. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
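
    The Shortlist Method itself is not reproduced here, but the baseline it accelerates can be sketched: the transportation problem posed as a linear program and handed to a general solver (SciPy's `linprog`; the supplies, demands, and costs below are illustrative, and a production EMD solver would use a dedicated simplex as in the paper):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Small transportation problem: 2 supply nodes, 3 demand nodes.
    supply = np.array([7.0, 3.0])
    demand = np.array([4.0, 4.0, 2.0])
    cost = np.array([[4.0, 2.0, 5.0],
                     [3.0, 1.0, 6.0]])

    m, n = cost.shape
    # Equality constraints on the flattened plan x[i*n + j]:
    # each row sums to its supply, each column to its demand.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([supply, demand])

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    plan = res.x.reshape(m, n)
    print(res.fun)   # minimal total transport cost
    ```

    For these data the optimum ships all of the cheap second supply into the transportation plan first, for a total cost of 31.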

  13. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106

  14. Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector

    NASA Astrophysics Data System (ADS)

    Wang, Hongbin; Feng, Yinhan; Cheng, Liang

    2018-03-01

    Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition, and question answering systems. Thai is a resource-scarce language; unlike Chinese, it lacks resources such as HowNet and CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method to compute the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependency to calculate the syntactic structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, the two measures are combined into an overall similarity for the two Thai sentences. The proposed method considers not only semantics but also sentence syntactic structure. The experimental results show that this method is feasible for Thai sentence similarity computation.
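
    The combination step can be sketched as a weighted sum of the two scores. Everything here is illustrative rather than the paper's formulation: the weight `alpha`, the mean-pooling of word vectors into a sentence vector, and the assumption that the syntactic score is already computed elsewhere.

    ```python
    import numpy as np

    def cosine(u, v):
        """Cosine similarity of two non-zero vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def sentence_similarity(syn_sim, word_vecs_a, word_vecs_b, alpha=0.5):
        """Blend a precomputed syntactic similarity with semantic similarity.

        Semantic similarity is the cosine between mean-pooled word vectors;
        alpha weights syntax against semantics (illustrative choices).
        """
        sem_a = np.mean(word_vecs_a, axis=0)
        sem_b = np.mean(word_vecs_b, axis=0)
        sem_sim = cosine(sem_a, sem_b)
        return alpha * syn_sim + (1 - alpha) * sem_sim
    ```

    Two identical sentences (syntactic score 1, identical word vectors) score 1 under any alpha, which is a quick sanity check on the blend.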

  15. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.

  16. An O(N squared) method for computing the eigensystem of N by N symmetric tridiagonal matrices by the divide and conquer approach

    NASA Technical Reports Server (NTRS)

    Gill, Doron; Tadmor, Eitan

    1988-01-01

    An efficient method is proposed to solve the eigenproblem of N by N Symmetric Tridiagonal (ST) matrices. Unlike the standard eigensolvers which necessitate O(N cubed) operations to compute the eigenvectors of such ST matrices, the proposed method computes both the eigenvalues and eigenvectors with only O(N squared) operations. The method is based on serial implementation of the recently introduced Divide and Conquer (DC) algorithm. It exploits the fact that by O(N squared) of DC operations, one can compute the eigenvalues of N by N ST matrix and a finite number of pairs of successive rows of its eigenvector matrix. The rest of the eigenvectors--all of them or one at a time--are computed by linear three-term recurrence relations. Numerical examples are presented which demonstrate the superiority of the proposed method by saving an order of magnitude in execution time at the expense of sacrificing a few orders of accuracy.
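
    The problem setup is easy to reproduce with modern library routines: a symmetric tridiagonal matrix is defined by its diagonal and off-diagonal, and a specialized solver returns all eigenpairs. Note this uses LAPACK's tridiagonal driver as a stand-in, not necessarily the specific divide-and-conquer variant described above.

    ```python
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    # Random symmetric tridiagonal matrix: main diagonal d, off-diagonal e.
    rng = np.random.default_rng(0)
    N = 8
    d = rng.standard_normal(N)
    e = rng.standard_normal(N - 1)

    # Specialized solver: eigenvalues w (ascending) and eigenvectors v.
    w, v = eigh_tridiagonal(d, e)

    # Cross-check against the dense symmetric solver.
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    assert np.allclose(w, np.linalg.eigvalsh(T))
    # Each column of v is an eigenvector: T v = v diag(w).
    assert np.allclose(T @ v, v @ np.diag(w))
    ```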

  17. METHODOLOGICAL NOTES: Computer viruses and methods of combatting them

    NASA Astrophysics Data System (ADS)

    Landsberg, G. L.

    1991-02-01

    This article examines the current virus situation for personal computers and time-sharing computers. Basic methods of combatting viruses are presented. Specific recommendations are given to eliminate the most widespread viruses. A short description is given of a universal antiviral system, PHENIX, which has been developed.

  18. Structure-sequence based analysis for identification of conserved regions in proteins

    DOEpatents

    Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth

    2013-05-28

    Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.

  19. Computing Fiber/Matrix Interfacial Effects In SiC/RBSN

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Hopkins, Dale A.

    1996-01-01

    Computational study conducted to demonstrate use of boundary-element method in analyzing effects of fiber/matrix interface on elastic and thermal behaviors of representative laminated composite materials. In study, boundary-element method implemented by Boundary Element Solution Technology - Composite Modeling System (BEST-CMS) computer program.

  20. Computer Human Interaction for Image Information Systems.

    ERIC Educational Resources Information Center

    Beard, David Volk

    1991-01-01

    Presents an approach to developing viable image computer-human interactions (CHI) involving user metaphors for comprehending image data and methods for locating, accessing, and displaying computer images. A medical-image radiology workstation application is used as an example, and feedback and evaluation methods are discussed. (41 references) (LRW)

  1. Symposium on Parallel Computational Methods for Large-scale Structural Analysis and Design, 2nd, Norfolk, VA, US

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)

    1993-01-01

    Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.

  2. Optimal subsystem approach to multi-qubit quantum state discrimination and experimental investigation

    NASA Astrophysics Data System (ADS)

    Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun

    2018-02-01

    Quantum computing offers computing capability superior to classical computing because of superposition. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases the quantum states tend to be non-orthogonal due to superposition, and quantum mechanics proves that such states cannot be discriminated perfectly by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measuring method, one requiring fewer repetitions and a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing confronts challenges such as conducting global operations, which are costly in the experimental realm. Therefore, in this study, we propose an optimum subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method and the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced in most circumstances, with an acceptable error rate. We believe the optimum subsystem method is a valuable and promising approach for multi-qubit quantum computing applications.

  3. SAM 2.1—A computer program for plotting and formatting surveying data for estimating peak discharges by the slope-area method

    USGS Publications Warehouse

    Hortness, J.E.

    2004-01-01

    The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, inherent danger of making measurements during flood events, and timing often associated with flood events. Thus, many peak discharge values often are calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time demanding; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
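
    SAM 2.1 itself formats survey data, but the downstream slope-area computation rests on Manning's equation. A one-section sketch of the core relation (SI units; a real slope-area computation such as SAC uses multiple cross sections and iterates on velocity-head and friction-loss terms, all of which are omitted here):

    ```python
    def conveyance(n, area, wetted_perimeter):
        """Channel conveyance K = (1/n) * A * R^(2/3), SI units.

        n: Manning roughness, area: flow area (m^2),
        wetted_perimeter: wetted perimeter (m).
        """
        R = area / wetted_perimeter          # hydraulic radius (m)
        return (1.0 / n) * area * R ** (2.0 / 3.0)

    def slope_area_discharge(n, area, wetted_perimeter, fall, reach_length):
        """One-section slope-area estimate Q = K * sqrt(S), with S = fall / L.

        fall: water-surface drop over the reach (m), reach_length: L (m).
        """
        S = fall / reach_length              # energy-gradient approximation
        return conveyance(n, area, wetted_perimeter) * S ** 0.5
    ```

    For a rough natural channel (n = 0.035) with 20 m² of flow area, a 12 m wetted perimeter, and 0.5 m of fall over 100 m, the estimate is roughly 57 m³/s.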

  4. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. The thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  5. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  6. Student perceptions and learning outcomes of computer-assisted versus traditional instruction in physiology.

    PubMed

    Richardson, D

    1997-12-01

    This study compared student perceptions and learning outcomes of computer-assisted instruction against those of traditional didactic lectures. Components of Quantitative Circulatory Physiology (Biological Simulators) and Mechanical Properties of Active Muscle (Trinity Software) were used to teach regulation of tissue blood flow and muscle mechanics, respectively, in the course Medical Physiology. These topics were each taught, in part, by 1) standard didactic lectures, 2) computer-assisted lectures, and 3) computer laboratory assignment. Subjective evaluation was derived from a questionnaire assessing student opinions of the effectiveness of each method. Objective evaluation consisted of comparing scores on examination questions generated from each method. On a 1-10 scale, effectiveness ratings were higher (P < 0.0001) for the didactic lectures (7.7) compared with either computer-assisted lecture (3.8) or computer laboratory (4.2) methods. A follow-up discussion with representatives from the class indicated that students did not perceive computer instruction as being time effective. However, examination scores from computer laboratory questions (94.3%) were significantly higher compared with ones from either computer-assisted (89.9%; P < 0.025) or didactic (86.6%; P < 0.001) lectures. Thus computer laboratory instruction enhanced learning outcomes in medical physiology despite student perceptions to the contrary.

  7. Computing Determinants by Double-Crossing

    ERIC Educational Resources Information Center

    Leggett, Deanna; Perry, John; Torrence, Eve

    2011-01-01

    Dodgson's method of computing determinants is attractive, but fails if an interior entry of an intermediate matrix is zero. This paper reviews Dodgson's method and introduces a generalization, the double-crossing method, that provides a workaround for many interesting cases.
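
    As a concrete illustration of the baseline that the double-crossing method generalizes, here is a minimal sketch of Dodgson condensation (exact rational arithmetic via `fractions`; the function name is ours). It raises a division-by-zero error precisely when an interior entry of an intermediate matrix is zero, which is the failure case the paper works around.

    ```python
    from fractions import Fraction

    def dodgson_det(A):
        """Determinant by Dodgson condensation.

        Repeatedly replace the matrix with its matrix of adjacent 2x2
        minors, dividing entrywise by the interior of the matrix from two
        steps earlier, until a single entry (the determinant) remains.
        """
        A = [[Fraction(x) for x in row] for row in A]
        prev = None  # matrix from two steps ago; its interior supplies divisors
        while len(A) > 1:
            n = len(A)
            B = [[A[i][j] * A[i + 1][j + 1] - A[i][j + 1] * A[i + 1][j]
                  for j in range(n - 1)] for i in range(n - 1)]
            if prev is not None:
                # Fails (ZeroDivisionError) if an interior entry is zero.
                B = [[B[i][j] / prev[i + 1][j + 1] for j in range(n - 1)]
                     for i in range(n - 1)]
            prev, A = A, B
        return A[0][0]

    print(dodgson_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
    ```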

  8. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for these problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least squares method incurs high communication costs, which reduces the benefit of high-performance computing, and its memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace, and it is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  9. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinctive merits, higher-order accuracy and an automatic step adjustment mechanism, so it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that the method improves the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method can also rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.

  10. First-arrival traveltime computation for quasi-P waves in 2D transversely isotropic media using Fermat’s principle-based fast marching

    NASA Astrophysics Data System (ADS)

    Hu, Jiangtao; Cao, Junxing; Wang, Huazhong; Wang, Xingjian; Jiang, Xudong

    2017-12-01

    First-arrival traveltime computation for quasi-P waves in transversely isotropic (TI) media is the key component of tomography and depth migration. It is appealing to use the fast marching method in isotropic media as it efficiently computes traveltime along an expanding wavefront. It uses the finite difference method to solve the eikonal equation. However, applying the fast marching method in anisotropic media faces challenges because the anisotropy introduces additional nonlinearity in the eikonal equation and solving this nonlinear eikonal equation with the finite difference method is challenging. To address this problem, we present a Fermat’s principle-based fast marching method to compute traveltime in two-dimensional TI media. This method is applicable in both vertical and tilted TI (VTI and TTI) media. It computes traveltime along an expanding wavefront using Fermat’s principle instead of the eikonal equation. Thus, it does not suffer from the nonlinearity of the eikonal equation in TI media. To compute traveltime using Fermat’s principle, the explicit expression of group velocity in TI media is required to describe the ray propagation. The moveout approximation is adopted to obtain the explicit expression of group velocity. Numerical examples on both VTI and TTI models show that the traveltime contour obtained by the proposed method matches well with the wavefront from the wave equation. This shows that the proposed method could be used in depth migration and tomography.
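
    A crude discrete analogue of marching a first-arrival wavefront by Fermat's principle is a shortest-path solve on a grid, where edge costs are path length times slowness. The sketch below uses Dijkstra's algorithm on an 8-connected isotropic grid for brevity; the authors' method instead advances a fast-marching front and evaluates an anisotropic group velocity, neither of which is reproduced here.

    ```python
    import heapq
    import math

    def grid_traveltime(slowness, src):
        """First-arrival traveltimes from src on an 8-connected unit grid.

        slowness: 2D list of slowness values (s per unit length);
        edge cost = segment length * average slowness of its endpoints.
        """
        ny, nx = len(slowness), len(slowness[0])
        t = [[math.inf] * nx for _ in range(ny)]
        t[src[0]][src[1]] = 0.0
        heap = [(0.0, src)]
        while heap:
            tt, (i, j) = heapq.heappop(heap)
            if tt > t[i][j]:
                continue  # stale entry
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ny and 0 <= nj < nx:
                        step = math.hypot(di, dj)
                        cand = tt + step * 0.5 * (slowness[i][j] + slowness[ni][nj])
                        if cand < t[ni][nj]:
                            t[ni][nj] = cand
                            heapq.heappush(heap, (cand, (ni, nj)))
        return t
    ```

    In a uniform medium the computed times reduce to chain lengths: three unit steps along a row cost 3, one diagonal step costs √2.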

  11. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  12. Application of computational aerodynamics methods to the design and analysis of transport aircraft

    NASA Technical Reports Server (NTRS)

    Da Costa, A. L.

    1978-01-01

    The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft is established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data

  13. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or using a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, finite difference and finite element methods are widely applied to flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. Several problems arise, though, in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multigrid Poisson solver is combined with a higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive-variables formulation, and one of the major difficulties of this approach is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition on the velocity field at the wall boundary. However, the velocity field satisfies the equation of continuity automatically in the vorticity-vector potential formulation. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system.
In the present article, we adopt the vorticity-vector potential formulation, a generalized coordinate system, and a fourth-order accurate difference method. We present the computational method and apply it to flows in a square cavity at large Reynolds numbers in order to investigate its effectiveness.

  14. Improved Collision-Detection Method for Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Leger, Chris

    2003-01-01

    An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to test commanded manipulator trajectories in advance, so that execution of the commands can be stopped before damage is done. It uses both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects is simpler and more efficient, with respect to both computation time and computer memory, than the representations used in most prior methods. The method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. Objects are represented, and collisions detected, by use of the previously developed technique known as the method of oriented bounding boxes (OBBs), in which an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB technique has been extended here to enable the approximate representation of cylindrical parts by octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected because it offers the best compromise between accuracy on the one hand and computational efficiency (and thus speed) on the other.
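The kind of bounding-box overlap test described above can be sketched briefly. The following is an illustrative Python version of the standard separating-axis test for two OBBs, not the rover flight code; note that the per-axis comparison uses only sums, products, and absolute values, which is the division-free, transcendental-free property the abstract highlights:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def obb_overlap(center_a, axes_a, half_a, center_b, axes_b, half_b):
    """Separating-axis overlap test for two oriented bounding boxes.

    axes_*: three orthonormal box axes; half_*: half-extents along them.
    The per-axis test uses only sums, products, and absolute values --
    no divisions or transcendental functions.
    """
    t = tuple(cb - ca for ca, cb in zip(center_a, center_b))
    # candidate separating axes: 3 axes of A, 3 of B, and their 9 cross products
    candidates = list(axes_a) + list(axes_b)
    for u in axes_a:
        for v in axes_b:
            w = cross(u, v)
            if dot(w, w) > 1e-12:      # skip near-parallel axis pairs
                candidates.append(w)
    for axis in candidates:
        r_a = sum(h * abs(dot(axis, u)) for h, u in zip(half_a, axes_a))
        r_b = sum(h * abs(dot(axis, v)) for h, v in zip(half_b, axes_b))
        if abs(dot(axis, t)) > r_a + r_b:
            return False               # separating axis found: no collision
    return True

AXES = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
```

Two axis-aligned unit boxes three units apart fail the x-axis test, while boxes 1.5 units apart pass every axis and therefore overlap; an OBP would simply run the same test over several such boxes.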

  15. A sparse matrix-vector multiplication based algorithm for accurate density matrix computations on systems of millions of atoms

    NASA Astrophysics Data System (ADS)

    Ghale, Purnima; Johnson, Harley T.

    2018-06-01

    We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods such as second-order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but carry large memory and communication overhead, and they rely on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to the irregularity of the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of the computation. On the other hand, an expansion of the density matrix P in Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the SP2 method, and propose to combine them so that an accurate density matrix can be computed using the SpMV kernel only, without having to store the density matrix P. Our method uses the Chebyshev polynomial estimate as the initial guess for SP2, and then uses SpMVs to replicate the behavior of the SP2 purification algorithm. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms, and we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
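The SpMV-only structure of the Chebyshev-Jackson stage can be illustrated with a toy sketch. This is not the authors' code: the quadrature size, expansion order, and tiny diagonal test Hamiltonian are invented for illustration, and a dense matvec stands in for the SpMV kernel.

```python
import math

def matvec(H, v):
    """Dense stand-in for the SpMV kernel."""
    return [sum(hij * vj for hij, vj in zip(row, v)) for row in H]

def chebyshev_density_column(H, j, mu, order, quad=400):
    """Column j of P ~ theta(mu*I - H) via a Jackson-damped Chebyshev
    expansion evaluated with matrix-vector products only.
    H must be symmetric with its spectrum scaled into [-1, 1]."""
    n = len(H)
    coeffs = []
    for k in range(order + 1):
        s = sum((1.0 if math.cos(math.pi * (i + 0.5) / quad) < mu else 0.0)
                * math.cos(k * math.pi * (i + 0.5) / quad)
                for i in range(quad))
        c = (2.0 - (k == 0)) / quad * s       # Chebyshev coefficient of the step
        g = ((order - k + 1) * math.cos(math.pi * k / (order + 1))
             + math.sin(math.pi * k / (order + 1))
             / math.tan(math.pi / (order + 1))) / (order + 1)  # Jackson damping
        coeffs.append(c * g)
    t_prev = [1.0 if i == j else 0.0 for i in range(n)]  # T_0(H) e_j
    t_curr = matvec(H, t_prev)                           # T_1(H) e_j
    col = [coeffs[0] * a + coeffs[1] * b for a, b in zip(t_prev, t_curr)]
    for k in range(2, order + 1):
        # three-term Chebyshev recurrence: T_{k+1} = 2 H T_k - T_{k-1}
        t_next = [2.0 * a - b for a, b in zip(matvec(H, t_curr), t_prev)]
        col = [c + coeffs[k] * t for c, t in zip(col, t_next)]
        t_prev, t_curr = t_curr, t_next
    return col

# toy diagonal Hamiltonian: one state below mu=0 (occupied), one above (empty)
H = [[-0.5, 0.0], [0.0, 0.5]]
col0 = chebyshev_density_column(H, 0, mu=0.0, order=80)
```

The occupied state's diagonal entry comes out close to 1, as expected; in the paper this estimate is only the initial guess, which SP2-style purification then sharpens toward an exact projector.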

  16. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is the correspondence between the high- and low-resolution covariances: we estimate the local covariance coefficients from an interlaced image using Wiener filtering theory and then use these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. Although the CAD method is more robust than most known methods, it is not very fast compared with the others. To alleviate this, we switch adaptively between the fast conventional schemes (LA and MELA) and CAD, reducing the overall computational load. A reliable switching condition was derived from a wide set of initial training processes. Computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
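As an illustration of the cheapest branch of such a scheme, here is a minimal Python sketch of line averaging together with a toy variance-based region classifier. The thresholds and the variance criterion are invented for illustration; they are not the paper's trained switching condition.

```python
def deinterlace_la(frame, missing_parity):
    """Line-averaging (LA) deinterlacing for plain regions: each missing
    line is replaced by the average of the lines directly above and below."""
    out = [row[:] for row in frame]
    h = len(frame)
    for y in range(h):
        if y % 2 == missing_parity:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y < h - 1 else frame[y - 1]
            out[y] = [(a + b) / 2.0 for a, b in zip(above, below)]
    return out

def pick_method(block, t_plain=4.0, t_medium=16.0):
    """Toy CASA-style selector: local variance routes a region to LA
    (plain), MELA (medium), or CAD (complex). Thresholds are invented."""
    mean = sum(block) / len(block)
    var = sum((p - mean) ** 2 for p in block) / len(block)
    if var < t_plain:
        return "LA"
    if var < t_medium:
        return "MELA"
    return "CAD"

# a one-column "image" whose odd lines are missing: LA recovers the ramp
frame = [[0.0], [0.0], [2.0], [0.0], [4.0]]
out = deinterlace_la(frame, missing_parity=1)
```

On a linear luminance ramp the averaged lines are exact, which is why LA suffices in plain regions; edges and textures are where MELA and CAD earn their extra cost.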

  17. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance, in which a program is concurrently run on a plurality of nodes to reduce total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  18. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
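The multigrid concept the study builds on can be illustrated independently of Proteus. Below is a minimal Python V-cycle for the 1D Poisson problem -u'' = f (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation of the coarse-grid correction), a standard textbook sketch rather than anything from the code:

```python
import math

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps for -u'' = f on a uniform grid."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian."""
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def v_cycle(u, f, h):
    """One V-cycle: smooth, restrict the residual, recurse for the
    coarse-grid correction, interpolate it back, smooth again."""
    n = len(u)
    if n <= 3:                       # coarsest grid: solve exactly
        u[1] = 0.5 * (h * h * f[1] + u[0] + u[2])
        return u
    u = jacobi(u, f, h, 3)
    r = residual(u, f, h)
    nc = (n + 1) // 2
    rc = [0.0] * nc                  # full-weighting restriction
    for I in range(1, nc - 1):
        rc[I] = 0.25 * r[2 * I - 1] + 0.5 * r[2 * I] + 0.25 * r[2 * I + 1]
    ec = v_cycle([0.0] * nc, rc, 2 * h)
    for I in range(nc):              # linear interpolation of the correction
        u[2 * I] += ec[I]
    for i in range(1, n - 1, 2):
        u[i] += 0.5 * (ec[(i - 1) // 2] + ec[(i + 1) // 2])
    return jacobi(u, f, h, 3)

# model problem: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
n = 33
h = 1.0 / (n - 1)
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
for _ in range(10):
    u = v_cycle(u, f, h)
```

The per-cycle residual reduction is grid-size independent, which is the convergence-acceleration property being transplanted into Proteus.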

  19. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance, in which a program is concurrently run on a plurality of nodes to reduce total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  20. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order (NNLO) accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the application of the method to computing jet cross sections in electron-positron annihilation at NNLO accuracy.

  1. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package, which includes one- and two-dimensional discrete ordinate transport, point kernel, and single-scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the code package and to improve its utilization on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  2. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  3. A hybrid method for the computation of quasi-3D seismograms.

    NASA Astrophysics Data System (ADS)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical methods, such as the spectral element method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the SEM. Capdeville et al. (2002) proposed coupling SEM simulations with normal-mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, Lekic et al. (2011) developed the first 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape et al. 2009). Due to their smaller size, these models offer higher resolution, providing images of the crust and the upper part of the mantle. In an attempt to extend such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method in which SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations.
For now, these Green's functions are computed using 2D SEM simulations in a 1D Earth model. Such seismograms account for the 3D structure inside the region of interest in a quasi-exact manner. Later we plan to extrapolate the misfit function computed from such seismograms at the stations back into the SEM region in order to compute local adjoint kernels, opening a new path toward regional adjoint tomography of the deep Earth. References: Capdeville, Y., et al. (2002), "Coupling the spectral element method with a modal solution for elastic wave propagation in global Earth models," Geophysical Journal International 152(1): 34-67; Lekic, V., and B. Romanowicz (2011), "Inferring upper-mantle structure by full waveform tomography with the spectral element method," Geophysical Journal International 185(2): 799-831; Nissen-Meyer, T., et al. (2007), "A two-dimensional spectral-element method for computing spherical-earth seismograms-I. Moment-tensor source," Geophysical Journal International 168(3): 1067-1092; Robertsson, J. O. A., and C. H. Chapman (2000), "An efficient method for calculating finite-difference seismograms after model alterations," Geophysics 65(3): 907-918; Tape, C., et al. (2009), "Adjoint tomography of the southern California crust," Science 325(5943): 988-992.

  4. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time-marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.

  5. Computer-aided detection of initial polyp candidates with level set-based adaptive convolution

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong

    2009-02-01

    In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods have been used to compute the first- and second-order spatial derivatives of computed tomographic colonography images, the starting point of various geometric analyses. However, the performance of such methods depends greatly on the single-layer representation of the colon wall, called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect initial polyp candidates, and experiments showed that it benefits the CAD scheme in both detection sensitivity and specificity compared with our previous work.

  6. Supersonic nonlinear potential analysis

    NASA Technical Reports Server (NTRS)

    Siclari, M. J.

    1984-01-01

    The NCOREL computer code was developed to compute supersonic flow fields about wings and bodies. The method uses an implicit finite difference transonic relaxation scheme to solve the full potential equation in a spherical coordinate system. Two topics were studied to broaden the applicability and usefulness of the method for supersonic flow problems. The first is computing efficiency: accelerated schemes, such as the approximate factorization (AF) method, are in use for transonic flow problems, and an AF scheme is developed here for the supersonic flow problem. The second is the computation of wake flows. Proper modeling of wakes is important for multicomponent configurations, such as wing-body combinations and multiple lifting surfaces, where the wake of one lifting surface has a pronounced effect on a downstream body or on other lifting surfaces.

  7. Fast hydrological model calibration based on the heterogeneous parallel computing accelerated shuffled complex evolution method

    NASA Astrophysics Data System (ADS)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke

    2018-01-01

    Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
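The step that parallelizes in such a calibration is the independent evaluation of candidate parameter sets. The paper does this with OpenMP threads and CUDA kernels; the sketch below only mimics that structure in Python with a thread pool and an invented stand-in objective (the Xinanjiang model itself is not reproduced here):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def objective(params):
    """Stand-in for one rainfall-runoff model run; in the real method this
    is a full hydrological simulation and dominates the run time."""
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

def evaluate_population(population, workers=4):
    """Score every candidate parameter set concurrently -- the
    embarrassingly parallel step that a parallel SCE-UA offloads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, population))

random.seed(0)
population = [(random.random(), random.random()) for _ in range(16)]
scores = evaluate_population(population)
best = population[scores.index(min(scores))]
```

Because each evaluation is independent, the concurrent scores match a serial loop exactly; the SCE-UA shuffling and complex-evolution logic would wrap around this kernel unchanged.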

  8. Numerical methods for engine-airframe integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  9. A note on the computation of antenna-blocking shadows

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1993-01-01

    A simple and readily applied method is provided to compute the shadow on the main reflector of a Cassegrain antenna, when cast by the subreflector and the subreflector supports. The method entails some convenient minor approximations that will produce results similar to results obtained with a lengthier, mainframe computer program.

  10. The feasibility of using computer graphics in environmental evaluations : interim report, documenting historic site locations using computer graphics.

    DOT National Transportation Integrated Search

    1981-01-01

    This report describes a method for locating historic site information using a computer graphics program. If adopted for use by the Virginia Department of Highways and Transportation, this method should significantly reduce the time now required to de...

  11. The SQL Server Database for Non Computer Professional Teaching Reform

    ERIC Educational Resources Information Center

    Liu, Xiangwei

    2012-01-01

    This paper summarizes the teaching methods of the SQL Server database course for non-computer majors and analyzes the current situation of the course. According to the characteristics of the curriculum for non-computer majors, it puts forward some teaching reform methods and puts them into practice, improving the students' analysis ability, practice ability and…

  12. Analysis of Five Instructional Methods for Teaching Sketchpad to Junior High Students

    ERIC Educational Resources Information Center

    Wright, Geoffrey; Shumway, Steve; Terry, Ronald; Bartholomew, Scott

    2012-01-01

    This manuscript addresses a problem teachers of computer software applications face today: What is an effective method for teaching new computer software? Technology and engineering teachers, specifically those with communications and other related courses that involve computer software applications, face this problem when teaching computer…

  13. Efficient computational methods to study new and innovative signal detection techniques in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.

    1991-01-01

    The purpose of the research reported here is to provide a rapid computational method for computing various statistical parameters associated with overlapped Hann spectra. These results are important for the Targeted Search part of the Search for ExtraTerrestrial Intelligence (SETI) Microwave Observing Project.
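Overlapped Hann spectra of the kind referred to here can be sketched with a plain DFT. The segment length, 50% overlap, and test tone below are illustrative choices, not the SETI Targeted Search's actual parameters:

```python
import cmath
import math

def hann(n):
    """Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * i / n) for i in range(n)]

def overlapped_hann_spectra(x, seg_len, overlap=0.5):
    """Power spectra of Hann-windowed, overlapped segments of x,
    computed with a plain O(n^2) DFT for clarity (one spectrum per
    segment, positive-frequency bins only)."""
    w = hann(seg_len)
    step = int(seg_len * (1.0 - overlap))
    spectra = []
    for start in range(0, len(x) - seg_len + 1, step):
        seg = [xi * wi for xi, wi in zip(x[start:start + seg_len], w)]
        spec = []
        for k in range(seg_len // 2):
            z = sum(seg[t] * cmath.exp(-2j * math.pi * k * t / seg_len)
                    for t in range(seg_len))
            spec.append(abs(z) ** 2)
        spectra.append(spec)
    return spectra

# a pure tone centered on bin 4 of a 32-point segment
tone = [math.cos(2.0 * math.pi * 4 * t / 32) for t in range(96)]
spectra = overlapped_hann_spectra(tone, seg_len=32)
peak_bin = max(range(16), key=lambda k: spectra[0][k])
```

The overlap makes successive spectra statistically correlated, which is exactly why the statistical parameters of overlapped Hann spectra need the dedicated treatment the abstract describes.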

  14. Design for a Manufacturing Method for Memristor-Based Neuromorphic Computing Processors

    DTIC Science & Technology

    2013-03-01

    Design for a Manufacturing Method for Memristor-Based Neuromorphic Computing Processors, University of Pittsburgh, March 2013; contract FA8750-11-1-0271. The report presents memristor-based synapse designs and a neuromorphic computing system implemented from the proposed synapses; the robustness of the system is also evaluated.

  15. Computational method for determining n and k for a thin film from the measured reflectance, transmittance, and film thickness.

    PubMed

    Bennett, J M; Booty, M J

    1966-01-01

    A computational method of determining n and k for an evaporated film from the measured reflectance, transmittance, and film thickness has been programmed for an IBM 7094 computer. The method consists of modifications to the NOTS multilayer film program. The basic program computes normal-incidence reflectance, transmittance, phase change on reflection, and other parameters from the optical constants and thicknesses of all materials. In the modification, n and k for the film are varied in a prescribed manner, and the computer picks from among these values the one n and one k that yield reflectance and transmittance values almost equaling the measured values. Results are given for films of silicon and aluminum.
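The vary-and-pick strategy described can be sketched generically. The surrogate R,T model below is a single-interface Fresnel reflectance with Beer-Lambert absorption, an invented stand-in for the NOTS multilayer program (which handles phase and multilayer interference and is not reproduced here); the grid spacings and the "measured" values are likewise illustrative.

```python
import math

def model_RT(n, k, d, lam):
    """Crude surrogate film model: single-interface normal-incidence
    Fresnel reflectance plus Beer-Lambert absorption over thickness d
    at wavelength lam. Illustrative only, NOT the NOTS program."""
    R = ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)
    T = (1.0 - R) * math.exp(-4.0 * math.pi * k * d / lam)
    return R, T

def fit_nk(R_meas, T_meas, d, lam, n_grid, k_grid):
    """Vary n and k over a prescribed grid and pick the pair whose
    computed R and T come closest to the measured values."""
    def err(nk):
        R, T = model_RT(nk[0], nk[1], d, lam)
        return (R - R_meas) ** 2 + (T - T_meas) ** 2
    return min(((n, k) for n in n_grid for k in k_grid), key=err)

# synthetic "measurement" generated from the surrogate at n=2.0, k=0.05
R0, T0 = model_RT(2.0, 0.05, d=100.0, lam=550.0)
n_grid = [1.0 + 0.01 * i for i in range(300)]
k_grid = [0.001 * i for i in range(200)]
n_fit, k_fit = fit_nk(R0, T0, 100.0, 550.0, n_grid, k_grid)
```

With consistent data the search recovers the generating pair to within the grid spacing; with real measurements the residual error reflects both noise and model adequacy.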

  16. Computational methods for structural load and resistance modeling

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described are shown to be quite accurate and efficient for a materially nonlinear structure with material damage modeled as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems.

  17. Network gateway security method for enterprise Grid: a literature review

    NASA Astrophysics Data System (ADS)

    Sujarwo, A.; Tan, J.

    2017-03-01

    The computational Grid has brought large computational resources closer to scientists. It enables people to run large computational jobs anytime and anywhere, without physical borders. However, the large number and wide distribution of participants, whether as users or as providers of computation, raise security problems. The challenge is how the security system, especially the component that filters data at the gateway, can adapt flexibly to the registered Grid participants. This paper surveys prior approaches to this challenge in order to find a better method for the enterprise Grid. Its finding is a dynamically controlled enterprise firewall that secures the Grid resources from unwanted connections, with a new firewall-control method and components.

  18. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  19. A method for the computational modeling of the physics of heart murmurs

    NASA Astrophysics Data System (ADS)

    Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat

    2017-05-01

    A computational method for direct simulation of the generation and propagation of blood-flow-induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The method uses a two-step, one-way-coupled approach for sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations governing the generation and propagation of the three-dimensional elastic wave corresponding to the murmur are resolved with a high-order, immersed-boundary-based finite-difference method in the time domain. The proposed method is applied to a model problem of an aortic stenosis murmur, and the simulation results are verified and validated by comparison with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated. The roles of hemodynamics and elastic wave propagation in the murmur are discussed based on the simulation results.

  20. Human sense utilization method on real-time computer graphics

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao

    1997-06-01

    We are developing a method for adjusting real-time computer graphics so that they give the audience the senses intended by the producer, exploiting human sensibility technologically. In general, producing real-time computer graphics requires much adjustment of parameters, such as the 3D object models, their motions, attributes, view angle, and parallax, so that the graphics give the audience convincing effects such as the reality of materials and a sense of experience; adjusting these parameters by trial and error is known to be costly. A producer often evaluates graphics in order to improve them: for example, they may lack a 'sense of speed' or need more 'sense of settling down.' On the other hand, by statistically analyzing several samples of computer graphics that evoke different senses, we can learn how the parameters affect those senses. Combining these two observations, we designed a method for adjusting the parameters by inputting phases of sense into a computer. Using this method, real-time computer graphics can be adjusted more effectively than by the conventional trial-and-error approach.

  1. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using the graphics-processing-unit parallel framework known as the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the acceleration achieved by the improved method. The results indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is reduced by a factor of 38.9 on a GTX 580 graphics card.

  2. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
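Of the techniques listed, the inexact Newton method is easy to sketch: at each outer step the linear Newton system is solved only to a loose "forcing" tolerance rather than to machine precision. The inner solver below is a crude least-squares descent standing in for the simulator's preconditioned solvers, and the 2x2 test system is invented for illustration:

```python
def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_outer=50):
    """Inexact Newton iteration: each linear system J(x) s = -F(x) is
    solved only until ||J s + F|| <= eta * ||F|| (the forcing condition),
    here by steepest descent on the normal equations."""
    x = list(x0)
    for _ in range(max_outer):
        f = F(x)
        if norm(f) < tol:
            break
        A = J(x)
        n = len(x)
        s = [0.0] * n
        for _ in range(1000):          # inner (inexact) linear solve
            r = [sum(A[i][k] * s[k] for k in range(n)) + f[i] for i in range(n)]
            if norm(r) <= eta * norm(f):
                break                  # loose tolerance reached: stop early
            g = [sum(A[i][k] * r[i] for i in range(n)) for k in range(n)]   # A^T r
            Ag = [sum(A[i][k] * g[k] for k in range(n)) for i in range(n)]
            alpha = sum(gi * gi for gi in g) / sum(a * a for a in Ag)
            s = [si - alpha * gi for si, gi in zip(s, g)]
        x = [xi + si for xi, si in zip(x, s)]
    return x

# invented 2x2 nonlinear test system with root (1, 2)
F = lambda x: [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]
J = lambda x: [[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]]
root = inexact_newton(F, J, [2.0, 3.0])
```

The point of the loose inner tolerance is that early Newton steps need not be solved accurately; in the simulator the same idea lets the multi-stage preconditioned linear solver stop well short of full convergence.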

  3. Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Ren, Wei; Liu, Hong; Jin, Shi

    2014-12-01

    In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows around re-entry vehicles and micro-/nano-scale flows. However, its computational cost becomes prohibitive as Kn → 0: even for flows in the near-continuum regime, pure DSMC simulations are computationally demanding in most cases. Although several DSMC/NS hybrid methods have been proposed to address this, they still suffer from difficulties in the boundary treatment, which may produce nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational cost remains affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which outperform comparable methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study its efficiency and its capability of capturing complicated flow characteristics.

  4. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    DTIC Science & Technology

    2015-09-13

    prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use... With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now... play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main

  5. Current status of computational methods for transonic unsteady aerodynamics and aeroelastic applications

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Malone, John B.

    1992-01-01

    The current status of computational methods for unsteady aerodynamics and aeroelasticity is reviewed. The key features of challenging aeroelastic applications are discussed in terms of the flowfield state: low-angle high speed flows and high-angle vortex-dominated flows. The critical role played by viscous effects in determining aeroelastic stability for conditions of incipient flow separation is stressed. The need for a variety of flow modeling tools, from linear formulations to implementations of the Navier-Stokes equations, is emphasized. Estimates of computer run times for flutter calculations using several computational methods are given. Applications of these methods for unsteady aerodynamic and transonic flutter calculations for airfoils, wings, and configurations are summarized. Finally, recommendations are made concerning future research directions.

  6. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide computational analysis/design capabilities of appropriate fidelity. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.

  7. Fast sweeping methods for hyperbolic systems of conservation laws at steady state II

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Froese, Brittany D.; Tsai, Yen-Hsi Richard

    2015-04-01

    The idea of using fast sweeping methods for solving stationary systems of conservation laws has previously been proposed for efficiently computing solutions with sharp shocks. We further develop these methods to allow for a more challenging class of problems including problems with sonic points, shocks originating in the interior of the domain, rarefaction waves, and two-dimensional systems. We show that fast sweeping methods can produce higher-order accuracy. Computational results validate the claims of accuracy, sharp shock curves, and optimal computational efficiency.
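The fast sweeping idea (Gauss-Seidel updates performed in alternating grid orderings so that characteristics from every direction are captured) is easiest to see on the scalar eikonal equation |grad u| = 1, a much simpler problem than the hyperbolic systems treated in the paper. The following sketch is illustrative only and is not the authors' scheme.

```python
import numpy as np

def fast_sweep_eikonal(n=41, h=1.0 / 40):
    """Solve |grad u| = 1 on a unit square, u = 0 at the center point,
    using Godunov upwind updates swept in the four diagonal orderings."""
    INF = 1e10
    u = np.full((n, n), INF)
    u[n // 2, n // 2] = 0.0  # point source
    fwd, bwd = range(n), range(n - 1, -1, -1)
    orders = [(fwd, fwd), (bwd, fwd), (fwd, bwd), (bwd, bwd)]
    for _ in range(4):                   # a few passes of the four sweeps
        for ii, jj in orders:
            for i in ii:
                for j in jj:
                    if u[i, j] == 0.0:
                        continue
                    a = min(u[i - 1, j] if i > 0 else INF,
                            u[i + 1, j] if i < n - 1 else INF)
                    b = min(u[i, j - 1] if j > 0 else INF,
                            u[i, j + 1] if j < n - 1 else INF)
                    # Godunov upwind update for the eikonal equation
                    if abs(a - b) >= h:
                        ubar = min(a, b) + h
                    else:
                        ubar = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b)**2))
                    u[i, j] = min(u[i, j], ubar)
    return u

u = fast_sweep_eikonal()
```

Each sweep propagates information along one family of characteristic directions, which is why a fixed, small number of sweeps suffices regardless of grid size.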

  8. An accurate computational method for the diffusion regime verification

    NASA Astrophysics Data System (ADS)

    Zhokh, Alexey A.; Strizhak, Peter E.

    2018-04-01

    The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on the derivative order estimation using the asymptotic analytic solutions of the diffusion equation with the integer order and the time-fractional derivatives. The robustness and the computational cheapness of the proposed method are verified using the experimental methane and methyl alcohol transport kinetics through the catalyst pellet.
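The paper estimates the derivative order directly; a common, much simpler diagnostic for the same classification fits the scaling exponent alpha in MSD ~ t^alpha, with alpha < 1 sub-diffusive, alpha near 1 standard, and alpha > 1 super-diffusive. The sketch below uses synthetic data and is not the authors' method.

```python
import numpy as np

def diffusion_regime(t, msd):
    """Estimate the anomalous-diffusion exponent alpha from MSD ~ t^alpha
    via a log-log least-squares fit, then classify the regime."""
    alpha, _ = np.polyfit(np.log(t), np.log(msd), 1)
    if alpha < 0.9:
        regime = "sub-diffusive"
    elif alpha > 1.1:
        regime = "super-diffusive"
    else:
        regime = "standard"
    return alpha, regime

# Synthetic sub-diffusive data: MSD = 0.5 * t^0.6
t = np.linspace(1.0, 100.0, 200)
alpha, regime = diffusion_regime(t, 0.5 * t**0.6)
```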

  9. A SCILAB Program for Computing General-Relativistic Models of Rotating Neutron Stars by Implementing Hartle's Perturbation Method

    NASA Astrophysics Data System (ADS)

    Papasotiriou, P. J.; Geroyannis, V. S.

    We implement Hartle's perturbation method for the computation of relativistic rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in great detail and is applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes, and it should prove an efficient tool for computing rapidly rotating pulsars.

  10. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters that need to be estimated from physical observations. However, the widely used calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend that approach to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
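The ordinary-least-squares calibration discussed above can be sketched as follows: the calibration parameter is chosen to minimize the sum of squared discrepancies between physical observations and computer-model output. The one-parameter model, the data, and the grid search below are all hypothetical.

```python
import numpy as np

# Hypothetical computer model with one calibration parameter theta.
def model(x, theta):
    return np.sin(theta * x)

# Synthetic "physical" observations with true theta = 1.3 and noise.
rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 3.0, 60)
y_obs = np.sin(1.3 * x_obs) + 0.05 * rng.standard_normal(60)

# Ordinary least squares: theta minimizing the sum of squared residuals,
# found here by a simple grid search over a plausible range.
thetas = np.linspace(0.5, 2.0, 3001)
losses = ((y_obs[None, :] - np.sin(thetas[:, None] * x_obs[None, :]))**2).sum(axis=1)
theta_hat = thetas[np.argmin(losses)]
```

A real application would replace the grid search with a proper optimizer, but the estimator itself (the minimizer of the squared-residual sum) is the same.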

  11. A systematic and efficient method to compute multi-loop master integrals

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations with almost trivial boundary conditions. It can thus be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method not only achieves results with high precision, but is also much faster than sector decomposition, the only other systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  12. CSM Testbed Development and Large-Scale Structural Applications

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.

    1989-01-01

    A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  13. TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; McNutt, T

    2015-06-15

    Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules, including DIR and dose computation, have been implemented on a graphics processing unit (GPU). All required patient-specific data, including the planning CT (pCT) with contours, daily cone-beam CTs (CBCTs), and the treatment plan, are automatically queried and retrieved from their respective databases. To improve the accuracy of DIR between the pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of the daily CBCT is then obtained by propagating contours from the pCT. The daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures; our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction was around one minute (30–50 seconds for registration and 15–25 seconds for dose computation).
Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
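The two similarity measures reported above can be computed as sketched below. The `nmi` function uses the common definition NMI = (H(A) + H(B)) / H(A,B) with a binned joint histogram, which may differ in detail from the authors' implementation; the images are synthetic.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def nmi(a, b, bins=32):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A,B),
    estimated from a binned joint histogram of intensities."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    h_ab = -np.sum(p[nz] * np.log(p[nz]))
    h_a = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_b = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_a + h_b) / h_ab

# Synthetic test images: an image and a noisy copy of it.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = img + 0.1 * rng.standard_normal((64, 64))
```

For identical images NMI equals 2 and NCC equals 1; both degrade gracefully as the images decorrelate, which is what makes them usable as registration quality scores.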

  14. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generating suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate, and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterizing the uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation of algorithms for identifying DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
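The Grid Convergence Index evaluation mentioned above follows Roache's procedure: from solutions on three systematically refined meshes, the observed order of convergence p is estimated, and the fine-grid discretization uncertainty is reported as GCI = Fs * e / (r^p - 1). The sketch below uses hypothetical values; the safety factor 1.25 is conventional, and the numbers are not from the study.

```python
import numpy as np

def gci(f_fine, f_mid, f_coarse, r, fs=1.25):
    """Grid Convergence Index (Roache) from three solutions on meshes
    refined by a constant ratio r. Returns the observed order p and
    the fine-grid GCI as a fractional uncertainty."""
    # Observed order of convergence from the three solutions.
    p = np.log(abs(f_coarse - f_mid) / abs(f_mid - f_fine)) / np.log(r)
    # Relative error estimate on the fine grid.
    e_fine = abs((f_mid - f_fine) / f_fine)
    return p, fs * e_fine / (r**p - 1.0)

# Hypothetical outlet velocity magnitudes from coarse/medium/fine meshes, r = 2.
p, gci_fine = gci(f_fine=2.488, f_mid=2.451, f_coarse=2.303, r=2.0)
```

Here the difference ratios happen to give p = 2 (second-order convergence) and a fine-grid uncertainty of about 0.6%, illustrating the kind of local-quantity statement the text says these methods support.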

  15. Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows

    NASA Astrophysics Data System (ADS)

    Gizon, Laurent; Barucq, Hélène; Duruflé, Marc; Hanson, Chris S.; Leguèbe, Michael; Birch, Aaron C.; Chabassier, Juliette; Fournier, Damien; Hohage, Thorsten; Papini, Emanuele

    2017-04-01

    Context. Local helioseismology has so far relied on semi-analytical methods to compute the spatial sensitivity of wave travel times to perturbations in the solar interior. These methods are cumbersome and lack flexibility. Aims: Here we propose a convenient framework for numerically solving the forward problem of time-distance helioseismology in the frequency domain. The fundamental quantity to be computed is the cross-covariance of the seismic wavefield. Methods: We choose sources of wave excitation that enable us to relate the cross-covariance of the oscillations to the Green's function in a straightforward manner. We illustrate the method by considering the 3D acoustic wave equation in an axisymmetric reference solar model, ignoring the effects of gravity on the waves. The symmetry of the background model around the rotation axis implies that the Green's function can be written as a sum of longitudinal Fourier modes, leading to a set of independent 2D problems. We use a high-order finite-element method to solve the 2D wave equation in frequency space. The computation is embarrassingly parallel, with each frequency and each azimuthal order solved independently on a computer cluster. Results: We compute travel-time sensitivity kernels in spherical geometry for flows, sound speed, and density perturbations under the first Born approximation. Convergence tests show that travel times can be computed with a numerical precision better than one millisecond, as required by the most precise travel-time measurements. Conclusions: The method presented here is computationally efficient and will be used to interpret travel-time measurements in order to infer, e.g., the large-scale meridional flow in the solar convection zone. It allows the implementation of (full-waveform) iterative inversions, whereby the axisymmetric background model is updated at each iteration.
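The reduction to independent 2D problems described above rests on expanding the wavefield in longitudinal Fourier modes, f = sum over m of f_m(r, theta) * exp(i m phi). A minimal sketch of extracting those azimuthal modes with an FFT in phi, on a hypothetical field unrelated to the solar model in the paper:

```python
import numpy as np

# Sample a field f(theta, phi). Because an axisymmetric background is
# invariant in phi, the problem separates into azimuthal modes
# f = sum_m f_m(theta) exp(i m phi), recovered here with an FFT in phi.
n_theta, n_phi = 32, 64
theta = np.linspace(0.0, np.pi, n_theta)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)

# Hypothetical wavefield built from m = 0 and m = 3 contributions.
f = (np.sin(theta)[:, None] * np.ones(n_phi)[None, :]
     + 0.5 * np.cos(theta)[:, None] * np.cos(3.0 * phi)[None, :])

f_m = np.fft.fft(f, axis=1) / n_phi   # coefficient of exp(i m phi)
power = np.abs(f_m)
```

Only the m = 0 and |m| = 3 bins are nonzero, so each azimuthal order can indeed be treated as its own 2D problem, which is what makes the full computation embarrassingly parallel over m and frequency.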

  16. Development and applications of two computational procedures for determining the vibration modes of structural systems. [aircraft structures - aerospaceplanes

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.

    1975-01-01

    Two computational procedures for analyzing complex structural systems for their natural modes and frequencies of vibration are presented. Both procedures are based on a substructures methodology and both employ the finite-element stiffness method to model the constituent substructures. The first procedure is a direct method based on solving the eigenvalue problem associated with a finite-element representation of the complete structure. The second procedure is a component-mode synthesis scheme in which the vibration modes of the complete structure are synthesized from modes of substructures into which the structure is divided. The analytical basis of the methods contains a combination of features that enhance the generality of the procedures. The computational procedures are versatile, computationally convenient, and easy to implement. They were implemented in two special-purpose computer programs. The results of applying these programs to several structural configurations are shown and comparisons are made with experiment.
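The direct procedure described above amounts to a generalized eigenvalue problem K phi = w^2 M phi for a finite-element model. A minimal sketch on a hypothetical three-degree-of-freedom spring-mass chain; with a lumped (diagonal) mass matrix the problem reduces to a standard symmetric eigenproblem.

```python
import numpy as np

# 3-DOF fixed-free spring-mass chain with unit stiffness and mass
# (an illustrative stand-in for a finite-element stiffness model).
k, m = 1.0, 1.0
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
M = m * np.eye(3)

# Generalized problem K phi = w^2 M phi; with lumped (diagonal) M,
# reduce to a standard symmetric eigenproblem on M^{-1/2} K M^{-1/2}.
Minv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
A = Minv_sqrt @ K @ Minv_sqrt
w2, modes = np.linalg.eigh(A)    # eigenvalues ascending
freqs = np.sqrt(w2)              # natural circular frequencies
```

A component-mode synthesis scheme would instead assemble a reduced K and M from substructure modes before solving the same kind of eigenproblem.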

  17. A Computer Simulation of Community Pharmacy Practice for Educational Use.

    PubMed

    Bindoff, Ivan; Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert

    2014-11-15

    To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. We developed a flexible and customizable computer simulation of community pharmacy. Using it, the students would be able to work through scenarios which encapsulate the entirety of a patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvements in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor.

  18. A hybrid parallel architecture for electrostatic interactions in the simulation of dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping

    2017-11-01

    In this work, we upgraded the electrostatic interaction method of CU-ENUF (Yang, et al., 2016), which first applied CUNFFT (nonequispaced fast Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation and performed the entire electrostatic calculation on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel fashion: the computation is first parallelized across multiple computer nodes and then further parallelized on the GPU installed in each node. With this strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem is how to parallelize CUNFFT within this strategy; we address it through careful analysis of the underlying principles together with several algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only attains good precision with suitable parameters, but also provides an efficient way to compute electrostatic interactions for very large simulation systems. Program Files doi:http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for efficient electrostatic-interaction computation in large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI-parallel computer nodes with the same configuration.
Nature of problem: In molecular dynamics simulation, the electrostatic interaction is the most time-consuming part of the computation because of its long-range character and slow convergence, typically dominating the total simulation time. Although the GPU-based parallel method CU-ENUF (Yang et al., 2016) achieved a qualitative leap over previous methods in computing electrostatic interactions, its capability is limited by the throughput of a single GPU for super-scale simulation systems. An effective method is therefore needed to handle the electrostatic interactions of super-scale simulation systems efficiently. Solution method: We constructed a hybrid parallel architecture in which CPUs and GPUs are combined to accelerate the electrostatic computation. First, the simulation system is divided into many subtasks via a domain-decomposition method. MPI (Message Passing Interface) is then used to implement CPU-level parallelism, with each computer node handling a particular subtask, and each subtask is in turn executed efficiently in parallel on that node's GPU. The most critical technical problem in this hybrid parallel method is how to parallelize CUNFFT (nonequispaced fast Fourier transform based on CUDA); we address it through careful analysis of the underlying principles together with several algorithmic techniques. Restrictions: HP-ENUF is mainly oriented toward super-scale system simulations, where its performance advantage is fully displayed. For a small simulation system containing fewer than 10^6 particles, however, the multi-node mode has no apparent efficiency advantage over the single-node mode, and may even be less efficient because of network delay among the computer nodes. References: (1) S.-C. Yang, H.-J. Qian, Z.-Y. Lu, Appl. Comput. Harmon. Anal.
2016, http://dx.doi.org/10.1016/j.acha.2016.04.009. (2) S.-C. Yang, Y.-L. Wang, G.-S. Jiao, H.-J. Qian, Z.-Y. Lu, J. Comput. Chem. 37 (2016) 378. (3) S.-C. Yang, Y.-L. Zhu, H.-J. Qian, Z.-Y. Lu, Appl. Chem. Res. Chin. Univ., 2017, http://dx.doi.org/10.1007/s40242-016-6354-5. (4) Y.-L. Zhu, H. Liu, Z.-W. Li, H.-J. Qian, G. Milano, Z.-Y. Lu, J. Comput. Chem. 34 (2013) 2197.

  19. The effectiveness of a Supported Self-management task-shifting intervention for adult depression in Vietnam communities: study protocol for a randomized controlled trial.

    PubMed

    Murphy, Jill; Goldsmith, Charles H; Jones, Wayne; Oanh, Pham Thi; Nguyen, Vu Cong

    2017-05-05

    Depressive disorders are one of the leading causes of disease and disability worldwide. In Vietnam, although epidemiological evidence suggests that depression rates are on par with global averages, services for depression are very limited. In a feasibility study implemented from 2013 to 2015, we found that a Supported Self-management (SSM) intervention showed promising results for adults with depression in the community in Vietnam. This paper describes the Mental Health in Adults and Children: Frugal Innovations (MAC-FI) trial protocol that will assess the effectiveness of the SSM intervention, delivered by primary care and social workers, to community-based populations of adults with depression in eight Vietnamese provinces. The MAC-FI program will be assessed using a stepped-wedge, randomized controlled trial. Study participants are adults aged 18 years and over in eight provinces of Vietnam. Participants will be screened at primary care centres and in the community by health and social workers using the Self-reporting Questionnaire-20 (SRQ-20). Patients scoring >7, indicating depression caseness, will be invited to participate in the study in either the SSM intervention group or the enhanced treatment-as-usual control group. Recruited participants will be further assessed using the World Health Organization's Disability Assessment Scale (WHODAS 2.0) and the Cut-down, Annoyed, Guilty, Eye-opener (CAGE) Questionnaire for alcohol misuse. Intervention-group participants will receive the SSM intervention, delivered with the support of a social worker or social collaborator, for a period of 2 months. Control-group participants will receive treatment as usual and a leaflet with information about depression. SRQ-20, WHODAS 2.0, and CAGE scores will be taken by blinded outcome assessors at baseline, after 1 month and after 2 months. The primary analysis method will be intention-to-treat.
This study has the potential to add to the knowledge base about the effectiveness of a SSM intervention for adult depression that has been validated for the Vietnamese context. This trial will also contribute to the growing body of evidence about the effectiveness of low-cost, task-shifting interventions for use in low-resource settings, where specialist mental health services are often limited. Retrospectively registered at ClinicalTrials.gov, identifier: NCT03001063 . Registered on 20 December 2016.

  20. Effective atomic numbers of some tissue substitutes by different methods: A comparative study.

    PubMed

    Singh, Vishwanath P; Badiger, N M

    2014-01-01

    Effective atomic numbers of some human organ tissue substitutes, such as polyethylene terephthalate, red articulation wax, paraffin 1, paraffin 2, bolus, pitch, polyphenylene sulfide, polysulfone, polyvinylchloride, and modeling clay, have been calculated by four different methods: Auto-Zeff, direct, interpolation, and power law. The effective atomic numbers computed by the Auto-Zeff, direct, and interpolation methods were in good agreement in the intermediate energy region (0.1 MeV < E < 5 MeV), where the Compton interaction dominates. A large difference between the direct and Auto-Zeff methods was observed in the photoelectric and pair-production regions. Effective atomic numbers computed by the power law were found to be close to those of the direct method in the photoelectric absorption region. The Auto-Zeff, direct, and interpolation methods also agreed well for computation of effective atomic numbers in the intermediate energy region (100 keV < E < 10 MeV), while the direct method was found to be the appropriate method in the photoelectric region (10 keV < E < 100 keV). The tissue equivalence of the tissue substitutes can be represented by any of the effective-atomic-number computation methods considered in the present study. An accurate estimation of Rayleigh scattering is required to eliminate the effect of the molecular, chemical, or crystalline environment of the atom when estimating gamma interaction parameters.
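The power-law method referenced above is commonly written Zeff = (sum_i a_i * Z_i^m)^(1/m), with a_i the fraction of electrons contributed by element i and an exponent m of about 2.94 in the photoelectric-dominated region. A sketch for water, used here purely as an illustrative material (it is not one of the paper's substitutes):

```python
def zeff_power_law(composition, m=2.94):
    """Power-law effective atomic number: Zeff = (sum a_i * Z_i^m)^(1/m),
    where a_i is the fraction of electrons contributed by element i.
    `composition` maps atomic number Z -> atoms per molecule."""
    electrons = {Z: n * Z for Z, n in composition.items()}
    total = sum(electrons.values())
    s = sum((ne / total) * Z**m for Z, ne in electrons.items())
    return s ** (1.0 / m)

zeff_water = zeff_power_law({1: 2, 8: 1})  # H2O: two H (Z=1), one O (Z=8)
```

For water this gives roughly 7.4, close to the classic Mayneord value, while a single-element material recovers its own atomic number exactly.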

  2. The Shock and Vibration Digest. Volume 14, Number 11

    DTIC Science & Technology

    1982-11-01

    cooled reactor (HTGR) core under seismic excitation has been developed. N82-18644 The computer program can be used to predict the behavior (In... French) of the HTGR core under seismic excitation. Key Words: Computer programs, Modal analysis, Beams, Undamped structures. A computation method is... Dale and Cohen [22] extended the method of McMunn and Plunkett [20]... McMunn and Plunkett to continuous systems

  3. Semiannual report, 1 April - 30 September 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.

  4. Computational simulation of progressive fracture in fiber composites

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    Computational methods for simulating and predicting progressive fracture in fiber composite structures are presented. These methods are integrated into a computer code of modular form. The modules include composite mechanics, finite element analysis, and fracture criteria. The code is used to computationally simulate progressive fracture in composite laminates with and without defects. The simulation tracks the fracture progression in terms of modes initiating fracture, damage growth, and imminent global (catastrophic) laminate fracture.

  5. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a

  6. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
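The leapfrog construction for Richardson's method can be made concrete: since the error satisfies e_{k+1} = (I - wA) e_k, two steps combine into x_{k+2} = x_k + (2wI - w^2 A)(b - A x_k), so only even-numbered iterates are formed. A sketch on a small symmetric positive-definite system (illustrative, not from the paper):

```python
import numpy as np

def richardson(A, b, omega, iters):
    """Conventional Richardson iteration: x <- x + omega * (b - A x)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * (b - A @ x)
    return x

def richardson_leapfrog(A, b, omega, half_iters):
    """Leapfrog variant: advance two Richardson steps at once, so only
    even-numbered iterates are formed:
    x_{k+2} = x_k + (2w I - w^2 A)(b - A x_k)."""
    x = np.zeros_like(b)
    for _ in range(half_iters):
        r = b - A @ x
        x = x + 2.0 * omega * r - omega**2 * (A @ r)
    return x

# Small SPD system; omega lies inside the convergence interval (0, 2/lambda_max).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_conv = richardson(A, b, 0.2, 40)
x_leap = richardson_leapfrog(A, b, 0.2, 20)
```

Twenty leapfrog steps reproduce forty conventional steps, with one residual evaluation per double step; the grand-leap idea in the text pushes this further by jumping from the initial guess directly to the final iterate with precomputed parameters.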

  7. Multiple node remote messaging

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Ohmacht, Martin; Salapura, Valentina; Steinmacher-Burow, Burkhard; Vranas, Pavlos

    2010-08-31

    A method for passing remote messages in a parallel computer system formed as a network of interconnected compute nodes includes that a first compute node (A) sends a single remote message to a remote second compute node (B) in order to control the remote second compute node (B) to send at least one remote message. The method includes various steps including controlling a DMA engine at first compute node (A) to prepare the single remote message to include a first message descriptor and at least one remote message descriptor for controlling the remote second compute node (B) to send at least one remote message, including putting the first message descriptor into an injection FIFO at the first compute node (A) and sending the single remote message and the at least one remote message descriptor to the second compute node (B).

  8. Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.

    PubMed

    de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf

    2014-01-11

    With computational resources becoming more efficient, more powerful, and at the same time cheaper, computational methods have become increasingly popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be done routinely but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to accurately reproduce experimental findings, e.g., spectroscopic parameters and rate constants, or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.

  9. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support.

    PubMed

    van den Berg, Yvonne H M; Gommans, Rob

    2017-09-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.

  10. Paper-Based and Computer-Based Concept Mappings: The Effects on Computer Achievement, Computer Anxiety and Computer Attitude

    ERIC Educational Resources Information Center

    Erdogan, Yavuz

    2009-01-01

    The purpose of this paper is to compare the effects of paper-based and computer-based concept mappings on computer hardware achievement, computer anxiety and computer attitude of the eight grade secondary school students. The students were randomly allocated to three groups and were given instruction on computer hardware. The teaching methods used…

  11. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
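A heavily simplified sketch of the least-squares idea: if 3x3 matrices model what each eye perceives through its filter, the anaglyph pixel can be found by stacking both eyes into one overdetermined linear system. This linearized RGB version stands in for the paper's nonlinear CIE L*a*b* program, and the filter matrices are hypothetical:

```python
import numpy as np

# Hypothetical 3x3 matrices mapping a displayed anaglyph RGB value to the
# stimulus perceived through each filter (red left eye, cyan right eye).
C_L = np.diag([0.9, 0.05, 0.05])
C_R = np.diag([0.05, 0.9, 0.9])

def anaglyph_pixel(left_rgb, right_rgb):
    # Stack both eyes into one 6x3 overdetermined system and minimize
    # ||C_L p - left||^2 + ||C_R p - right||^2 in linear RGB space.
    M = np.vstack([C_L, C_R])
    t = np.concatenate([left_rgb, right_rgb])
    p, *_ = np.linalg.lstsq(M, t, rcond=None)
    return np.clip(p, 0.0, 1.0)

p = anaglyph_pixel(np.array([0.8, 0.2, 0.2]), np.array([0.2, 0.6, 0.6]))
```

The actual method replaces the Euclidean RGB objective with perceptual distances in CIE L*a*b*, which makes the per-pixel problem nonlinear.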

  12. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction mainly involves campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing: data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and resources can be accessed through many kinds of devices. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and applies cloud computing technology and methods to the construction of a college information sharing platform.

  13. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  14. Comparing Virtual and Physical Robotics Environments for Supporting Complex Systems and Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Wilensky, Uri

    2015-01-01

    Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…

  15. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper deals with the collective work of specialists in medicine and applied mathematics in computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  16. An Evaluation of the Effectiveness of a Computer-Assisted Reading Intervention

    ERIC Educational Resources Information Center

    Messer, David; Nash, Gilly

    2018-01-01

    Background: A cost-effective method to address reading delays is to use computer-assisted learning, but these techniques are not always effective. Methods: We evaluated a commercially available computer system that uses visual mnemonics, in a randomised controlled trial with 78 English-speaking children (mean age 7 years) who their schools…

  17. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…

  18. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee secure information exchange between internal and external networks (trusted and untrusted), a non-contact code-reading method based on machine vision is proposed, in contrast to existing physical network isolation methods. Using computer monitors, a camera, and other equipment, the information to be exchanged is processed through image encoding, generation of a standard image, display and capture of the actual image, computation of a homography matrix, image distortion correction, and decoding after calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s is achieved. The experiments show that the algorithm offers high security, fast speed, and low information loss. It can meet the daily needs of confidentiality departments to update data effectively and reliably, and it solves the difficulty of exchanging computer information between classified and unclassified networks, with distinctive originality, practicality, and research value.
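The homography step in the pipeline above can be sketched with a plain Direct Linear Transform (DLT); this is a generic reconstruction of that standard step, not the authors' implementation, and the test points below are invented:

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: estimate H (3x3, dst ~ H src) from four
    # or more point correspondences via the SVD null space of A h = 0.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    # Apply a homography to a 2-D point in homogeneous coordinates.
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

In the described system the four correspondences would come from detecting the corners of the displayed code image in the camera frame; warping with the inverse of H then undoes the perspective distortion before decoding.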

  19. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). Simulations with quadrature set 8 and the first order of Legendre polynomial expansion proved to be the most efficient computation method in the authors' study. The single-thread computation time of this deterministic simulation was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.

  20. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The atmospheric radiation field has seen the development of more accurate and faster methods to take absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method to compute cooling rates. Thanks to high performance computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating emissivity functions and the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared to the correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm. One-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method. Then, the three-dimensional RTE under the grey-medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature, and gas concentrations.
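The grey-medium step can be illustrated by inverting the grey-gas emissivity relation eps = 1 - exp(-k*L) for a local linear absorption coefficient k. This sketch shows only that inversion, with invented numbers; it does not reproduce Sasamori's fitted broadband functions:

```python
import numpy as np

def grey_absorption_coefficient(emissivity, path_length):
    # Invert the grey-gas relation  eps = 1 - exp(-k * L)  for k.
    return -np.log(1.0 - emissivity) / path_length

# Illustrative numbers only: broadband emissivity 0.6 over a 100 m path.
k = grey_absorption_coefficient(0.6, 100.0)
```

Once a single k per cell is available, the spectral integration disappears and the three-dimensional RTE can be solved as a grey problem with the DOM.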

  1. Research data collection methods: from paper to tablet computers.

    PubMed

    Wilcox, Adam B; Gallagher, Kathleen D; Boden-Albala, Bernadette; Bakken, Suzanne R

    2012-07-01

    Primary data collection is a critical activity in clinical research. Even with significant advances in technical capabilities, clear benefits of use, and even user preferences for using electronic systems for collecting primary data, paper-based data collection is still common in clinical research settings. However, with recent developments in both clinical research and tablet computer technology, the comparative advantages and disadvantages of data collection methods should be determined. To describe case studies using multiple methods of data collection, including next-generation tablets, and consider their various advantages and disadvantages. We reviewed 5 modern case studies using primary data collection, using methods ranging from paper to next-generation tablet computers. We performed semistructured telephone interviews with each project, which considered factors relevant to data collection. We address specific issues with workflow, implementation and security for these different methods, and identify differences in implementation that led to different technology considerations for each case study. There remain multiple methods for primary data collection, each with its own strengths and weaknesses. Two recent methods are electronic health record templates and next-generation tablet computers. Electronic health record templates can link data directly to medical records, but are notably difficult to use. Current tablet computers are substantially different from previous technologies with regard to user familiarity and software cost. The use of cloud-based storage for tablet computers, however, creates a specific challenge for clinical research that must be considered but can be overcome.

  2. Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems

    NASA Astrophysics Data System (ADS)

    Endo, Eishin; Toga, Yuta; Sasaki, Munetaka

    2015-07-01

    We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte Carlo calculation in the SCO method. Such a subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased about 10² times by parallel computation with 288 processors.
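The sublattice subdivision rests on vertex coloring: vertices sharing a color form an independent set with no interaction bond between them, so each color class can be updated concurrently. A minimal sequential greedy coloring, shown as a stand-in for the parallel Kuhn-Wattenhofer algorithm the paper uses (the toy graph is invented):

```python
def greedy_vertex_coloring(adj):
    # Sequential greedy coloring: vertices sharing a color form an
    # independent set, so each color class is a sublattice whose
    # sites can be updated concurrently in the Monte Carlo sweep.
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Invented toy interaction graph (bonds surviving the SCO elimination).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = greedy_vertex_coloring(adj)
```

Greedy coloring uses at most (maximum degree + 1) colors; fewer colors mean larger sublattices and hence more work per parallel phase.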

  3. Estimation of relative free energies of binding using pre-computed ensembles based on the single-step free energy perturbation and the site-identification by Ligand competitive saturation approaches.

    PubMed

    Raman, E Prabhu; Lakkaraju, Sirish Kaushik; Denny, Rajiah Aldrin; MacKerell, Alexander D

    2017-06-05

    Accurate and rapid estimation of relative binding affinities of ligand-protein complexes is a requirement of computational methods for their effective use in rational ligand design. Of the approaches commonly used, free energy perturbation (FEP) methods are considered one of the most accurate, although they require significant computational resources. Accordingly, it is desirable to have alternative methods of similar accuracy but greater computational efficiency to facilitate ligand design. In the present study relative free energies of binding are estimated for one or two non-hydrogen atom changes in compounds targeting the proteins ACK1 and p38 MAP kinase using three methods. The methods include standard FEP, single-step free energy perturbation (SSFEP) and the site-identification by ligand competitive saturation (SILCS) ligand grid free energy (LGFE) approach. Results show the SSFEP and SILCS LGFE methods to be competitive with or better than the FEP results for the studied systems, with SILCS LGFE giving the best agreement with experimental results. This is supported by additional comparisons with published FEP data on p38 MAP kinase inhibitors. While both the SSFEP and SILCS LGFE approaches require a significant upfront computational investment, they offer a 1000-fold computational savings over FEP for calculating the relative affinities of ligand modifications once those pre-computations are complete. An illustrative example of the potential application of these methods in the context of screening large numbers of transformations is presented. Thus, the SSFEP and SILCS LGFE approaches represent viable alternatives for actively driving ligand design during drug discovery and development. © 2016 Wiley Periodicals, Inc.

  4. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  5. The Calculation of Potential Energy Curves of Diatomic Molecules: The RKR Method.

    ERIC Educational Resources Information Center

    Castano, F.; And Others

    1983-01-01

    The RKR method for determining accurate potential energy curves is described. Advantages of using the method (compared to the Morse procedure) and a TRS-80 computer program which calculates the classical turning points by an RKR method are also described. The computer program is available from the author upon request. (Author/JN)

  6. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    ERIC Educational Resources Information Center

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…

  7. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  8. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  9. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum level for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smooth particle hydrodynamics, exploiting their unifying concepts such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  10. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
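A toy Monte Carlo hazard computation in the spirit described above: sample magnitudes from a truncated Gutenberg-Richter law, map them to runup with an assumed scaling plus lognormal scatter, and count annual exceedances. Every constant here is illustrative, none is taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hazard sketch (all constants invented): sample event magnitudes
# from a truncated Gutenberg-Richter distribution, map each to a runup
# height with an assumed power-law scaling plus lognormal scatter, then
# build an annual exceedance curve over runup thresholds.
n_years = 100_000
annual_rate = 0.02                      # assumed tsunamigenic events/year
n_events = rng.poisson(annual_rate * n_years)

b, m_min, m_max = 1.0, 7.0, 9.0
u = rng.random(n_events)
# Inverse-CDF sampling of the truncated Gutenberg-Richter law.
mags = m_min - np.log10(1.0 - u * (1.0 - 10.0 ** (-b * (m_max - m_min)))) / b

# Assumed runup scaling with lognormal aleatory scatter.
runup = 10.0 ** (0.5 * (mags - m_min)) * rng.lognormal(0.0, 0.3, n_events)

thresholds = np.linspace(0.5, 10.0, 20)
exceed_rate = np.array([(runup > h).sum() / n_years for h in thresholds])
```

The resulting exceedance curve is the toy analogue of a site-specific tsunami hazard curve; a real PTHA replaces the assumed scaling with numerical propagation models per source.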

  11. A variational multiscale method for particle-cloud tracking in turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Takizawa, K.; Tezduyar, T. E.; Venturini, P.

    2014-11-01

    We present a computational method for simulation of particle-laden flows in turbomachinery. The method is based on a stabilized finite element fluid mechanics formulation and a finite element particle-cloud tracking method. We focus on induced-draft fans used in process industries to extract exhaust gases in the form of a two-phase fluid with a dispersed solid phase. The particle-laden flow causes material wear on the fan blades, degrading their aerodynamic performance, and therefore accurate simulation of the flow would be essential in reliable computational turbomachinery analysis and design. The turbulent-flow nature of the problem is dealt with a Reynolds-Averaged Navier-Stokes model and Streamline-Upwind/Petrov-Galerkin/Pressure-Stabilizing/Petrov-Galerkin stabilization, the particle-cloud trajectories are calculated based on the flow field and closure models for the turbulence-particle interaction, and one-way dependence is assumed between the flow field and particle dynamics. We propose a closure model utilizing the scale separation feature of the variational multiscale method, and compare that to the closure utilizing the eddy viscosity model. We present computations for axial- and centrifugal-fan configurations, and compare the computed data to those obtained from experiments, analytical approaches, and other computational methods.

  12. Rapid high performance liquid chromatography method development with high prediction accuracy, using 5 cm long narrow-bore columns packed with sub-2 μm particles and Design Space computer modeling.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin

    2009-11-06

    Many different strategies for reversed-phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm × 2.1 mm columns packed with sub-2 μm particles and computer simulation (DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported. An acceptable accuracy for the predictions of the computer models is presented. This work illustrates a method development strategy focusing on a time reduction of up to a factor of 3-5 compared to conventional HPLC method development, and exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8(R1). Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.

  13. Geoid undulation computations at laser tracking stations

    NASA Technical Reports Server (NTRS)

    Despotakis, Vasilios K.

    1987-01-01

    Geoid undulation computations were performed at 29 laser stations distributed around the world using a combination of terrestrial gravity data within a cap of radius 2 deg and a potential coefficient set up to 180 deg. The traditional methods of Stokes' and Meissl's modification together with the Molodenskii method and the modified Sjoberg method were applied. Performing numerical tests based on global error assumptions regarding the terrestrial data and the geopotential set it was concluded that the modified Sjoberg method is the most accurate and promising technique for geoid undulation computations. The numerical computations for the geoid undulations using all the four methods resulted in agreement with the ellipsoidal minus orthometric value of the undulations on the order of 60 cm or better for most of the laser stations in the eastern United States, Australia, Japan, Bermuda, and Europe. A systematic discrepancy of about 2 meters for most of the western United States stations was detected and verified by using two relatively independent data sets. For oceanic laser stations in the western Atlantic and Pacific oceans that have no terrestrial data available, the adjusted GEOS-3 and SEASAT altimeter data were used for the computation of the geoid undulation in a collocation method.

  14. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  15. Incorporating the gas analyzer response time in gas exchange computations.

    PubMed

    Mitchell, R R

    1979-11-01

    A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference-equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production, and it avoids the numerical differentiation otherwise required to correct the gas fraction waveforms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation of the gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
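The correction described here can be sketched with a first-order lag model of the analyzer written as a difference equation. The sketch below is illustrative only, not the paper's code; the time constant, sample interval, and gas fractions are assumed values:

```python
import numpy as np

# Illustrative sketch: a gas analyzer modeled as a first-order lag, written
# as a difference equation, and its exact algebraic inversion.  tau (analyzer
# time constant) and dt (sample interval) are assumed values.
tau, dt = 0.1, 0.01          # seconds
alpha = dt / tau

# True gas fraction: a step change, e.g. O2 fraction dropping mid-breath.
n = 200
x_true = np.full(n, 0.21)
x_true[100:] = 0.16

# Forward model: the analyzer output lags the true signal.
y = np.empty(n)
y[0] = x_true[0]
for k in range(1, n):
    y[k] = y[k - 1] + alpha * (x_true[k] - y[k - 1])

# Inversion via the same difference equation -- no numerical differentiation
# of the waveform, just a rearrangement of the recursion:
#   x[k] = y[k-1] + (tau/dt) * (y[k] - y[k-1])
x_rec = np.empty(n)
x_rec[0] = y[0]
x_rec[1:] = y[:-1] + (1.0 / alpha) * np.diff(y)

print(np.max(np.abs(x_rec - x_true)))  # near machine precision
```

Because the inversion is an exact rearrangement of the forward recursion, the step is recovered without amplifying a differentiated waveform, which is the practical point of embedding the analyzer model in the computation.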

  16. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  17. Subsonic and Supersonic Jet Noise Calculations Using PSE and DNS

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Owis, Farouk

    1999-01-01

    Noise radiated from a supersonic jet is computed using the Parabolized Stability Equations (PSE) method. The evolution of the instability waves inside the jet is computed using the PSE method, and the noise radiated to the far field from these waves is calculated by solving the wave equation using the Fourier transform method. We performed the computations for a cold supersonic jet of Mach number 2.1 excited by disturbances with Strouhal numbers St = 0.2 and 0.4 and azimuthal wavenumber m = 1. Good agreement in the sound pressure level is observed between the computed and the measured (Troutt and McLaughlin 1980) results.

  18. A Geometrical Error in Some Computer Programs Based on the Aki-Christofferson-Husebye (ACH) Method of Teleseismic Tomography

    USGS Publications Warehouse

    Julian, B.R.; Evans, J.R.; Pritchard, M.J.; Foulger, G.R.

    2000-01-01

    Some computer programs based on the Aki-Christofferson-Husebye (ACH) method of teleseismic tomography contain an error caused by identifying local grid directions with azimuths on the spherical Earth. This error, which is most severe at high latitudes, introduces systematic errors into computed ray paths and distorts inferred Earth models. It is best dealt with by explicitly correcting for the difference between true and grid directions. Methods for computing these directions are presented in this article and are likely to be useful in many other kinds of regional geophysical studies that use Cartesian coordinates and flat-earth approximations.
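The difference between true and grid north can be sketched with the standard small-angle approximation for meridian convergence. This is a generic first-order formula (as used, e.g., for transverse-Mercator-style grids), not necessarily the exact correction derived in the article:

```python
import math

# Illustrative sketch: first-order meridian ("grid") convergence for a local
# Cartesian grid whose y-axis points to true north at reference longitude
# lon0.  gamma ~ (lon - lon0) * sin(lat) is a standard small-angle
# approximation; the article's exact formulas may differ.
def grid_convergence_deg(lat_deg, lon_deg, lon0_deg):
    dlon = math.radians(lon_deg - lon0_deg)
    return math.degrees(dlon * math.sin(math.radians(lat_deg)))

def true_to_grid_azimuth(true_az_deg, lat_deg, lon_deg, lon0_deg):
    # Subtract the convergence so ray directions stay consistent on the grid.
    return (true_az_deg - grid_convergence_deg(lat_deg, lon_deg, lon0_deg)) % 360.0

# At high latitude the correction is far from negligible:
print(grid_convergence_deg(60.0, 10.0, 0.0))  # ~8.66 degrees
```

The example shows why the error is "most severe in high latitudes": 10 degrees of longitude offset at 60 N already rotates grid directions by almost 9 degrees.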

  19. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. 
We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization, using virtual reality techniques, of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification, and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  20. Computational Methodologies for Real-Space Structural Refinement of Large Macromolecular Complexes

    PubMed Central

    Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus

    2017-01-01

    The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875

  1. Parameter-Free Computational Characterization of Defects in Transition Metal Oxides with Diffusion Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Santana, Juan A.; Krogel, Jaron T.; Kent, Paul R.; Reboredo, Fernando

    Materials based on transition metal oxides (TMOs) are among the most challenging systems for computational characterization. Reliable and practical computations are possible by directly solving the many-body problem for TMOs with quantum Monte Carlo (QMC) methods. These methods are very computationally intensive, but recent developments in algorithms and computational infrastructure have enabled their application to real materials. We will show our efforts in applying the diffusion quantum Monte Carlo (DMC) method to study the formation of defects in binary and ternary TMOs and in TMO heterostructures. We will also outline current limitations in hardware and algorithms. This work is supported by the Materials Sciences & Engineering Division of the Office of Basic Energy Sciences, U.S. Department of Energy (DOE).

  2. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.

  3. System and method for controlling power consumption in a computer system based on user satisfaction

    DOEpatents

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.

  4. Computational methods in drug discovery

    PubMed Central

    Leelananda, Sumudu P

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341

  5. Computational methods in drug discovery.

    PubMed

    Leelananda, Sumudu P; Lindert, Steffen

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  6. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  7. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE PAGES

    Huang, Hongying; Chen, Zheng; Li, Jin; ...

    2016-08-23

    In this study, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  8. pK(A) in proteins solving the Poisson-Boltzmann equation with finite elements.

    PubMed

    Sakalli, Ilkay; Knapp, Ernst-Walter

    2015-11-05

    Knowledge of pK(A) values is essential for understanding the function of proteins in living systems. We present a novel approach demonstrating that the finite element (FE) method of solving the linearized Poisson-Boltzmann equation (lPBE) can successfully be used to compute pK(A) values in proteins with high accuracy, as a possible replacement for the finite difference (FD) method. For this purpose, we implemented the software molecular Finite Element Solver (mFES) in the framework of the Karlsberg+ program to compute pK(A) values. This work focuses on a comparison between pK(A) computations obtained with the well-established FD method and with the newly developed FE method mFES, solving the lPBE using protein crystal structures without conformational changes. Accurate and coarse model systems are set up with mFES using a similar number of unknowns compared with the FD method. Our FE method delivers results for computations of pK(A) values and interaction energies of titratable groups that are comparable in accuracy. We introduce different thermodynamic cycles to evaluate pK(A) values, and we show for the FE method how different parameters influence the accuracy of the computed pK(A) values. © 2015 Wiley Periodicals, Inc.
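The thermodynamic-cycle bookkeeping behind such pK(A) calculations can be illustrated with the standard relation between an electrostatic free-energy shift and a pK(A) shift. The numbers below are made up for illustration and are not from the paper:

```python
import math

# Hedged illustration of the standard thermodynamic-cycle relation used in
# continuum-electrostatics pKa calculations: the protein pKa equals the
# model-compound pKa shifted by the difference in deprotonation free energy
# (kcal/mol) between protein and model environments.
R_KCAL = 0.0019872  # gas constant, kcal/(mol*K)

def pka_in_protein(pka_model, ddG_kcal, T=300.0):
    # pKa_protein = pKa_model + ddG / (ln(10) * R * T)
    return pka_model + ddG_kcal / (math.log(10.0) * R_KCAL * T)

# Example with assumed numbers: a glutamate (model pKa ~4.25) whose
# protonated form is destabilized by 1.37 kcal/mol in the protein shifts
# down by about one pKa unit.
print(round(pka_in_protein(4.25, -1.37), 2))
```

At 300 K the conversion factor ln(10)·RT is roughly 1.37 kcal/mol per pKa unit, which is why sub-kcal accuracy in the PB solver matters for reliable pK(A) predictions.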

  9. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hongying; Chen, Zheng; Li, Jin

    In this study, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  10. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representation, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is then obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can fit any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
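The first step of the two-step method, bisecting the data until each segment fits within a predetermined allowable error, can be sketched as follows. This is an illustrative reconstruction only: segment fits use low-order polynomials instead of B-splines to stay self-contained, and `tol` is an assumed allowable error:

```python
import numpy as np

# Illustrative sketch of the coarse-knot (bisection) step only; not the
# authors' code.  Each segment is accepted when a low-order polynomial fit
# stays within an assumed tolerance, otherwise the segment is bisected.
def coarse_knots(x, y, tol=1e-3, deg=2):
    """Recursively bisect [x[0], x[-1]] until each segment fits within tol."""
    def fit_err(lo, hi):
        c = np.polyfit(x[lo:hi], y[lo:hi], deg)
        return np.max(np.abs(np.polyval(c, x[lo:hi]) - y[lo:hi]))

    def split(lo, hi):
        if hi - lo <= deg + 1 or fit_err(lo, hi) <= tol:
            return [x[lo]]
        mid = (lo + hi) // 2
        return split(lo, mid) + split(mid, hi)

    return split(0, len(x)) + [x[-1]]

# A curve with a cusp at x = 0: the bisection places a knot right at it.
x = np.linspace(-1.0, 1.0, 201)
y = np.abs(x)
knots = coarse_knots(x, y)
print(knots)
```

For this cusp example the recursion stops after one split, yielding coarse knots at the endpoints and at the cusp; the paper's second step would then refine knot locations and continuity levels by non-linear least squares.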

  11. Oxygen Distributions—Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network

    PubMed Central

    Bernhardt, Peter

    2016-01-01

    Purpose To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These are used to evaluate two different methods for computing oxygen distribution. Methods A vessel tree structure, and an associated tumour of 127 cm3, were generated using a stochastic method and Bresenham’s line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green’s function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. Results The oxygen distributions of the same tissue samples computed with the different methods were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase at lower oxygen values, with the result that the ITM severely underestimates the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby of radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be performed at high resolution with the CTM applied to the entire tumour. PMID:27861529

  12. Static aeroelastic analysis and tailoring of a single-element racing car wing

    NASA Astrophysics Data System (ADS)

    Sadd, Christopher James

    This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid, and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity, and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-averaged Navier-Stokes CFD analysis method with a finite element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and finite element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation, and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs, to increase downforce and to reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts a downforce of C_l = -1.377 for the downforce-increasing wing, in comparison to C_l = -1.265 for the original wing, and a drag of C_d = 0.115 for the drag-reducing wing, in comparison to C_d = 0.143 for the original wing.

  13. Computation of the acoustic radiation force using the finite-difference time-domain method.

    PubMed

    Cai, Feiyan; Meng, Long; Jiang, Chunxiang; Pan, Yu; Zheng, Hairong

    2010-10-01

    The computational details of calculating the acoustic radiation force on an object using a 2-D grid finite-difference time-domain (FDTD) method are presented. The method is based on propagating the stress and velocity fields through the grid and determining the energy flow with and without the object. The axial and radial acoustic radiation forces predicted by the FDTD method are in excellent agreement with the results obtained by analytical evaluation of the scattering method. In particular, the results indicate that it is possible to trap the steel cylinder in the radial direction by optimizing the width of the Gaussian source and the operating frequency. Since the sizes of the relevant objects are smaller than or comparable to the wavelength, the algorithm presented here can easily be extended to 3-D and to include torque computation, thus providing a highly flexible and universally usable computation engine.
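The pressure/velocity leapfrog update that underlies such acoustic FDTD schemes can be illustrated in one dimension. This is a generic 1-D analogue of the field-propagation step only (the paper's 2-D grid, sources, and energy-flow post-processing are omitted), with all parameters assumed:

```python
import numpy as np

# Minimal 1-D acoustic FDTD sketch: staggered pressure/velocity leapfrog.
# Units chosen so density = sound speed = 1; dt = dx gives CFL = 1, at which
# the 1-D scheme propagates a correctly initialized wave exactly.
nx, nt = 400, 200
dx = dt = 1.0
xs = np.arange(nx) * dx

# A right-going Gaussian pulse: for unit impedance, u = p on the wave.
f = lambda s: np.exp(-((s - 80.0) / 10.0) ** 2)
p = f(xs)                                   # pressure at t = 0, integer points
xu = (np.arange(nx + 1) - 0.5) * dx         # velocity at half grid points
u = f(xu - 0.5 * dt)                        # velocity at t = dt/2 (staggered)

for _ in range(nt):
    # velocity update from the pressure gradient
    u[1:-1] -= (dt / dx) * (p[1:] - p[:-1])
    # pressure update from the velocity divergence
    p -= (dt / dx) * (u[1:] - u[:-1])

# The pulse peak has advanced nt cells to the right (from cell 80).
print(int(np.argmax(p)))
```

Propagating the fields this way, then integrating the momentum flux over a surface with and without the scatterer, is the general pattern behind FDTD radiation-force calculations.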

  14. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  15. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  16. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. Realistic simulations of such geodynamical problems with particle-based methods require millions to billions of particles. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods that utilizes dynamic load-balancing algorithms, targeting high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time among MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. To perform flexible domain decomposition in space, the slice-grid algorithm is used. 
Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
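The idea of moving slice boundaries so that each rank carries an equal share of the work can be sketched in one dimension. This is a simplified analogue, not the authors' algorithm: it places cuts at equal-cost quantiles in a single pass, whereas the abstract describes a Newton-like iteration driven by measured per-process execution times. All numbers below are assumed:

```python
import numpy as np

# Illustrative 1-D analogue of slice-grid load balancing: slice boundaries
# along x are placed so each rank owns roughly equal total particle cost.
# Per-particle "cost" stands in for measured per-rank execution time.
def balance_slices(x, cost, nranks):
    order = np.argsort(x)
    csum = np.cumsum(cost[order])
    targets = csum[-1] * np.arange(1, nranks) / nranks
    cut_idx = np.searchsorted(csum, targets)
    return x[order][cut_idx]              # interior slice boundaries

rng = np.random.default_rng(0)
x = rng.random(100_000)                   # particle positions in [0, 1)
cost = np.where(x < 0.2, 5.0, 1.0)        # e.g. boundary particles cost 5x

bounds = balance_slices(x, cost, 4)
edges = np.concatenate(([0.0], bounds, [1.0]))
loads = [cost[(x >= a) & (x < b)].sum() for a, b in zip(edges[:-1], edges[1:])]
print(max(loads) / np.mean(loads))        # close to 1: well balanced
```

Note how the expensive region near x = 0 ends up split across narrow slices while the cheap region gets one wide slice; in the authors' method the analogous boundary motion is recomputed dynamically as measured workloads drift.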

  17. Numerical Grid Generation and Potential Airfoil Analysis and Design

    DTIC Science & Technology

    1988-01-01

    Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values...preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute I in a Jacobi method, we have already...Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR...
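
    The distinction the excerpt draws can be made concrete with a minimal NumPy sketch: a Jacobi step uses only old values of the iterate, while a Gauss-Seidel step uses each new component as soon as it is computed. The small diagonally dominant system below is a hypothetical example where both iterations converge:

```python
import numpy as np

def jacobi_step(A, b, x):
    # every new component is computed entirely from OLD values of x
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    # new components are used as soon as they are computed
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# diagonally dominant [A], so both iterations converge
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x_j = x_gs = np.zeros(2)
for _ in range(50):
    x_j = jacobi_step(A, b, x_j)
    x_gs = gauss_seidel_step(A, b, x_gs)
print(x_j, x_gs, np.linalg.solve(A, b))  # all three should agree
```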

  18. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
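
    The patent text is high level, and its serial-correlation removal and probability ratio test are not reproduced here, but the core statistic is easy to illustrate: a Mahalanobis distance scores a new sensor vector against a correlated baseline, flagging readings that break the correlation structure even when each individual sensor value looks normal on its own. A sketch with hypothetical two-sensor data:

```python
import numpy as np

# hypothetical baseline of correlated sensor readings (rows = samples)
rng = np.random.default_rng(0)
baseline = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=500)

mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(x):
    # squared Mahalanobis distance of reading x from the baseline cloud
    d = x - mean
    return float(d @ cov_inv @ d)

# a reading that violates the correlation (2, -2) scores far higher than
# one that follows it (2, 2), even though the per-sensor magnitudes match
print(mahalanobis_sq(np.array([2.0, -2.0])), mahalanobis_sq(np.array([2.0, 2.0])))
```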

  19. Computational Electromagnetic Modeling of SansEC(Trade Mark) Sensors

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.

    2011-01-01

    This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.

  20. Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics

    DTIC Science & Technology

    2013-12-01

    SST-VMST." The structural mechanics computations are based on the Kirchhoff-Love shell model. We use a sequential coupling technique, which is applicable to some classes of FSI...we use the ST-VMS method in combination with the ST-SUPS method.

  1. Computation of type curves for flow to partially penetrating wells in water-table aquifers

    USGS Publications Warehouse

    Moench, Allen F.

    1993-01-01

    Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
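
    Neuman's transform itself is not reproduced here, and the abstract does not say which inversion scheme WTAQ1 uses, but the numerical Laplace-inversion step can be illustrated generically with the Gaver-Stehfest algorithm, a standard method when the transform is cheap to evaluate at real values of p. The sketch below checks it against a known transform pair:

```python
import math

def stehfest_weights(N):
    # Gaver-Stehfest coefficients V_k for even N
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    # approximate f(t) from its Laplace transform F(p), sampled at real p
    V = stehfest_weights(N)
    ln2t = math.log(2.0) / t
    return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))

# known pair: F(p) = 1/(p+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda p: 1.0 / (p + 1.0), t=1.0)
print(approx, math.exp(-1.0))  # the two should agree to several digits
```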

  2. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.

  3. New algorithms to compute the nearness symmetric solution of the matrix equation.

    PubMed

    Peng, Zhen-Yun; Fang, Yang-Zhi; Xiao, Xian-Wei; Du, Dan-Dan

    2016-01-01

    In this paper we consider the nearness symmetric solution of the matrix equation AXB = C to a given matrix [Formula: see text] in the sense of the Frobenius norm. By discussing an equivalent form of the considered problem, we derive some necessary and sufficient conditions under which the matrix [Formula: see text] is a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we propose two iterative methods to compute the solution of the considered problem, and analyze the global convergence of the proposed algorithms. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (Appl Math Comput 160:763-777, 2005) and Peng (Int J Comput Math 87:1820-1830, 2010).
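
    The paper's alternating-minimization iterations are not reproduced here, but for small dense problems the closely related least-squares symmetric solution of AXB = C can be sketched directly by expanding X in a basis of symmetric matrices and solving one linear least-squares problem:

```python
import numpy as np

def sym_lstsq_AXB(A, B, C):
    # least-squares symmetric X for AXB ≈ C, by expanding X over the
    # basis E_ij (symmetric elementary matrices) and solving for the
    # coefficients with an ordinary least-squares solve
    n = A.shape[1]
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            basis.append((A @ E @ B).ravel())
    M = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(M, C.ravel(), rcond=None)
    X = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            X[i, j] = X[j, i] = coef[k]
            k += 1
    return X

# hypothetical test case with a known symmetric solution
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.eye(2)
X_true = np.array([[1.0, 0.5], [0.5, 2.0]])
X = sym_lstsq_AXB(A, B, A @ X_true @ B)
print(X)  # should recover X_true
```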

  4. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in C computer language based on the high-level user-interface Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the χ2 criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
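
    The exponential stripping (curve peeling) step mentioned for initial parameter estimation can be sketched for a biexponential model: fit the slow phase from late time points on a log scale, subtract it, then fit the fast phase from the residual. The data and the phase cutoffs below are hypothetical:

```python
import numpy as np

# hypothetical biexponential concentration curve C(t) = A e^{-a t} + B e^{-b t}
t = np.linspace(0.0, 10.0, 50)
C = 5.0 * np.exp(-1.5 * t) + 2.0 * np.exp(-0.2 * t)

late = t > 5.0                      # slow phase dominates at late times
slope, intercept = np.polyfit(t[late], np.log(C[late]), 1)
B, b = np.exp(intercept), -slope    # strip the slow exponential first

resid = C - B * np.exp(-b * t)
early = (t < 1.5) & (resid > 0)     # fast phase lives at early times
slope2, intercept2 = np.polyfit(t[early], np.log(resid[early]), 1)
A, a = np.exp(intercept2), -slope2

print(A, a, B, b)  # rough initial estimates near (5, 1.5, 2, 0.2)
```

    In a program like the one described, such stripped estimates would only seed the subsequent nonlinear (e.g. Levenberg-Marquardt) fit, which refines all four parameters jointly.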

  5. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solution obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vastly large.
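
    The hybrid idea (a computationally fast global stage followed by local refinement) can be sketched on a toy two-parameter decay model; the model, parameter bounds, and search settings below are all hypothetical, not taken from the study:

```python
import numpy as np

def loss(theta, t, y):
    # hypothetical two-parameter decay model: y ≈ a * exp(-k t)
    a, k = theta
    return float(np.sum((a * np.exp(-k * t) - y) ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 20)
y = 2.0 * np.exp(-0.7 * t)          # synthetic data, true (a, k) = (2, 0.7)

# stage 1: cheap global stage -- random sampling of the search space
cands = rng.uniform([0.1, 0.1], [5.0, 2.0], size=(200, 2))
best = min(cands, key=lambda th: loss(th, t, y))

# stage 2: local refinement by a shrinking-step compass (pattern) search
step = np.array([0.5, 0.5])
for _ in range(40):
    improved = True
    while improved:
        improved = False
        for i in range(2):
            for d in (-step[i], step[i]):
                trial = best.copy()
                trial[i] += d
                if loss(trial, t, y) < loss(best, t, y):
                    best, improved = trial, True
    step *= 0.7
print(best)  # should end up close to the true parameters (2.0, 0.7)
```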

  6. Climate variability and the European agricultural production

    NASA Astrophysics Data System (ADS)

    Guimarães Nobre, Gabriela; Hunink, Johannes E.; Baruth, Bettina; Aerts, Jeroen C. J. H.; Ward, Philip J.

    2017-04-01

    By 2050, the global demand for maize, wheat and other major crops is expected to grow sharply. To meet this challenge, agricultural systems will have to increase their production substantially. However, the expanding world population, coupled with a decline in arable land per person and the variability of the global climate, are obstacles to meeting the increasing demand. Creating a resilient agricultural system requires incorporating preparedness measures against weather-related events, which can trigger disruptive risks such as droughts. This study examines the influence of large-scale climate variability on agricultural production, applying a robust decision-making tool named fast-and-frugal trees (FFT). We created FFTs using a dataset of crop production and indices of climate variability: the Southern Oscillation Index (SOI) and the North Atlantic Oscillation (NAO). Our main goal is to predict the occurrence of below-average crop production, using these two indices at different lead times. Initial results indicated that SOI and NAO have strong links with low sugar beet production in Europe. For some areas, the FFTs were able to detect below-average productivity events six months before harvesting, with a hit rate and positive predictive value higher than 70%. We found that shorter lead times, such as three months before harvesting, have the highest predictive skill. Additionally, we observed that the responses of low production events to the phases of the NAO and SOI vary spatially and seasonally. By clarifying the relationship between large-scale climate variability and drought-related agricultural impacts in Europe, this study reflects on how this information could potentially improve the management of the agricultural sector by coupling the findings with a seasonal crop production forecasting system.
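
    A fast-and-frugal tree is simply an ordered list of cues with an exit (a final decision) at every level. The cues and thresholds below are hypothetical stand-ins, loosely echoing the SOI/NAO predictors in the abstract, not values from the study:

```python
# A fast-and-frugal tree checks one cue at a time and allows an exit
# at every level; only the last cue splits both ways.
def fft_low_yield(soi, nao):
    if soi < -1.0:        # strong negative SOI -> predict a shortfall
        return "below-average"
    if nao > 1.0:         # strong positive NAO -> predict normal yield
        return "normal"
    # final cue: remaining cases decided by a simple combined signal
    return "below-average" if soi + nao < 0 else "normal"

print(fft_low_yield(soi=-1.5, nao=0.2), fft_low_yield(soi=0.5, nao=1.5))
```

    Because each branch exits after at most a handful of comparisons, such trees stay transparent to decision makers while still exploiting the predictive cues.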

  7. Best Bang for the Buck: Part 1 – The Size of Experiments Relative to Design Performance

    DOE PAGES

    Anderson-Cook, Christine Michaela; Lu, Lu

    2016-10-01

    There are many choices to make, when designing an experiment for a study, such as: what design factors to consider, which levels of the factors to use and which model to focus on. One aspect of design, however, is often left unquestioned: the size of the experiment. When learning about design of experiments, problems are often posed as "select a design for a particular objective with N runs." It’s tempting to consider the design size as a given constraint in the design-selection process. If you think of learning through designed experiments as a sequential process, however, strategically planning for the use of resources at different stages of data collection can be beneficial: Saving experimental runs for later is advantageous if you can efficiently learn with less in the early stages. Alternatively, if you’re too frugal in the early stages, you might not learn enough to proceed confidently with the next stages. Therefore, choosing the right-sized experiment is important—not too large or too small, but with a thoughtful balance to maximize the knowledge gained given the available resources. It can be a great advantage to think about the design size as flexible and include it as an aspect for comparisons. Sometimes you’re asked to provide a small design that is too ambitious for the goals of the study. Finally, if you can show quantitatively how the suggested design size might be inadequate or lead to problems during analysis—and also offer a formal comparison to some alternatives of different (likely larger) sizes—you may have a better chance to ask for additional resources to deliver statistically sound and satisfying results.

  8. When reality is out of focus: Can people tell whether their beliefs and judgments are correct or wrong?

    PubMed

    Koriat, Asher

    2018-05-01

    Can we tell whether our beliefs and judgments are correct or wrong? Results across many domains indicate that people are skilled at discriminating between correct and wrong answers, endorsing the former with greater confidence than the latter. However, it has not been realized that because of people's adaptation to reality, representative samples of items tend to favor the correct answer, yielding object-level accuracy (OLA) that is considerably better than chance. Across 16 experiments that used 2-alternative forced-choice items from several domains, the confidence/accuracy (C/A) relationship was positive for items with OLA >50%, but consistently negative across items with OLA <50%. A systematic sampling of items that covered the full range of OLA (0-100%) yielded a U-function relating confidence to OLA. The results imply that the positive C/A relationship that has been reported in many studies is an artifact of OLA being better than chance rather than representing a general ability to discriminate between correct and wrong responses. However, the results also support the ecological approach, suggesting that confidence is based on a frugal, "bounded" heuristic that has been specifically tailored to the ecological structure of the natural environment. This heuristic is used despite the fact that for items with OLA <50%, it yields confidence judgments that are counterdiagnostic of accuracy. Our ability to tell between correct and wrong judgments is confined to the probability structure of the world we live in. The results were discussed in terms of the contrast between systematic design and representative design.

  9. Understanding surgeon decision making in the use of radiotherapy as neoadjuvant treatment in rectal cancer.

    PubMed

    Ansari, Nabila; Young, Christopher J; Schlub, Timothy E; Dhillon, Haryana M; Solomon, Michael J

    2015-12-01

    Strong evidence supports the use of neoadjuvant radiotherapy in rectal cancer to improve local control. This randomised controlled trial aimed to determine the effect of clinical and non-clinical factors on decision making by colorectal surgeons in patients with rectal cancer. Two surveys, comprising vignettes with alternating short (4) and long (12) sets of cues previously identified as important in rectal cancer, were randomly assigned to all members of the CSSANZ. Respondents chose from three possible treatments: long course chemoradiotherapy (LC), short course radiotherapy (SC) or surgery alone, to investigate the effects on surgeons' decisions and their confidence in those decisions. Choice data were analysed using multinomial logistic regression models. 106 of 165 (64%) surgeons responded. LC was the preferred treatment choice in 73% of vignettes. Surgeons were more likely to recommend LC over SC (OR 1.79) or surgery alone (OR 1.99) when presented with the shorter, four-cue scenarios. There was no significant difference in confidence in decisions made when surgeons were presented with long cue vignettes (P = 0.57). Significant effects on the choice between LC, SC and surgery alone were tumour stage (P < 0.001), nodal status (P < 0.001), tumour position in the rectum (P < 0.001) and the circumferential location of the tumour (P < 0.001). A T4 tumour was the factor most likely to be associated with a recommendation against surgery alone (OR 335.96) or SC (OR 61.73). This study shows that clinical factors exert the greatest influence on surgeon decision making, which follows a "fast and frugal" heuristic decision making model.

  10. Factors influencing nurses' judgements about self-neglect cases.

    PubMed

    Lauder, W; Ludwick, R; Zeller, R; Winchell, J

    2006-06-01

    From the perspective of the practising nurse, self-neglect may best be understood in terms of a set of complex and often poorly defined clinical problems in which two key clinical issues are "how do I judge whether this person has the capacity to make decisions about their lifestyle?" and "do we need to treat this person using mental health legislation?" These are taxing questions, as judging whether a patient has the capacity to make decisions about their lifestyle choices is difficult for even the most experienced clinicians. Such determinations require nurses to form a judgement as to the mental capacity of the patient. We do not know which patient characteristics, and in what combination, nurses use when making these judgements. This factorial survey aimed to identify which patient characteristics influenced Registered Nurses' judgements on decision-making capacity and decisions on the use of interventions which require statutory interventions in cases of self-neglect. Judgements on decision-making capacity were overwhelmingly predicted by information on the patients' mental health status. Nurses place patients in one of three broad categories: no mental illness, minor mental illness and severe mental illness. This categorization appears to operate as a fast and frugal heuristic, indicating that nurses may use mental status as a cognitive screen to work from in judging self-neglect. Although there is a correlation between the severity of mental illness and the capacity for making decisions, they are not the same. This study shows the continued work that needs to be done in educating nurses not only about self-neglect but also about the role a patient's mental status may have in the assessment of problems.

  12. IT doesn't matter.

    PubMed

    Carr, Nicholas G

    2003-05-01

    As information technology has grown in power and ubiquity, companies have come to view it as ever more critical to their success; their heavy spending on hardware and software clearly reflects that assumption. Chief executives routinely talk about information technology's strategic value, about how they can use IT to gain a competitive edge. But scarcity, not ubiquity, makes a business resource truly strategic--and allows companies to use it for a sustained competitive advantage. You only gain an edge over rivals by doing something that they can't. IT is the latest in a series of broadly adopted technologies--think of the railroad or the electric generator--that have reshaped industry over the past two centuries. For a brief time, as they were being built into the infrastructure of commerce, these technologies created powerful opportunities for forward-looking companies. But as their availability increased and their costs decreased, they became commodity inputs. From a strategic standpoint, they became invisible; they no longer mattered. That's exactly what's happening to IT, and the implications are profound. In this article, HBR's editor-at-large Nicholas Carr suggests that IT management should, frankly, become boring. It should focus on reducing risks, not increasing opportunities. For example, companies need to pay more attention to ensuring network and data security. Even more important, they need to manage IT costs more aggressively. IT may not help you gain a strategic advantage, but it could easily put you at a cost disadvantage. If, like many executives, you've begun to take a more defensive posture toward IT, spending more frugally and thinking more pragmatically, you're already on the right course. The challenge will be to maintain that discipline when the business cycle strengthens.

  13. Informal and formal support among community-dwelling Japanese American elders living alone in Chicagoland: an in-depth qualitative study.

    PubMed

    Lau, Denys T; Machizawa, Sayaka; Doi, Mary

    2012-06-01

    A key public health approach to promote independent living and avoid nursing home placement is ensuring that elders can obtain adequate informal support from family and friends, as well as formal support from community services. This study aims to describe the use of informal and formal support among community-dwelling Nikkei elders living alone, and explore perceived barriers hindering their use of such support. We conducted English and Japanese semi-structured, open-ended interviews in Chicagoland with a convenience sample of 34 Nikkei elders age 60+ who were functionally independent and living alone; 9 family/friends; and 10 local service providers. According to participants, for informal support, Nikkei elders relied mainly on: family for homemaking and health management; partners for emotional and emergency support; friends for emotional and transportation support; and neighbors for emergency assistance. Perceived barriers to informal support included elders' attitudinal impediments (feeling burdensome, reciprocating support, self-reliance), family-related interpersonal circumstances (poor communication, distance, intergenerational differences); and friendship/neighbor-related interpersonal situations (difficulty making friends, relocation, health decline/death). For formal support, Nikkei elders primarily used adult day care/cultural programs for socializing and learning and in-home care for personal/homemaking assistance and companionship. Barriers to formal support included attitudinal impediments (stoicism, privacy, frugality); perception of care (incompatibility with services, poor opinions of in-home care quality); and accessibility (geographical distance, lack of transportation). In summary, this study provides important preliminary insights for future community strategies that will target resources and training for support networks of Nikkei elders living alone to maximize their likelihood to age in place independently.

  14. Fertility and the family.

    PubMed

    Kono, S

    1991-09-01

    The developed countries of Asia include Japan and the newly industrialized economies of Hong Kong, Taiwan, Republic of Korea, and Singapore. These 5 societies share various similarities that have helped in their socioeconomic development. The 1st is significant and quite rapid socioeconomic development (per capita gross national product, urbanization, demographic transition, educational attainment, and industrialization). The 2nd similarity is their extremely high levels of education. Indeed those 5 societies boast the most highly educated populations in Asia. 3rd, they have achieved below-replacement fertility. In fact, they never really had very high birth rates anyhow. 4th, extended families are the norm despite rapid transformation of their societies. Confucianism, the 5th similarity, advocates multigenerational families. Children understand that it is their duty to always take care of their parents. Parents know that they have a right to ask for their children's help at any time. In 1981, a demographer hypothesized that 1 of the leading signs of fertility decline is the nucleation of the family. Yet in these 5 societies nucleation did not appear before fertility decline. In fact, in Japan, fertility is higher in those homes where the grandmother watches her grandchildren while the daughter or daughter-in-law is at work. Some have compared the role of Confucianism in socioeconomic development with that of Protestantism in the industrialization of Europe. Both stress hard work, simplicity, frugality, discipline, and regularity in daily life. Yet Confucianism sees women as bearers of children and servants to men. Women in these societies protest this mindset by postponing marriage and having few children. If these societies wish to increase fertility, they need to restructure their male-oriented societies.

  15. A triangular climate-based decision model to forecast crop anomalies in Kenya

    NASA Astrophysics Data System (ADS)

    Guimarães Nobre, G.; Davenport, F.; Veldkamp, T.; Jongman, B.; Funk, C. C.; Husak, G. J.; Ward, P.; Aerts, J.

    2017-12-01

    By the end of 2017, the world is expected to experience unprecedented demands for food assistance where, across 45 countries, some 81 million people will face a food security crisis. Prolonged droughts in Eastern Africa are playing a major role in these crises. To mitigate famine risk and save lives, government bodies and international donor organisations are increasingly building up efforts to resolve conflicts and secure humanitarian relief. Disaster-relief and financing organizations traditionally focus on emergency response, providing aid after an extreme drought event, instead of taking actions in advance based on early warning. One of the reasons for this approach is that the seasonal risk information provided by early warning systems is often considered highly uncertain. Overcoming the reluctance to act based on early warnings greatly relies on understanding the risk of acting in vain, and assessing the cost-effectiveness of early actions. This research develops a triangular climate-based decision model for multiple seasonal time-scales to forecast strong anomalies in crop yield shortages in Kenya using Causal Discovery Algorithms and Fast and Frugal Decision Trees. This triangular decision model (1) estimates the causality and strength of the relationship between crop yields and hydroclimatological predictors (extracted from the Famine Early Warning Systems Network's data archive) during the crop growing season; (2) provides probabilistic forecasts of crop yield shortages at multiple time scales before the harvesting season; and (3) evaluates the cost-effectiveness of different financial mechanisms to respond to early warning indicators of crop yield shortages obtained from the model. Furthermore, we reflect on how such a model complements and advances the current state-of-the-art FEWS NET system, and examine its potential application to improve the management of agricultural risks in Kenya.

  16. The self-consistency model of subjective confidence.

    PubMed

    Koriat, Asher

    2012-01-01

    How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen answer is supported across representations. Confidence is modeled by analogy to the calculation of statistical level of confidence (SLC) in testing hypotheses about a population and represents the participant's assessment of the likelihood that a new sample will yield the same choice. Assuming that participants draw representations from a commonly shared item-specific population of representations, predictions were derived regarding the function relating confidence to inter-participant consensus and intra-participant consistency for the more preferred (majority) and the less preferred (minority) choices. The predicted pattern was confirmed for several different tasks. The confidence-accuracy relationship was shown to be a by-product of the consistency-correctness relationship: It is positive because the answers that are consistently chosen are generally correct, but negative when the wrong answers tend to be favored. The overconfidence bias stems from the reliability-validity discrepancy: Confidence monitors reliability (or self-consistency), but its accuracy is evaluated in calibration studies against correctness. Simulation and empirical results suggest that response speed is a frugal cue for self-consistency, and its validity depends on the validity of self-consistency in predicting performance. Another mnemonic cue, accessibility (the overall amount of information that comes to mind), makes an added, independent contribution. Self-consistency and accessibility may correspond to the 2 parameters that affect SLC: sample variance and sample size.

  17. The Use of Computer Simulation Gaming in Teaching Broadcast Economics.

    ERIC Educational Resources Information Center

    Mancuso, Louis C.

    The purpose of this study was to develop a broadcast economics computer simulation and to ascertain how a lecture-computer simulation game compared as a teaching method with more traditional lecture and case-study instructional methods. In each of three sections of a broadcast economics course, a different teaching methodology was employed: (1)…

  18. Hodge numbers for all CICY quotients

    NASA Astrophysics Data System (ADS)

    Constantin, Andrei; Gray, James; Lukas, Andre

    2017-01-01

    We present a general method for computing Hodge numbers for Calabi-Yau manifolds realised as discrete quotients of complete intersections in products of projective spaces. The method relies on the computation of equivariant cohomologies and is illustrated for several explicit examples. In this way, we compute the Hodge numbers for all discrete quotients obtained in Braun's classification [1].

  19. Projection multiplex recording of computer-synthesised one-dimensional Fourier holograms for holographic memory systems: mathematical and experimental modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betin, A Yu; Bobrinev, V I; Verenikina, N M

    A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms.

  20. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. 
    Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple-linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
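The two-rating structure described above can be sketched in a few lines (an illustrative toy example with made-up calibration numbers and an idealised rectangular section, not USGS software): a simple-linear index rating converts measured index velocity to mean channel velocity V, a stage-area rating supplies the cross-sectional area A, and discharge is their product.

```python
import numpy as np

# Hypothetical calibration data: concurrent index velocity (m/s) and
# mean channel velocity (m/s) from periodic discharge measurements.
v_index_cal = np.array([0.20, 0.45, 0.70, 0.95, 1.20])
v_mean_cal  = np.array([0.26, 0.52, 0.79, 1.05, 1.31])

# Simple-linear index rating V = b0 + b1 * v_index, fitted by least squares.
b1, b0 = np.polyfit(v_index_cal, v_mean_cal, 1)

def stage_area(stage_m, width_m=20.0):
    """Stage-area rating for an idealised rectangular standard section (m^2)."""
    return width_m * stage_m

def discharge(v_index, stage_m):
    v_mean = b0 + b1 * v_index            # index velocity rating -> V
    return v_mean * stage_area(stage_m)   # Q = V * A

q = discharge(v_index=0.80, stage_m=1.5)  # m^3/s
print(round(q, 2))
```

In practice the regression may need additional terms (for example, stage-dependent coefficients) when the relation between index and mean velocity is not constant across flow conditions, which is the "more complex ratings" case noted in the report.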
