Sample records for "modeling technique called"

  1. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  2. A proposed technique for vehicle tracking, direction, and speed determination

    NASA Astrophysics Data System (ADS)

    Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.

    2004-12-01

    A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem with significant hurdles still to be addressed. These are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea and is based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets, and are derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as described in this paper.

  3. Calculating phase equilibrium properties of plasma pseudopotential model using hybrid Gibbs statistical ensemble Monte-Carlo technique

    NASA Astrophysics Data System (ADS)

    Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.

    2015-11-01

    Earlier, a two-component pseudopotential plasma model, which we call a “shelf Coulomb” model, was developed. A Monte-Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte-Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the melting curve position and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.
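
    For context, the sketch below shows the canonical NVT Metropolis Monte-Carlo loop with periodic boundaries that underlies both methods compared in this record. It is a minimal sketch only: the "shelf Coulomb" pseudopotential is not given here, so a capped Lennard-Jones pair potential stands in, and a full Gibbs-ensemble simulation would additionally make volume-exchange and particle-swap moves between two boxes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pair_energy(r):
        # Stand-in pair potential; the record's "shelf Coulomb" pseudopotential
        # is not specified, so capped Lennard-Jones is used for illustration.
        r = max(r, 0.5)   # short-range cap keeps the toy example numerically tame
        return 4.0 * (r**-12 - r**-6)

    def total_energy(pos, box):
        E = 0.0
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                d = pos[i] - pos[j]
                d -= box * np.round(d / box)          # minimum-image convention
                E += pair_energy(np.linalg.norm(d))
        return E

    def nvt_metropolis(n=32, box=6.0, T=1.0, steps=2000, dr=0.2):
        pos = rng.uniform(0, box, (n, 3))
        E = total_energy(pos, box)
        for _ in range(steps):
            i = rng.integers(n)
            old = pos[i].copy()
            pos[i] = (pos[i] + rng.uniform(-dr, dr, 3)) % box
            dE = total_energy(pos, box) - E
            if dE < 0 or rng.random() < np.exp(-dE / T):
                E += dE                               # accept the trial move
            else:
                pos[i] = old                          # reject and restore
        return E

    print(nvt_metropolis())
    ```

    A production code would update only the moved particle's interactions rather than recomputing the total energy; the full recomputation above just keeps the sketch short.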

  4. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  5. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  6. Adaptation of warrant price with Black Scholes model and historical volatility

    NASA Astrophysics Data System (ADS)

    Aziz, Khairu Azlan Abd; Idris, Mohd Fazril Izhar Mohd; Saian, Rizauddin; Daud, Wan Suhana Wan

    2015-05-01

    This project discusses pricing warrants in Malaysia. The Black-Scholes model with a non-dividend approach and a linear interpolation technique was applied to price the call warrants. Three call warrants listed on Bursa Malaysia were selected randomly from UiTM's datastream. The findings show that the volatility of each call warrant differs from the others. We used the historical volatility, which describes the price movement by which an underlying share is expected to fluctuate within a period. The Black-Scholes price obtained from the model is compared with the actual market price. Mispricing the call warrants contributes to under- or over-valuation. Other variables such as the interest rate, time to maturity, exercise price and underlying stock price are also involved in pricing call warrants, as well as in measuring the moneyness of call warrants.
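
    The two calculations named in this record are standard and compact: annualized historical volatility from log returns, and the Black-Scholes price of a European call with no dividends. A minimal sketch (the price series and parameters are invented; real inputs would come from the Bursa Malaysia data):

    ```python
    import numpy as np
    from math import log, sqrt, exp
    from statistics import NormalDist

    def historical_volatility(prices, periods_per_year=252):
        """Annualized standard deviation of log returns."""
        r = np.diff(np.log(prices))
        return r.std(ddof=1) * sqrt(periods_per_year)

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call, no dividends."""
        N = NormalDist().cdf
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    prices = np.array([1.00, 1.02, 0.99, 1.05, 1.04, 1.08])   # toy series
    sigma = historical_volatility(prices)
    print(sigma, bs_call(S=1.08, K=1.00, T=0.5, r=0.03, sigma=sigma))
    ```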

  7. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    DTIC Science & Technology

    2008-08-01

    [Figure residue omitted: campaign-level model inputs and outputs, aggregation, metamodeling, complexity (spatial, temporal, etc.).] …reduction, are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool…

  8. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1977-01-01

    Models, measures and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability and worth. Specifically, a detailed model hierarchy was developed at the mission, functional task, and computational task levels. An appropriate class of stochastic models, which served as the bottom-level models in the hierarchical scheme, was investigated. A unified measure of effectiveness called 'performability' was defined and formulated.

  9. (abstract) Generic Modeling of a Life Support System for Process Technology Comparisons

    NASA Technical Reports Server (NTRS)

    Ferrall, J. F.; Seshan, P. K.; Rohatgi, N. K.; Ganapathi, G. B.

    1993-01-01

    This paper describes a simulation model called the Life Support Systems Analysis Simulation Tool (LiSSA-ST), the spreadsheet program called the Life Support Systems Analysis Trade Tool (LiSSA-TT), and the Generic Modular Flow Schematic (GMFS) modeling technique. Results of using the LiSSA-ST and the LiSSA-TT will be presented for comparing life support systems and process technology options for a Lunar Base and a Mars Exploration Mission.

  10. Variable horizon in a peridynamic medium

    DOE PAGES

    Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo

    2015-12-10

    Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.

  11. A pilot modeling technique for handling-qualities research

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  12. The Monitoring, Detection, Isolation and Assessment of Information Warfare Attacks Through Multi-Level, Multi-Scale System Modeling and Model Based Technology

    DTIC Science & Technology

    2004-01-01

    …login identity to the one under which the system call is executed, the parameters of the system call execution (file names including full paths)… [Table residue omitted: intrusion detection systems such as COAST-EIMDT and EMERALD, compared by deployment (distributed on target hosts, on security servers) and detection approach (signature recognition, anomaly detection).] …uses a centralized architecture, and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a…

  13. Socrates Meets the 21st Century

    ERIC Educational Resources Information Center

    Lege, Jerry

    2005-01-01

    An inquiry-based approach called the "modelling discussion" is introduced for structuring beginning modelling activity, teaching new mathematics by examining its applications in contextual situations, and as a general classroom management technique when students are engaged in mathematical modelling. An example which illustrates the style and…

  14. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, George S.; Brown, William Michael

    2007-09-01

    Techniques for high throughput determinations of interactomes, together with high-resolution protein colocalization maps within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  15. Mitigating Handoff Call Dropping in Wireless Cellular Networks: A Call Admission Control Technique

    NASA Astrophysics Data System (ADS)

    Ekpenyong, Moses Effiong; Udoh, Victoria Idia; Bassey, Udoma James

    2016-06-01

    Handoff management has been an important but challenging issue in the field of wireless communication. It seeks to maintain seamless connectivity for mobile users changing their points of attachment from one base station to another. This paper derives a call admission control model and establishes an optimal step-size coefficient (k) that regulates the admission probability of handoff calls. An operational CDMA network carrier was investigated through the analysis of empirical data collected over a period of 1 month, to verify the performance of the network. Our findings revealed that approximately 23% of calls in the existing system were lost, while 40% of the calls (on average) were successfully admitted. A simulation of the proposed model was then carried out under ideal network conditions to study the relationship between the various network parameters and validate our claim. Simulation results showed that increasing the step-size coefficient degrades the network performance. Even at the optimum step-size (k), the network could still be compromised in the presence of severe network crises, but our model was able to recover from these problems and continue functioning normally.
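
    The record does not reproduce the admission-control model itself, so the sketch below is only a hypothetical illustration of the mechanism it describes: handoff admission probability throttled by a step-size coefficient k, with a guard-channel reserve favoring handoffs over new calls. The admission rule and all numbers are invented.

    ```python
    import random

    def admit(call_type, busy, capacity, k=0.1, guard=2):
        """Toy admission control: new calls cannot use the guard channels;
        handoff admission probability is throttled by the step-size
        coefficient k as load approaches capacity (illustrative rule only)."""
        if call_type == "new":
            return busy < capacity - guard
        p = max(0.0, 1.0 - k * max(0, busy - (capacity - 10)))
        return busy < capacity and random.random() < p

    random.seed(1)
    busy = dropped = blocked = 0
    for _ in range(10_000):
        kind = "handoff" if random.random() < 0.3 else "new"
        if admit(kind, busy, capacity=30):
            busy += 1
        elif kind == "handoff":
            dropped += 1
        else:
            blocked += 1
        if busy and random.random() < 0.2:    # random call completions
            busy -= 1
    print("dropped:", dropped, "blocked:", blocked)
    ```

    Raising k lowers the handoff admission probability, mirroring the reported degradation as the step-size coefficient grows.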

  16. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    NASA Astrophysics Data System (ADS)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers, one of which is based on received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on improving the performance of mobile communications. The parameters observed for characterizing the performance of the specified mobile system are the Drop Call rate, the Radio Link Degradation Rate and the Average Size of the Active Set (AS). The simulated results show that increasing the altitude of the Base Station (BS) and Mobile Station (MS) antennas contributes to an improved signal reception level, which improves radio link quality, increases the average size of the Active Set and reduces the average Drop Call rate. It was also found that Hata's propagation model contributed significantly more to improvements in system performance parameters than Okumura's and Lee's propagation models.
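
    Of the three propagation models compared, the Okumura-Hata urban formula is compact enough to state directly. The sketch below also shows the reported antenna-height effect: raising the base-station antenna lowers the predicted median loss through both height-dependent terms.

    ```python
    from math import log10

    def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
        """Okumura-Hata median path loss (dB), urban, small/medium city.
        Valid roughly for 150-1500 MHz, base height 30-200 m,
        mobile height 1-10 m, distance 1-20 km."""
        a_hm = ((1.1 * log10(f_mhz) - 0.7) * h_mobile_m
                - (1.56 * log10(f_mhz) - 0.8))      # mobile antenna correction
        return (69.55 + 26.16 * log10(f_mhz) - 13.82 * log10(h_base_m)
                - a_hm + (44.9 - 6.55 * log10(h_base_m)) * log10(d_km))

    # Higher base-station antennas give lower predicted loss at 900 MHz, 5 km.
    for h_base in (30, 50, 100):
        print(h_base, "m:", round(hata_urban_path_loss(900, h_base, 1.5, 5.0), 1), "dB")
    ```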

  17. Anticipatory Neurofuzzy Control

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1994-01-01

    Technique of feedback control, called "anticipatory neurofuzzy control," developed for use in controlling flexible structures and other dynamic systems for which mathematical models of dynamics poorly known or unknown. Superior ability to act during operation to compensate for, and adapt to, errors in mathematical model of dynamics, changes in dynamics, and noise. Also offers advantage of reduced computing time. Hybrid of two older fuzzy-logic control techniques: standard fuzzy control and predictive fuzzy control.

  18. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  19. Generic Modeling of a Life Support System for Process Technology Comparison

    NASA Technical Reports Server (NTRS)

    Ferrall, J. F.; Seshan, P. K.; Rohatgi, N. K.; Ganapathi, G. B.

    1993-01-01

    This paper describes a simulation model called the Life Support Systems Analysis Simulation Tool (LiSSA-ST), the spreadsheet program called the Life Support Systems Analysis Trade Tool (LiSSA-TT), and the Generic Modular Flow Schematic (GMFS) modeling technique. Results of using the LiSSA-ST and the LiSSA-TT will be presented for comparing life support system and process technology options for a Lunar Base with a crew size of 4 and mission lengths of 90 and 600 days. System configurations to minimize the life support system weight and power are explored.

  20. Continuous state-space representation of a bucket-type rainfall-runoff model: a case study with the GR4 model using state-space GR4 (version 1.0)

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2018-04-01

    In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
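
    A Nash cascade is simply a chain of identical linear reservoirs, each obeying dS_i/dt = q_{i-1} - S_i/k. The sketch below routes a pulse through such a chain with an implicit Euler step; the paper's own integrator and the full GR4J store equations are more involved, so this only illustrates the cascade that replaces the unit hydrographs.

    ```python
    import numpy as np

    def nash_cascade(inflow, n=3, k=2.0, dt=1.0):
        """Route inflow through n linear reservoirs in series (Nash cascade).
        Implicit Euler per reservoir: S_new = (S_old + dt*q_in) / (1 + dt/k)."""
        S = np.zeros(n)
        out = []
        for q in inflow:
            for i in range(n):
                S[i] = (S[i] + dt * q) / (1.0 + dt / k)
                q = S[i] / k              # reservoir outflow feeds the next one
            out.append(q)
        return np.array(out)

    pulse = np.zeros(30)
    pulse[0] = 10.0
    print(nash_cascade(pulse).round(3))   # a smoothed, delayed hydrograph
    ```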

  1. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent

    PubMed Central

    De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle

    2018-01-01

    Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
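
    The low-precision ingredient reduces to an unbiased quantizer applied to each iterate. Below is a minimal sketch of SGD with stochastic rounding on a least-squares problem; the rounding grid is generic, standing in for Buckwild!'s actual number format, which this record does not specify.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def quantize_stochastic(x, scale=1 / 256):
        """Round x to a low-precision grid, up or down with probability
        proportional to proximity, so the quantizer is unbiased."""
        y = x / scale
        lo = np.floor(y)
        return (lo + (rng.random(x.shape) < (y - lo))) * scale

    A = rng.normal(size=(1000, 10))
    w_true = rng.normal(size=10)
    b = A @ w_true + 0.01 * rng.normal(size=1000)

    w = np.zeros(10)
    for _ in range(5000):
        i = rng.integers(len(b))
        grad = (A[i] @ w - b[i]) * A[i]        # per-sample gradient
        w = quantize_stochastic(w - 0.01 * grad)
    print(np.linalg.norm(w - w_true))          # small, up to quantization noise
    ```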

  2. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  3. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  4. Nonlinear acoustics in cicada mating calls enhance sound propagation.

    PubMed

    Hughes, Derke R; Nuttall, Albert H; Katz, Richard A; Carter, G Clifford

    2009-02-01

    An analysis of cicada mating calls, measured in field experiments, indicates that the very high levels of acoustic energy radiated by this relatively small insect are mainly attributed to the nonlinear characteristics of the signal. The cicada emits one of the loudest sounds in all of the insect population with a sound production system occupying a physical space typically less than 3 cc. The sounds made by tymbals are amplified by the hollow abdomen, functioning as a tuned resonator, but models of the signal based solely on linear techniques do not fully account for a sound radiation capability that is so disproportionate to the insect's size. The nonlinear behavior of the cicada signal is demonstrated by combining the mutual information and surrogate data techniques; the results obtained indicate decorrelation when the phase-randomized and non-phase-randomized data separate. The Volterra expansion technique is used to fit the nonlinearity in the insect's call. The second-order Volterra estimate provides further evidence that the cicada mating calls are dominated by nonlinear characteristics and also suggests that the medium contributes to the cicada's efficient sound propagation. Application of the same principles has the potential to improve radiated sound levels for sonar applications.
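
    The surrogate-data step mentioned here rests on phase randomization: keep the power spectrum of the recorded call, scramble the Fourier phases, and compare nonlinear statistics of the original against the surrogates; divergence indicates nonlinearity. A minimal sketch of surrogate generation (the test signal is invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def phase_randomized_surrogate(x):
        """Surrogate series with the same power spectrum as x but random
        phases; linear (second-order) statistics are preserved."""
        X = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, len(X))
        phases[0] = 0.0                      # keep the DC component real
        if len(x) % 2 == 0:
            phases[-1] = 0.0                 # keep the Nyquist bin real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

    t = np.linspace(0, 1, 1024)
    x = np.sin(2 * np.pi * (50 + 30 * t) * t) ** 3   # toy nonlinear "call"
    s = phase_randomized_surrogate(x)
    print(x.std(), s.std())                  # second-order statistics match
    ```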

  5. Reservoir Modeling by Data Integration via Intermediate Spaces and Artificial Intelligence Tools in MPS Simulation Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmadi, Rouhollah, E-mail: rouhollahahmadi@yahoo.com; Khamehchi, Ehsan

    Conditioning stochastic simulations is very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike previous techniques that call for an approximate forward model (filter) for the integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space onto which all the primary and secondary data are easily mapped. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks and fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data.

  6. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The results of the evaluation show that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  7. Chroma intra prediction based on inter-channel correlation for HEVC.

    PubMed

    Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C

    2014-01-01

    In this paper, we investigate a new inter-channel coding mode called the LM mode, proposed for the next-generation video coding standard called High Efficiency Video Coding (HEVC). This mode exploits inter-channel correlation by using reconstructed luma to predict chroma linearly, with parameters derived from neighboring reconstructed luma and chroma pixels at both encoder and decoder to avoid overhead signaling. We analyze the LM mode and prove that the LM parameters for predicting original chroma and reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some problematic situations for the LM mode and propose three novel LM-like modes, called LMA, LML, and LMO, to address them. To limit the increase in complexity due to the LM-like modes, we propose some fast algorithms with the help of some new cost functions. We further identify some potentially problematic conditions in the parameter estimation (including the regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gains of the two techniques appear to be essentially additive when combined.
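
    Stripped of the integer arithmetic that the standard specifies, the LM mode is a per-block linear regression: fit chroma ≈ alpha·luma + beta on the neighboring reconstructed samples, then apply the fit to the block's reconstructed luma. A floating-point sketch of the idea, with invented sample values:

    ```python
    import numpy as np

    def lm_params(luma_nb, chroma_nb):
        """Least-squares alpha, beta from neighbouring reconstructed pixels
        (float version of the idea; HEVC derives them in integer arithmetic)."""
        lm, cm = luma_nb.mean(), chroma_nb.mean()
        var = ((luma_nb - lm) ** 2).mean()
        alpha = 0.0 if var == 0 else ((luma_nb - lm) * (chroma_nb - cm)).mean() / var
        return alpha, cm - alpha * lm

    # Neighbouring reconstructed samples (top row + left column), toy values.
    luma_nb = np.array([100, 104, 110, 112, 98, 101, 107, 111], float)
    chroma_nb = np.array([60, 62, 66, 67, 59, 61, 64, 66], float)
    alpha, beta = lm_params(luma_nb, chroma_nb)

    luma_block = np.array([[102, 108], [105, 113]], float)
    print(alpha * luma_block + beta)     # the intra chroma prediction
    ```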

  8. Designing a more efficient, effective and safe Medical Emergency Team (MET) service using data analysis

    PubMed Central

    Bilgrami, Irma; Bain, Christopher; Webb, Geoffrey I.; Orosz, Judit; Pilcher, David

    2017-01-01

    Introduction Hospitals have seen a rise in Medical Emergency Team (MET) reviews. We hypothesised that the commonest MET calls result in similar treatments. Our aim was to design a pre-emptive management algorithm that allowed direct institution of treatment to patients without having to wait for attendance of the MET team, and to model its potential impact on MET call incidence and patient outcomes. Methods Data was extracted for all MET calls from the hospital database. Association rule data mining techniques were used to identify the most common combinations of MET call causes, outcomes and therapies. Results There were 13,656 MET calls during the 34-month study period in 7936 patients. The most common MET call was for hypotension [31%, (2459/7936)]. These MET calls were strongly associated with the immediate administration of intravenous fluid (70% [1714/2459] v 13% [739/5477] p<0.001), unless the patient was located on a respiratory ward (adjusted OR 0.41 [95%CI 0.25–0.67] p<0.001), had a cardiac cause for admission (adjusted OR 0.61 [95%CI 0.50–0.75] p<0.001) or was under the care of the heart failure team (adjusted OR 0.29 [95%CI 0.19–0.42] p<0.001). Modelling the effect of a pre-emptive management algorithm for immediate fluid administration without MET activation, on data from a test period of 24 months following the study period, suggested it would lead to a 68.7% (2541/3697) reduction in MET calls for hypotension and a 19.6% (2541/12938) reduction in total METs without adverse effects on patients. Conclusion Routinely collected data and analytic techniques can be used to develop a pre-emptive management algorithm to administer intravenous fluid therapy to a specific group of hypotensive patients without the need to initiate a MET call. This could lead both to earlier treatment for the patient and to fewer total MET calls. PMID:29281665
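
    The association-rule measures behind such mining are simple to state: the support of an itemset is the fraction of records containing it, and the confidence of a rule is the support of the combined itemset divided by the support of its left-hand side. A toy sketch with invented field names, not the hospital database schema:

    ```python
    calls = [                      # toy MET-call records as sets of items
        {"hypotension", "iv_fluid"},
        {"hypotension", "iv_fluid"},
        {"hypotension"},
        {"tachycardia", "ecg"},
        {"hypotension", "iv_fluid", "ecg"},
    ]

    def support(itemset, records):
        return sum(itemset <= r for r in records) / len(records)

    def confidence(lhs, rhs, records):
        return support(lhs | rhs, records) / support(lhs, records)

    # The kind of rule the study mined: hypotension -> intravenous fluid.
    print(support({"hypotension", "iv_fluid"}, calls))        # 0.6
    print(confidence({"hypotension"}, {"iv_fluid"}, calls))   # 0.75
    ```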

  9. Calling depths of baleen whales from single sensor data: development of an autocorrelation method using multipath localization.

    PubMed

    Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M

    2013-09-01

    Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in application to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct reflected time difference of arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections to illustrate that an autocorrelation may be used to extract reflection information from longer duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize the difference in behavior of the autocorrelation when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
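
    The acoustic core of the method is that a surface or bottom reflection appears as a secondary peak in the autocorrelation at the direct-reflected time difference. A minimal sketch with a synthetic frequency-swept "call" and one echo; converting the recovered delay into depth and range requires the multipath geometry developed in the paper, which is omitted here:

    ```python
    import numpy as np

    def reflection_delay(x, fs, min_lag_s=5e-3):
        """Strongest autocorrelation peak beyond a minimum lag, in seconds."""
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags >= 0
        m = int(min_lag_s * fs)                             # skip the main lobe
        return (m + np.argmax(ac[m:])) / fs

    fs = 8000.0
    t = np.arange(0, 1.0, 1 / fs)
    sweep = np.sin(2 * np.pi * (100 * t + 950 * t**2))  # 100 -> 2000 Hz sweep
    echo = np.zeros_like(sweep)
    d = int(0.050 * fs)                                 # 50 ms reflection
    echo[d:] = 0.5 * sweep[:-d]
    print(reflection_delay(sweep + echo, fs))           # ~= 0.050
    ```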

  10. Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.

    ERIC Educational Resources Information Center

    Simpson, William A.

    In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…

  11. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provides a way of handling structural identifiability in mixed-effects models previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. An Application of the A* Search to Trajectory Optimization

    DTIC Science & Technology

    1990-05-11

    …linearized model of orbital motion called the Clohessy-Wiltshire equations and a node search technique called A*. The planner discussed in this thesis starts… …states while transfer time is left unspecified. [Page-header residue omitted: Chapter 2, Background, "Hill's (Clohessy-Wiltshire) Equations".] The Euler-Hill equations describe… …Clohessy-Wiltshire equations. The coordinate system used in this thesis is commonly referred to as Local Vertical, Local Horizontal, or the LVLH reference frame…

  13. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before being observed, so that so-called randomized item responses are observed instead. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  14. Automatic welding detection by an intelligent tool pipe inspection

    NASA Astrophysics Data System (ADS)

    Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.

    2015-07-01

    This work provides a model, based on machine learning techniques, for weld recognition from signals obtained by an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.

  15. Symbolically Modeling Concurrent MCAPI Executions

    NASA Technical Reports Server (NTRS)

    Fischer, Topher; Mercer, Eric; Rungta, Neha

    2011-01-01

    Improper use of Inter-Process Communication (IPC) within concurrent systems often creates data races which can lead to bugs that are challenging to discover. Techniques that use Satisfiability Modulo Theories (SMT) problems to symbolically model possible executions of concurrent software have recently been proposed for use in the formal verification of software. In this work we describe a new technique for modeling executions of concurrent software that use a message passing API called MCAPI. Our technique uses an execution trace to create an SMT problem that symbolically models all possible concurrent executions and follows the same sequence of conditional branch outcomes as the provided execution trace. We check if there exists a satisfying assignment to the SMT problem with respect to specific safety properties. If such an assignment exists, it provides the conditions that lead to the violation of the property. We show how our method models behaviors of MCAPI applications that are ignored in previously published techniques.

  16. Verus: A Tool for Quantitative Analysis of Finite-State Real-Time Systems.

    DTIC Science & Technology

    1996-08-12

    Symbolic model checking is a technique for verifying finite-state concurrent systems that has been extended to handle real-time systems. Models with up to 10^30 states can often be verified in minutes. In this paper, we present a new tool to analyze real-time systems, based on this technique. We have designed a language, called Verus, for the description of real-time systems. Such a description is compiled into a state-transition graph and…

  17. Test Input Generation for Red-Black Trees using Abstraction

    NASA Technical Reports Server (NTRS)

    Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek

    2005-01-01

    We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
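
    The state-matching idea fits in a few lines: enumerate method-call sequences breadth-first and prune any sequence that reaches an already-visited state. The sketch below substitutes a plain sorted-set abstraction for the red-black tree and plain Python for Java PathFinder, so it shows only the pruning logic:

    ```python
    OPS = [("insert", v) for v in (1, 2, 3)] + [("delete", v) for v in (1, 2, 3)]

    def apply_op(state, op):
        name, v = op
        s = set(state)
        s.add(v) if name == "insert" else s.discard(v)
        return frozenset(s)

    def generate(max_len=3):
        """Breadth-first test-sequence generation with state matching."""
        seen = {frozenset()}
        tests, frontier = [], [((), frozenset())]
        for _ in range(max_len):
            nxt = []
            for seq, state in frontier:
                for op in OPS:
                    s2 = apply_op(state, op)
                    if s2 not in seen:        # prune already-matched states
                        seen.add(s2)
                        tests.append(seq + (op,))
                        nxt.append((seq + (op,), s2))
            frontier = nxt
        return tests

    print(len(generate()), "non-redundant test sequences")   # 7
    ```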

  18. Order reduction for a model of marine bacteriophage evolution

    NASA Astrophysics Data System (ADS)

    Pagliarini, Silvia; Korobeinikov, Andrei

    2017-02-01

    A typical mechanistic model of viral evolution necessarily includes several time scales which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult. Reducing the order of a model is highly desirable when handling such a model. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.

  19. Detecting reactive islands using Lagrangian descriptors and the relevance to transition path sampling.

    PubMed

    Patra, Sarbani; Keshavamurthy, Srihari

    2018-02-14

    It has been known for some time now that isomerization reactions, classically, are mediated by phase space structures called reactive islands (RIs). RIs provide one possible route to correcting for the nonstatistical effects in the reaction dynamics. In this work, we map out the reactive islands for the two-dimensional Müller-Brown model potential and show that the reactive islands are intimately linked to the issue of rare event sampling. In particular, we establish the sensitivity of the so-called committor probabilities, useful quantities in the transition path sampling technique, to the hierarchical RI structures. Mapping out the RI structure for high-dimensional systems, however, is a challenging task. Here, we show that the technique of Lagrangian descriptors is able to effectively identify the RI hierarchy in the model system. Based on our results, we suggest that Lagrangian descriptors can be useful for detecting RIs in high-dimensional systems.
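
    One common form of Lagrangian descriptor is the phase-space arc length accumulated along a trajectory over a fixed time window; abrupt changes of this quantity across nearby initial conditions trace out the invariant-manifold (and hence RI) boundaries. Below is a forward-time sketch on the Müller-Brown surface using its standard published parameters; the paper's exact LD definition and integration window may differ.

    ```python
    import numpy as np

    # Standard Müller-Brown potential parameters.
    A = np.array([-200.0, -100.0, -170.0, 15.0])
    a = np.array([-1.0, -1.0, -6.5, 0.7])
    b = np.array([0.0, 0.0, 11.0, 0.6])
    c = np.array([-10.0, -10.0, -6.5, 0.7])
    X0 = np.array([1.0, 0.0, -0.5, -1.0])
    Y0 = np.array([0.0, 0.5, 1.5, 1.0])

    def grad_V(x, y):
        dx, dy = x - X0, y - Y0
        e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
        return np.array([np.sum(e * (2 * a * dx + b * dy)),
                         np.sum(e * (b * dx + 2 * c * dy))])

    def lagrangian_descriptor(q, p, tau=0.05, dt=1e-4, mass=1.0):
        """Forward arc-length LD via symplectic Euler (backward part omitted)."""
        q, p, L = np.array(q, float), np.array(p, float), 0.0
        for _ in range(int(tau / dt)):
            p -= dt * grad_V(*q)
            v = p / mass
            q += dt * v
            L += dt * np.hypot(*v)       # accumulate ||velocity|| = arc length
        return L

    print(lagrangian_descriptor(q=(-0.55, 1.44), p=(0.0, 0.1)))
    ```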

  20. Modeling Antimicrobial Activity of Clorox(R) Using an Agar-Diffusion Test: A New Twist On an Old Experiment.

    ERIC Educational Resources Information Center

    Mitchell, James K.; Carter, William E.

    2000-01-01

    Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox®) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…

  1. Applying the Mixed Rasch Model to the Runco Ideational Behavior Scale

    ERIC Educational Resources Information Center

    Sen, Sedat

    2016-01-01

    Previous research using creativity assessments has used latent class models and identified multiple classes (a 3-class solution) associated with various domains. This study explored the latent class structure of the Runco Ideational Behavior Scale, which was designed to quantify ideational capacity. A robust state-of-the-art technique called the…

  2. Distributed intelligent scheduling of FMS

    NASA Astrophysics Data System (ADS)

    Wu, Zuobao; Cheng, Yaodong; Pan, Xiaohong

    1995-08-01

    In this paper, a distributed scheduling approach for a flexible manufacturing system (FMS) is presented. A new class of Petri nets called networked time Petri nets (NTPN) is proposed for system modeling in networking environments. The distributed intelligent scheduling is implemented by three schedulers which combine NTPN models with expert system techniques. The simulation results are shown.

  3. Second-Language Learning through Imaginative Theory

    ERIC Educational Resources Information Center

    Broom, Catherine

    2011-01-01

    This article explores how Egan's (1997) work on imagination can enrich our understanding of teaching English as a second language (ESL). Much has been written on ESL teaching techniques; however, some of this work has been expounded in a standard educational framework, which is what Egan calls an assembly-line model. This model can easily underlie…

  4. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
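
    The sketching step itself is compact: draw a random k × m matrix S with k far smaller than the number of observations m, and solve the compressed problem. A minimal least-squares illustration of the idea in Python (the actual RGA is coded in Julia and embeds this reduction inside the PCGA machinery rather than in a plain regression):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    m, n, k = 10_000, 50, 300                  # observations, parameters, sketch
    A = rng.normal(size=(m, n))
    x_true = rng.normal(size=n)
    b = A @ x_true + 0.01 * rng.normal(size=m)

    S = rng.normal(size=(k, m)) / np.sqrt(k)   # Gaussian "sketching" matrix
    x_hat = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
    print(np.linalg.norm(x_hat - x_true))      # close to the full-data solution
    ```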

  5. Time-partitioning simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.

    1987-01-01

    A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used, because all processors operate simultaneously, with each one updating the solution grid at a different time point. The technique is limited by neither the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two-processor Cray X-MP/24 computer.

  6. An empirical analysis of the corporate call decision

    NASA Astrophysics Data System (ADS)

    Carlson, Murray Dean

    1998-12-01

    In this thesis we provide insights into the behavior of financial managers of utility companies by studying their decisions to redeem callable preferred shares. In particular, we investigate whether or not an option pricing based model of the call decision, with managers who maximize shareholder value, does a better job of explaining callable preferred share prices and call decisions than do other models of the decision. In order to perform these tests, we extend an empirical technique introduced by Rust (1987) to include the use of information from preferred share prices in addition to the call decisions. The model we develop to value the option embedded in a callable preferred share differs from standard models in two ways. First, as suggested in Kraus (1983), we explicitly account for transaction costs associated with a redemption. Second, we account for state variables that are observed by the decision makers but not by the preferred shareholders. We interpret these unobservable state variables as the benefits and costs associated with a change in capital structure that can accompany a call decision. When we add this variable, our empirical model changes from one which predicts exactly when a share should be called to one which predicts the probability of a call as the function of the observable state. These two modifications of the standard model result in predictions of calls, and therefore of callable preferred share prices, that are consistent with several previously unexplained features of the data; we show that the predictive power of the model is improved in a statistical sense by adding these features to the model. The pricing and call probability functions from our model do a good job of describing call decisions and preferred share prices for several utilities. Using data from shares of the Pacific Gas and Electric Co. (PGE) we obtain reasonable estimates for the transaction costs associated with a call. Using a formal empirical test, we are able to conclude that the managers of the Pacific Gas and Electric Company clearly take into account the value of the option to delay the call when making their call decisions. Overall, the model seems to be robust to tests of its specification and does a better job of describing the data than do simpler models of the decision making process. Limitations in the data do not allow us to perform the same tests in a larger cross-section of utility companies. However, we are able to estimate transaction cost parameters for many firms and these do not seem to vary significantly from those of PGE. This evidence does not cause us to reject our hypothesis that managerial behavior is consistent with a model in which managers maximize shareholder value.

  7. Control system design for flexible structures using data models

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.

    1993-01-01

    The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research into the development and tailoring of control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically and finite-element generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system ID techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describe the plant; then, iteratively vary the free parameters of the controller so that performance measures come closer to satisfying design specifications. The frequency response data can be either experimentally or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H∞ type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.

  8. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.

  9. Application of interactive computer graphics in wind-tunnel dynamic model testing

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Hammond, C. E.

    1975-01-01

    The computer-controlled data-acquisition system recently installed for use with a transonic dynamics tunnel was described. This includes a discussion of the hardware/software features of the system. A subcritical response damping technique, called the combined randomdec/moving-block method, for use in windtunnel-model flutter testing, that has been implemented on the data-acquisition system, is described in some detail. Some results using the method are presented and the importance of using interactive graphics in applying the technique in near real time during wind-tunnel test operations is discussed.

  10. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  11. Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer

    NASA Astrophysics Data System (ADS)

    Schulte, Horst

    2016-09-01

    A new method for quantifying uncertain models for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. This approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term establishing and maintaining a motion on a so-called sliding surface. The injection signal is evaluated directly to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four degree-of-freedom model of the NREL 5MW reference turbine containing uncertainties.

  12. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space achieved by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of the surface from two sets of monocular image measurements are the topic of stereology.

  13. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
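
    As a hedged illustration of variant (b), the sketch below trains a first-order weight array model (position-specific dinucleotide conditionals) with standard pseudo counts; the toy sequences, window length, and scoring convention are invented for the example.

```python
import numpy as np

ALPHABET = "ACGT"
IDX = {c: i for i, c in enumerate(ALPHABET)}

def train_wam(seqs, pseudo=1.0):
    """First-order weight array model: position-specific conditional
    probabilities P(x_i | x_{i-1}) estimated from aligned windows,
    smoothed with standard pseudo counts."""
    L = len(seqs[0])
    counts = np.full((L - 1, 4, 4), pseudo)   # pseudo counts avoid zeros
    for s in seqs:
        for i in range(L - 1):
            counts[i, IDX[s[i]], IDX[s[i + 1]]] += 1
    return counts / counts.sum(axis=2, keepdims=True)

def log_score(wam, s):
    """Log-likelihood of a candidate window under the WAM (first base uniform)."""
    lp = np.log(0.25)
    for i in range(len(s) - 1):
        lp += np.log(wam[i, IDX[s[i]], IDX[s[i + 1]]])
    return lp

# Toy usage: donor-like training windows share "GT" at positions 3-4.
train = ["AAGGTAAG", "CAGGTGAG", "AAGGTAAT", "GAGGTAAG"]
wam = train_wam(train)
print(log_score(wam, "AAGGTAAG"), log_score(wam, "AAAATAAG"))
```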

  14. Rabbit tissue model (RTM) harvesting technique.

    PubMed

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  15. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
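
    A hedged sketch of the modelling step: the moving-average noise term of a full ARMAX fit is omitted for brevity, leaving an ARX model estimated by ordinary least squares. The synthetic input/output records and coefficients are invented, with the input loosely playing the role of a rock-site record and the output a soil-site record.

```python
import numpy as np

def fit_arx(y, x, na=2, nb=2):
    """Least-squares fit of an ARX model (an ARMAX model without the
    moving-average noise term):
        y[t] = sum_i a_i*y[t-i] + sum_j b_j*x[t-j] + e[t]."""
    n0 = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], x[t - nb:t][::-1]])
            for t in range(n0, len(y))]
    Phi, target = np.array(rows), y[n0:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return theta[:na], theta[na:]

# Toy usage: site amplification viewed as a filter from record x to record y.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                    # "input" ground motion
y = np.zeros_like(x)
for t in range(2, len(x)):                       # synthetic 2nd-order system
    y[t] = 1.2 * y[t-1] - 0.5 * y[t-2] + 0.8 * x[t-1] + 0.1 * x[t-2]
y += 0.01 * rng.standard_normal(len(y))          # measurement noise
a, b = fit_arx(y, x)
print("a =", a.round(3), "b =", b.round(3))      # ~[1.2, -0.5], [0.8, 0.1]
```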

  16. Mechanical impedance and acoustic mobility measurement techniques of specifying vibration environments

    NASA Technical Reports Server (NTRS)

    Kao, G. C.

    1973-01-01

    A method has been developed for predicting the interaction between components and their corresponding support structures when subjected to acoustic excitations. Force environments determined in spectral form are called force spectra. The force-spectra equation is derived from a one-dimensional structural impedance model.

  17. MCAID--A Generalized Text Driver.

    ERIC Educational Resources Information Center

    Ahmed, K.; Dickinson, C. J.

    MCAID is a relatively machine-independent technique for writing computer-aided instructional material consisting of descriptive text, multiple choice questions, and the ability to call compiled subroutines to perform extensive calculations. It was specially developed to incorporate test-authoring around complex mathematical models to explore a…

  18. High-Dimensional Modeling for Cytometry: Building Rock Solid Models Using GemStone™ and Verity Cen-se'™ High-Definition t-SNE Mapping.

    PubMed

    Bruce Bagwell, C

    2018-01-01

    This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.

  19. Convergence Properties of a Class of Probabilistic Adaptive Schemes Called Sequential Reproductive Plans. Psychology and Education Series, Technical Report No. 210.

    ERIC Educational Resources Information Center

    Martin, Nancy

    Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…

  20. Mathematical Practice in Textbooks Analysis: Praxeological Reference Models, the Case of Proportion

    ERIC Educational Resources Information Center

    Wijayanti, Dyana; Winsløw, Carl

    2017-01-01

    We present a new method in textbook analysis, based on so-called praxeological reference models focused on specific content at task level. This method implies that the mathematical contents of a textbook (or textbook part) are analyzed in terms of the tasks and techniques which are exposed to or demanded from readers; this can then be interpreted…

  1. Interoperability Policy Roadmap

    DTIC Science & Technology

    2010-01-01

    Retrieval – SMART: The technique developed by Dr. Gerard Salton for automated information retrieval and text analysis is called the vector-space… References cited include: Salton, G., Wong, A., Yang, C.S., "A Vector Space Model for Automatic Indexing", Communications of the ACM, 18, 613-620; [10] Salton, G., McGill…

  2. Nursing 2000: Collaboration to Promote Careers in Registered Nursing.

    ERIC Educational Resources Information Center

    Wilson, Connie S.; Mitchell, Barbara S.

    1999-01-01

    The effectiveness of the collaborative Nursing 2000 model in promoting nursing careers was evaluated through a survey of 1,598 nursing students (637 responses). Most effective techniques were the "shadow a nurse" program, publications, classroom and community presentations, and career-counseling telephone calls. (SK)

  3. Automated recognition of bird song elements from continuous recordings using dynamic time warping and hidden Markov models: a comparative study.

    PubMed

    Kogan, J A; Margoliash, D

    1998-04-01

    The performance of two techniques is compared for automated recognition of bird song units from continuous recordings. The advantages and limitations of dynamic time warping (DTW) and hidden Markov models (HMMs) are evaluated on a large database of male songs of zebra finches (Taeniopygia guttata) and indigo buntings (Passerina cyanea), which have different types of vocalizations and have been recorded under different laboratory conditions. Depending on the quality of recordings and complexity of song, the DTW-based technique gives excellent to satisfactory performance. Under challenging conditions such as noisy recordings or presence of confusing short-duration calls, good performance of the DTW-based technique requires careful selection of templates that may demand expert knowledge. Because HMMs are trained, equivalent or even better performance of HMMs can be achieved based only on segmentation and labeling of constituent vocalizations, albeit with many more training examples than DTW templates. One weakness in HMM performance is the misclassification of short-duration vocalizations or song units with more variable structure (e.g., some calls, and syllables of plastic songs). To address these and other limitations, new approaches for analyzing bird vocalizations are discussed.
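
    For reference, the DTW half of the comparison reduces to a short dynamic program. This generic sketch (Euclidean frame distance, none of the band constraints or template-selection heuristics from the study) illustrates why a template still matches a time-stretched rendition of the same "syllable":

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two feature sequences
    (rows = frames), using Euclidean frame distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: a template matches a time-stretched copy of itself far better
# than an unrelated signal, despite the length mismatch.
t = np.linspace(0, 1, 50)
template = np.column_stack([np.sin(2 * np.pi * 5 * t)])
stretched = np.column_stack([np.sin(2 * np.pi * 5 * np.linspace(0, 1, 80))])
other = np.column_stack([np.sin(2 * np.pi * 9 * np.linspace(0, 1, 80))])
print(dtw_distance(template, stretched), dtw_distance(template, other))
```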

  4. Efficient massively parallel simulation of dynamic channel assignment schemes for wireless cellular communications

    NASA Technical Reports Server (NTRS)

    Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.

    1994-01-01

    Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.

  5. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting in animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  6. Psychological Dynamics of Adolescent Satanism.

    ERIC Educational Resources Information Center

    Moriarty, Anthony R.; Story, Donald W.

    1990-01-01

    Attempts to describe the psychological processes that predispose an individual to adopt a Satanic belief system. Describes processes in terms of child-parent relationships and the developmental tasks of adolescence. Proposes a model called the web of psychic tension to represent the process of Satanic cult adoption. Describes techniques for…

  7. Systematization of a set of closure techniques.

    PubMed

    Hausken, Kjell; Moxnes, John F

    2011-11-01

    Approximations in population dynamics are gaining popularity since stochastic models in large populations are time-consuming even on a computer. Stochastic modeling gives rise to an infinite set of ordinary differential equations for the moments. Closure models are useful since they recast this infinite set into a finite set of ordinary differential equations. This paper systematizes a set of closure approximations. We develop a system, which we call a power p closure of n moments, where 0 ≤ p ≤ n. Keeling's (2000a,b) approximation with third-order moments is shown to be an instantiation of this system, which we call a power 3 closure of 3 moments. We present an epidemiological example and evaluate the system for third and fourth moments compared with Monte Carlo simulations.

  8. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a local modeling technique based on a new idea called "Just-In-Time (JIT) modeling". To apply JIT modeling to large databases online, "Large-scale database-based Online Modeling (LOM)" has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both "stepwise selection" and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
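
    A minimal sketch of the JIT idea underlying LOM, with the caveat that LOM's stepwise selection and quantization for efficient neighbour retrieval are replaced here by brute-force distance search: for each query, retrieve the nearest stored samples and fit a local linear model on just those. The plant map and values are invented.

```python
import numpy as np

def jit_predict(X, y, query, k=20):
    """Just-In-Time (lazy local) modeling: for each query, retrieve the k
    nearest stored samples and fit a local affine model on just those."""
    d = np.linalg.norm(X - query, axis=1)
    nn = np.argsort(d)[:k]                       # neighbour retrieval
    A = np.column_stack([X[nn], np.ones(k)])     # local affine model
    coef, *_ = np.linalg.lstsq(A, y[nn], rcond=None)
    return np.append(query, 1.0) @ coef

# Toy usage on a nonlinear plant map stored as a large database.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(5000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(5000)
q = np.array([1.0, -0.5])
print(jit_predict(X, y, q), np.sin(q[0]) + 0.5 * q[1] ** 2)
```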

  9. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. Particularly, enhanced resolution techniques are developed and/or used in this specific context for the first time. When compared to traditional time-frequency representation methods, the proposed techniques are more capable of showing previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study is focused on bat echolocation recordings, the results are more general and applicable to many other types of signal.
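
    As a baseline (a plain short-time Fourier transform, not the enhanced-resolution estimators developed in the paper), the following sketch extracts the frequency ridge of a synthetic FM sweep standing in for a bat call; the sampling rate and sweep parameters are invented.

```python
import numpy as np
from scipy.signal import spectrogram, chirp

# Synthetic FM sweep standing in for a bat echolocation call:
# 80 kHz down to 30 kHz over 5 ms, sampled at 250 kHz.
fs = 250_000
t = np.arange(0, 0.005, 1 / fs)
call = chirp(t, f0=80_000, f1=30_000, t1=t[-1], method="linear")

f, seg_t, Sxx = spectrogram(call, fs=fs, nperseg=128, noverlap=112)
peak = f[Sxx.argmax(axis=0)]        # ridge: dominant frequency per frame
print(peak[:5], peak[-5:])          # sweeps from ~80 kHz down toward ~30 kHz
```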

  10. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
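
    The decoding step can be illustrated with a generic log-domain Viterbi decoder; the two-state, three-symbol toy HMM below is invented and much smaller than the multi-base state space a nanopore model would need.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden state path for an HMM (log-domain Viterbi).
    obs: observation indices; log_A[i, j]: i->j transition; log_B[i, o]."""
    n_states, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    psi = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # all predecessor scores
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrace
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy usage: 2 hidden "sequence contexts", 3 observable current levels.
log = np.log
pi = np.array([0.6, 0.4]); A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 0, 2, 2, 1], log(pi), log(A), log(B)))
```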

  11. Toward fidelity between specification and implementation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing

    1994-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  12. Verification and validation of a reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  13. Response spectra analysis of the modal summation technique verified by observed seismometer and accelerometer waveform data of the M6.5 Pidie Jaya Earthquake

    NASA Astrophysics Data System (ADS)

    Irwandi; Rusydy, Ibnu; Muksin, Umar; Rudyanto, Ariska; Daryono

    2018-05-01

    Wave vibration confined within boundaries produces stationary wave solutions in discrete states called modes. Many physics applications are related to modal solutions, such as air-column resonance, string vibration, and the emission spectrum of atomic hydrogen. Naturally, energy is distributed over several modes, so the complete solution is obtained from the sum over all modes, a procedure called modal summation. The modal summation technique was applied to simulate surface wave propagation through the crustal structure of the earth. The method is computationally efficient because it uses a 1D structural model and does not need to calculate the overall wave propagation. The simulation results for the magnitude 6.5 Pidie Jaya earthquake show that the response spectra from the modal summation technique correlate well with the observed seismometer and accelerometer waveform data, especially at the KCSI (Kotacane) station. On the other hand, at the LASI (Langsa) station the simulated modal response is relatively lower than the observation. The lower response spectral estimate is obtained because the station is located in a thick sedimentary basin, which causes an amplification effect. This is a limitation of the modal summation technique, and it should therefore be combined with a different finite simulation on a 2D local structural model of the basin.

  14. Source localization of narrow band signals in multipath environments, with application to marine mammals

    NASA Astrophysics Data System (ADS)

    Valtierra, Robert Daniel

    Passive acoustic localization has benefited from many major developments and has become an increasingly important focus in marine mammal research. Several challenges still remain. This work seeks to address several of these challenges, such as tracking the calling depths of baleen whales. In this work, data from an array of widely spaced Marine Acoustic Recording Units (MARUs) was used to achieve three-dimensional localization by combining the methods of Time Difference of Arrival (TDOA) and Direct-Reflected Time Difference of Arrival (DRTD) along with a newly developed autocorrelation technique. TDOA was applied to the data for two-dimensional (latitude and longitude) localization, and depth was resolved using DRTD. Previously, DRTD had been limited to pulsed broadband signals, such as sperm whale or dolphin echolocation, where individual direct and reflected signals are separated in time. Due to the length of typical baleen whale vocalizations, individual multipath signal arrivals can overlap, making time differences of arrival difficult to resolve. This problem can be solved using an autocorrelation, which can extract reflection information from overlapping signals. To establish this technique, a derivation was made to model the autocorrelation of a direct signal and its overlapping reflection. The model was exploited to derive performance limits allowing for prediction of the minimum resolvable direct-reflected time difference for a known signal type. The dependence on signal parameters (sweep rate, call duration) was also investigated. The model was then verified using both recorded and simulated data from two analysis cases for North Atlantic right whales (NARWs, Eubalaena glacialis) and humpback whales (Megaptera novaeangliae). The newly developed autocorrelation technique was then combined with DRTD and tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The combined DRTD-autocorrelation methods enabled calling depth and range estimations of a vocalizing NARW and humpback whale in two separate cases. The DRTD-autocorrelation method was then combined with TDOA to create a three-dimensional track of a NARW in the Stellwagen Bank National Marine Sanctuary. Results from these experiments illustrated the potential of the combined methods to successfully resolve baleen calling depths in three dimensions.
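
    The core autocorrelation trick can be sketched in a few lines: a long FM call is summed with an attenuated, delayed copy of itself, and the autocorrelation of the mixture still shows a side peak at the direct-reflected time difference even though the arrivals overlap. The sampling rate, sweep, delay, and attenuation below are invented, and the dissertation's derivation-based performance limits are not reproduced.

```python
import numpy as np
from scipy.signal import chirp

# A long tonal call overlapped with its surface-reflected copy.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
call = chirp(t, f0=100, f1=300, t1=t[-1])          # 1 s upsweep, 100-300 Hz
delay, alpha = 0.080, 0.6                          # 80 ms lag, reflection loss
d = int(delay * fs)
rx = np.concatenate([call, np.zeros(d)])
rx[d:] += alpha * call                             # overlapping reflection
rx += 0.05 * np.random.default_rng(2).standard_normal(len(rx))

ac = np.correlate(rx, rx, mode="full")[len(rx) - 1:]   # positive lags only
ac[:int(0.02 * fs)] = 0.0                              # suppress zero-lag lobe
print(f"estimated lag: {ac.argmax() / fs * 1000:.1f} ms (true 80.0 ms)")
```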

  15. An experimental ward. Improving care and learning.

    PubMed

    Ronan, L; Stoeckle, J D

    1992-01-01

    The rapidly changing health care system is still largely organized according to old and increasingly outdated models. The contemporary demands of patient care and residency training call for an experimental ward, which can develop and test new techniques in hospital organization and the delivery of care in a comprehensive way.

  16. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g., whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.
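
    A hedged sketch of the idea (plain Monte Carlo sampling through an illustrative cosine-loss array model, not the probabilistic tools or ISS parameter values used in the study): uncertain inputs are sampled, pushed through the power model, and summarized as a distribution of the response variable.

```python
import numpy as np

# Toy power model P = eta * A * G * cos(theta); all values are illustrative.
rng = np.random.default_rng(3)
n = 100_000
eta = rng.normal(0.14, 0.005, n)                      # conversion efficiency
G = rng.normal(1361.0, 5.0, n)                        # solar flux, W/m^2
theta = rng.normal(np.deg2rad(10), np.deg2rad(3), n)  # pointing error, rad
A = 100.0                                             # array area, m^2 (fixed)
P = eta * A * G * np.cos(theta)                       # sampled power output

print(f"mean {P.mean()/1e3:.1f} kW, std {P.std()/1e3:.2f} kW, "
      f"5th pct {np.percentile(P, 5)/1e3:.1f} kW")
```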

  17. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated C_n^2 ≈ 6.01 × 10^-9 m^(-2/3), l_0 ≈ 17.9 mm, and L_0 ≈ 15.5 m.

  18. Dynamics of the brain: Mathematical models and non-invasive experimental studies

    NASA Astrophysics Data System (ADS)

    Toronov, V.; Myllylä, T.; Kiviniemi, V.; Tuchin, V. V.

    2013-10-01

    Dynamics is an essential aspect of brain function. In this article we review theoretical models of neural and haemodynamic processes in the human brain and experimental non-invasive techniques developed to study brain functions and to measure dynamic characteristics, such as neurodynamics, neurovascular coupling, haemodynamic changes due to brain activity and autoregulation, and the cerebral metabolic rate of oxygen. We focus on emerging theoretical biophysical models and experimental functional neuroimaging results, obtained mostly by functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). We also include our current results on the effects of blood pressure variations on cerebral haemodynamics and on simultaneous measurements of fast processes in the brain by near-infrared spectroscopy and a novel functional MRI technique called magnetic resonance encephalography. Based on rapid progress in theoretical and experimental techniques, growing computational capacities, and the combined use of rapidly improving and emerging neuroimaging techniques, we anticipate great achievements in the overall knowledge of the human brain during the next decade.

  19. Evolutionary neural networks for anomaly detection based on the behavior of a program.

    PubMed

    Han, Sang-Jun; Cho, Sung-Bae

    2006-06-01

    The process of learning the behavior of a given program by using machine-learning techniques (based on system-call audit data) is effective for detecting intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are some of the representative methods for intrusion detection. Among them, neural networks are known for good performance in learning system-call sequences. In order to apply this knowledge to real-world problems successfully, it is important to determine the structures and weights of the neural networks. However, finding the appropriate structures requires very long time periods because there are no suitable analytical solutions. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of using ENNs is that it takes less time to obtain superior neural networks than when using conventional approaches. This is because they discover the structures and weights of the neural networks simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.

  20. The Buffer Diagnostic Prototype: A fault isolation application using CLIPS

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1994-01-01

    This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, coupled with its lack of any automatic fault-isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experimental knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.

  1. Managing a work-life balance: the experiences of midwives working in a group practice setting.

    PubMed

    Fereday, Jennifer; Oster, Candice

    2010-06-01

    To explore how a group of midwives achieved a work-life balance working within a caseload model of care with flexible work hours and on-call work. In-depth interviews were conducted and the data were analysed using a data-driven thematic analysis technique. Children, Youth and Women's Health Service (CYWHS) (previously Women's and Children's Hospital), Adelaide, where a midwifery service known as Midwifery Group Practice (MGP) offers a caseload model of care to women within a midwife-managed unit. 17 midwives who were currently working, or had previously worked, in MGP. Analysis of the midwives' individual experiences provided insight into how midwives managed the flexible hours and on-call work to achieve a sustainable work-life balance within a caseload model of care. It is important for midwives working in MGP to actively manage the flexibility of their role with time on call. Organisational, team and individual structure influenced how flexibility of hours was managed; however, a period of adjustment was required to achieve this balance. The study findings offer a description of effective, sustainable strategies to manage flexible hours and on-call work that may assist other midwives working in a similar role or considering this type of work setting.

  2. Modeling of environmentally induced transients within satellites

    NASA Technical Reports Server (NTRS)

    Stevens, N. John; Barbay, Gordon J.; Jones, Michael R.; Viswanathan, R.

    1987-01-01

    A technique is described that allows an estimation of possible spacecraft charging hazards. This technique, called SCREENS (spacecraft response to environments of space), utilizes the NASA charging analyzer program (NASCAP) to estimate the electrical stress locations and the charge stored in the dielectric coatings due to spacecraft encounter with a geomagnetic substorm environment. This information can then be used to determine the response of the spacecraft electrical system to a surface discharge by means of lumped element models. The coupling into the electronics is assumed to be due to magnetic linkage from the transient currents flowing as a result of the discharge transient. The behavior of a spinning spacecraft encountering a severe substorm is predicted using this technique. It is found that systems are potentially vulnerable to upset if transient signals enter through the ground lines.

  3. How to mathematically optimize drug regimens using optimal control.

    PubMed

    Moore, Helen

    2018-02-01

    This article gives an overview of a technique called optimal control, which is used to optimize real-world quantities represented by mathematical models. I include background information about the historical development of the technique and applications in a variety of fields. The main focus here is the application to diseases and therapies, particularly the optimization of combination therapies, and I highlight several such examples. I also describe the basic theory of optimal control, and illustrate each of the steps with an example that optimizes the doses in a combination regimen for leukemia. References are provided for more complex cases. The article is aimed at modelers working in drug development, who have not used optimal control previously. My goal is to make this technique more accessible in the biopharma community.
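
    As a hedged companion to the overview, here is the simplest closed-form instance of optimal control, an infinite-horizon linear-quadratic regulator solved via the algebraic Riccati equation; the scalar "burden/dose" dynamics and weights are invented for illustration and are not the finite-horizon leukemia example from the article.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy: x is a disease-burden deviation, u the dose rate (illustrative only).
A = np.array([[0.2]])        # unstable growth without treatment
B = np.array([[-1.0]])       # dose reduces burden
Q = np.array([[1.0]])        # penalty on burden
R = np.array([[0.5]])        # penalty on dose (toxicity proxy)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback gain, u = -K x
print("gain K =", K)

# Closed-loop simulation x' = (A - B K) x from x0 = 1.
dt, x = 0.01, np.array([1.0])
for _ in range(1000):
    x = x + dt * (A - B @ K) @ x
print("burden after 10 time units:", x.item())
```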

  4. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
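
    The MMAE backbone that GRAPE builds on can be sketched with a bank of scalar Kalman filters, one per candidate parameter value, with weights updated from innovation likelihoods; GRAPE's grid-based stratification, LHS resampling, and convergence logic are omitted here, and all numbers are invented.

```python
import numpy as np

# Scalar system x' = a*x + w, y = x + v, with unknown parameter a.
rng = np.random.default_rng(4)
a_true, q, r = 0.8, 0.05, 0.2
a_grid = np.array([0.2, 0.5, 0.8, 1.1])      # fixed parameter samples
w = np.full(len(a_grid), 1 / len(a_grid))    # model weights
xh = np.zeros(len(a_grid))                   # per-model state estimates
P = np.ones(len(a_grid))                     # per-model variances

x = 1.0
for _ in range(300):
    x = a_true * x + rng.normal(0, np.sqrt(q))     # true system
    y = x + rng.normal(0, np.sqrt(r))              # measurement
    xh, P = a_grid * xh, a_grid**2 * P + q         # time update, per model
    S = P + r                                      # innovation variance
    innov = y - xh
    K = P / S
    xh, P = xh + K * innov, (1 - K) * P            # measurement update
    # weight update: Gaussian innovation likelihoods, then normalize
    like = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    w = w * like + 1e-300
    w /= w.sum()

print("weights:", w.round(3), "-> a estimate:", float(w @ a_grid))
```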

  5. Microscopic Shell Model Calculations for sd-Shell Nuclei

    NASA Astrophysics Data System (ADS)

    Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Shirokov, Andrey M.; Smirnova, Nadya A.; Vary, James P.

    Several techniques now exist for performing detailed and accurate calculations of the structure of light nuclei, i.e., A ≤ 16. Going to heavier nuclei requires new techniques or extensions of old ones. One of these is the so-called No Core Shell Model (NCSM) with a Core approach, which involves an Okubo-Lee-Suzuki (OLS) transformation of a converged NCSM result into a single major shell, such as the sd-shell. The obtained effective two-body matrix elements can be separated into core and single-particle (s.p.) energies plus residual two-body interactions, which can be used for performing standard shell-model (SSM) calculations. As an example, an application of this procedure will be given for nuclei at the beginning of the sd-shell.

  6. Using State Merging and State Pruning to Address the Path Explosion Problem Faced by Symbolic Execution

    DTIC Science & Technology

    2014-06-19

    … urgent and compelling. Recent efforts in this area automate program analysis techniques using model checking and symbolic execution [2, 5–7]. … a bounded model checking tool for x86 binary programs developed at the Air Force Institute of Technology (AFIT). Jiseki creates a bit-vector logic model based … assume there are n different paths through the function foo. The program could potentially call the function foo a bounded number of times, resulting in n …

  7. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability is called MC-HARP, which efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.

  8. DNA base-calling from a nanopore using a Viterbi algorithm.

    PubMed

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio.

  9. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  10. Probabilistic vs linear blending approaches to shared control for wheelchair driving.

    PubMed

    Ezeh, Chinemelu; Trautman, Pete; Devigne, Louise; Bureau, Valentin; Babel, Marie; Carlson, Tom

    2017-07-01

    Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), where the user's commands and the wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking into account uncertainty in the interaction. We demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
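
    A minimal sketch of the contrast, with PSC simplified to precision-weighted Gaussian fusion (a stand-in for the paper's formulation, not its exact model): linear blending mixes commands in a fixed proportion, while the probabilistic fusion automatically discounts the more uncertain source.

```python
import numpy as np

def linear_blend(u_user, u_auto, alpha):
    """Traditional linear blending: fixed-proportion mix of the two commands."""
    return (1 - alpha) * u_user + alpha * u_auto

def probabilistic_shared_control(u_user, var_user, u_auto, var_auto):
    """Precision-weighted fusion: model each command as a Gaussian and take
    the peak of the product, so uncertain sources automatically count less."""
    w_user, w_auto = 1 / var_user, 1 / var_auto
    return (w_user * u_user + w_auto * u_auto) / (w_user + w_auto)

# A noisy joystick (high variance) near an obstacle: PSC leans on the
# planner, while LB keeps mixing in the unreliable user command.
u_user, u_auto = np.array([1.0, 0.0]), np.array([0.3, 0.4])
print(linear_blend(u_user, u_auto, alpha=0.5))
print(probabilistic_shared_control(u_user, 4.0, u_auto, 0.25))
```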

  11. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
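
    The on-the-fly idea can be sketched on a tiny example: a Gauss-Seidel sweep for the stationary distribution of a birth-death Markov chain in which generator columns are produced by a function rather than stored. The queueing model and rates are invented, and the paper's modified adaptive variant and caching scheme are not reproduced.

```python
import numpy as np

# Matrix-free Gauss-Seidel for pi @ Q = 0 on an M/M/1/N queue.
N, lam, mu = 50, 0.8, 1.0

def column(j):
    """Non-zero off-diagonal entries (i, q_ij) of column j of Q, plus q_jj."""
    diag = -((lam if j < N else 0.0) + (mu if j > 0 else 0.0))
    entries = []
    if j > 0:
        entries.append((j - 1, lam))   # birth into j from j-1
    if j < N:
        entries.append((j + 1, mu))    # death into j from j+1
    return entries, diag

pi = np.full(N + 1, 1.0 / (N + 1))
for sweep in range(2000):
    for j in range(N + 1):             # in-place (Gauss-Seidel) updates
        entries, diag = column(j)      # column generated on the fly
        pi[j] = -sum(pi[i] * q for i, q in entries) / diag
    pi /= pi.sum()                     # renormalize each sweep

rho = lam / mu
exact = rho ** np.arange(N + 1); exact /= exact.sum()
print("max error vs exact geometric solution:", np.abs(pi - exact).max())
```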

  12. Anonymizing 1:M microdata with high utility

    PubMed Central

    Gong, Qiyuan; Luo, Junzhou; Yang, Ming; Ni, Weiwei; Li, Xiao-Bai

    2016-01-01

    Preserving privacy and utility during data publishing and data mining is essential for individuals, data providers and researchers. However, studies in this area typically assume that one individual has only one record in a dataset, which is unrealistic in many applications. Having multiple records for an individual leads to new privacy leakages. We call such a dataset a 1:M dataset. In this paper, we propose a novel privacy model called (k, l)-diversity that addresses disclosure risks in 1:M data publishing. Based on this model, we develop an efficient algorithm named 1:M-Generalization to preserve privacy and data utility, and compare it with alternative approaches. Extensive experiments on real-world data show that our approach outperforms the state-of-the-art technique, in terms of data utility and computational cost. PMID:28603388

  13. Automatic classification of animal vocalizations

    NASA Astrophysics Data System (ADS)

    Clemins, Patrick J.

    2005-11-01

    Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of different techniques to perform many of the aforementioned bioacoustics tasks. Mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite automaton, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing for bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, this framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared to results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system and the results are compared to labels assigned by experts. The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.

  14. Swedish PE Teachers' Understandings of Legitimate Movement in a Criterion-Referenced Grading System

    ERIC Educational Resources Information Center

    Svennberg, Lena

    2017-01-01

    Background: Physical Education (PE) has been associated with a multi-activity model in which movement is related to sport discourses and sport techniques. However, as in many international contexts, the Swedish national PE syllabus calls for a wider and more inclusive concept of movement. Complex movement adapted to different settings is valued,…

  15. School Location and Teacher Supply: Understanding the Distribution of Teacher Effects

    ERIC Educational Resources Information Center

    Gagnon, Douglas

    2015-01-01

    The U.S. Department of Education has recently called on all states to create plans to ensure equal access to excellent teachers. Although there are numerous limitations in using VAM [value-added modeling] in high-stakes contexts such as teacher evaluation, such techniques offer promise in helping states grapple with issues in equitable access.…

  16. Creating physically-based three-dimensional microstructures: Bridging phase-field and crystal plasticity models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Owen, Steven J.; Abdeljawad, Fadi F.

    In order to better incorporate microstructures in continuum-scale models, we use a novel finite element (FE) meshing technique to generate three-dimensional polycrystalline aggregates from a phase field grain growth model of grain microstructures. The proposed meshing technique creates hexahedral FE meshes that capture smooth interfaces between adjacent grains. Three-dimensional realizations of grain microstructures from the phase field model are used in crystal plasticity-finite element (CP-FE) simulations of polycrystalline α-iron. We show that the interface-conformal meshes significantly reduce artificial stress localizations in voxelated meshes that exhibit the so-called "wedding cake" interfaces. This framework provides a direct link between two mesoscale models - phase field and crystal plasticity - and for the first time allows mechanics simulations of polycrystalline materials using three-dimensional hexahedral finite element meshes with realistic topological features.

  17. Comparison of frequency-domain and time-domain rotorcraft vibration control methods

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.

    1984-01-01

    Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.

  18. Ductile film delamination from compliant substrates using hard overlayers

    PubMed Central

    Cordill, M.J.; Marx, V.M.; Kirchlechner, C.

    2014-01-01

    Flexible electronic devices call for copper and gold metal films to adhere well to polymer substrates. Measuring the interfacial adhesion of these material systems is often challenging, requiring the formulation of different techniques and models. Presented here is a strategy to induce well defined areas of delamination to measure the adhesion of copper films on polyimide substrates. The technique utilizes a stressed overlayer and tensile straining to cause buckle formation. The described method allows one to examine the effects of thin adhesion layers used to improve the adhesion of flexible systems. PMID:25641995

  19. Ductile film delamination from compliant substrates using hard overlayers.

    PubMed

    Cordill, M J; Marx, V M; Kirchlechner, C

    2014-11-28

    Flexible electronic devices call for copper and gold metal films to adhere well to polymer substrates. Measuring the interfacial adhesion of these material systems is often challenging, requiring the formulation of different techniques and models. Presented here is a strategy to induce well defined areas of delamination to measure the adhesion of copper films on polyimide substrates. The technique utilizes a stressed overlayer and tensile straining to cause buckle formation. The described method allows one to examine the effects of thin adhesion layers used to improve the adhesion of flexible systems.

  20. Analysis of High Spatial, Temporal, and Directional Resolution Recordings of Biological Sounds in the Southern California Bight

    DTIC Science & Technology

    2013-09-30

    … transiting whales in the Southern California Bight, b) the use of passive underwater acoustic techniques for improved habitat assessment in biologically sensitive areas and improved ecosystem modeling, and c) the application of the physics of excitable media to numerical modeling of biological choruses. … was on the potential impact of man-made sounds on the calling behavior of transiting humpback whales in the Southern California Bight. The main …

  1. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    USGS Publications Warehouse

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
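
    A hedged toy of the likelihood-ratio idea for a SNP panel, using an invented uniform-misread error model rather than the published one: per locus, the probability of the two observed genotypes under "same individual" (shared latent genotype) is compared with "different individuals" (independent genotypes), and log ratios are summed across loci.

```python
import numpy as np

def genotype_freqs(p):
    """Hardy-Weinberg frequencies of SNP genotypes (0, 1, 2 alt alleles)."""
    return np.array([(1 - p)**2, 2 * p * (1 - p), p**2])

def match_llr(obs1, obs2, p_alt, eps=0.02):
    """Log-likelihood ratio that two observed SNP genotype vectors come from
    the same individual vs. two random individuals, allowing per-locus
    genotyping error eps (toy model: misread to either other genotype with
    probability eps/2). Positive values favour a recapture call."""
    llr = 0.0
    for o1, o2, p in zip(obs1, obs2, p_alt):
        f = genotype_freqs(p)
        # err[g, o] = P(observe o | latent genotype g)
        err = np.full((3, 3), eps / 2) + np.eye(3) * (1 - 1.5 * eps)
        same = sum(f[g] * err[g, o1] * err[g, o2] for g in range(3))
        diff = (f @ err[:, o1]) * (f @ err[:, o2])
        llr += np.log(same) - np.log(diff)
    return llr

rng = np.random.default_rng(5)
p = rng.uniform(0.2, 0.8, 64)                      # 64-SNP panel
g = np.array([rng.choice(3, p=genotype_freqs(pi_)) for pi_ in p])
noisy = g.copy(); noisy[0] = (noisy[0] + 1) % 3    # one genotyping error
print(match_llr(g, noisy, p), match_llr(g, rng.integers(0, 3, 64), p))
```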

  2. Sharing motherhood: biological lesbian co-mothers, a new IVF indication.

    PubMed

    Marina, S; Marina, D; Marina, F; Fosas, N; Galiana, N; Jové, I

    2010-04-01

    We herein present the initial experiences of the CEFER Institute of Reproduction in the formation of a new family model: two biological mothers, lesbians, one who provides the eggs and the other who carries the embryo in her womb. We have called this family model ROPA (Reception of Oocytes from PArtner). It is a pioneering event in Spain and among the first worldwide. Fourteen lesbian couples have undergone treatment using the ROPA technique. This paper briefly describes the technique. Six pregnancies have been obtained from 13 embryo transfers. There were two miscarriages and there are three ongoing pregnancies, one of them twins. One healthy female baby was born. The following aspects are addressed: (i) the legal status of lesbian couples in Western countries; (ii) lesbian couples' access to assisted reproduction techniques; (iii) ethical aspects; (iv) medical acceptability; and (v) single mother versus lesbian mothers. In countries where the ROPA technique is legal, it offers lesbian couples a more favourable route, involving both partners, to start a family, and doctors who treat lesbian couples must be sensitive to this new family model.

  3. Enhanced polychronization in a spiking network with metaplasticity.

    PubMed

    Guise, Mira; Knott, Alistair; Benuskova, Lubica

    2015-01-01

    Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, by increasing it when input is low and decreasing it when input is high. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, that has a potential role in representation and memory. A network with polychronicity has the ability to produce non-synchronous but precisely timed sequences of neural firing events that can arise from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but are sensitive to changes in synaptic weight distribution. We use a technique we have recently developed called Response Fingerprinting to show that PNGs formed in the presence of metaplasticity are significantly larger than those with no metaplasticity. A potential mechanism for this enhancement is proposed that links an inherent property of integrator type neurons called spike latency to an increase in the tolerance of PNG neurons to jitter in their inputs.

  4. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    NASA Technical Reports Server (NTRS)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  5. Detection of Cutting Tool Wear using Statistical Analysis and Regression Model

    NASA Astrophysics Data System (ADS)

    Ghani, Jaharah A.; Rizal, Muhammad; Nuawi, Mohd Zaki; Haron, Che Hassan Che; Ramli, Rizauddin

    2010-10-01

    This study presents a new method for detecting cutting tool wear based on measured cutting force signals. A statistical method called the Integrated Kurtosis-based Algorithm for Z-Filter technique (I-kaz) was used to develop a regression model and a 3D graphical presentation of the I-kaz 3D coefficient during the machining process. The machining tests were carried out on a Colchester Master Tornado T4 CNC turning machine under dry cutting conditions. A Kistler 9255B dynamometer was used to measure the cutting force signals, which were transmitted, analyzed, and displayed in the DasyLab software. Various force signals from the machining operation were analyzed, each yielding its own I-kaz 3D coefficient. This coefficient was examined and its relationship with the flank wear land (VB) was determined. A regression model was developed from this relationship, and its results show that the I-kaz 3D coefficient decreases as tool wear increases. The result can then be used for real-time tool wear monitoring.
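
    The workflow of relating a signal statistic to flank wear can be sketched as follows. The actual I-kaz 3D coefficient has its own definition in the paper; here a generic kurtosis-weighted spread of the force signal stands in for it, and the wear data are entirely synthetic.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def wear_indicator(force_signal):
        # Stand-in for the paper's I-kaz 3D coefficient: a kurtosis-weighted
        # spread of the measured cutting-force signal (illustrative only).
        return np.std(force_signal) * abs(kurtosis(force_signal, fisher=False)) ** 0.5

    # Synthetic force signals for increasing flank wear VB (mm); we assume the
    # signal loses impulsiveness as wear grows, so the indicator decreases.
    rng = np.random.default_rng(1)
    VB = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
    coeffs = []
    for vb in VB:
        spikes = rng.standard_t(df=3, size=5000) * (0.4 - vb)
        coeffs.append(wear_indicator(100 + spikes))

    # First-order regression model relating the indicator to flank wear.
    slope, intercept = np.polyfit(coeffs, VB, 1)
    print(f"VB ~ {slope:.3f} * coeff + {intercept:.3f}")
    ```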

  6. Ontogenetic variation of heritability and maternal effects in yellow-bellied marmot alarm calls.

    PubMed

    Blumstein, Daniel T; Nguyen, Kathy T; Martin, Julien G A

    2013-05-07

    Individuals of many species produce distinctive vocalizations that may relay potential information about the signaller. The alarm calls of some species have been reported to be individually specific, and this distinctiveness may allow individuals to assess the reliability or kinship of callers. While not much is known generally about the heritability of mammalian vocalizations, if alarm calls were individually distinctive to permit kinship assessment, then call structure should be heritable. Here, we show conclusively for the first time that alarm call structure is heritable. We studied yellow-bellied marmots (Marmota flaviventris) and made nine quantitative measurements of their alarm calls. With a known genealogy, we used the animal model (a statistical technique) to estimate alarm call heritability. In juveniles, only one of the measured variables had heritability significantly different from zero; however, most variables had significant maternal environmental effects. By contrast, yearlings and adults had no significant maternal environmental effects, but the heritability of nearly all measured variables was significantly different from zero. Some, but not all of these heritable effects were significantly different across age classes. The presence of significantly non-zero maternal environmental effects in juveniles could reflect the impact of maternal environmental stresses on call structure. Regardless of this mechanism, maternal environmental effects could permit kinship recognition in juveniles. In older animals, the substantial genetic basis of alarm call structure suggests that calls could be used to assess kinship and, paradoxically, might also suggest a role of learning in call structure.

  7. Ontogenetic variation of heritability and maternal effects in yellow-bellied marmot alarm calls

    PubMed Central

    Blumstein, Daniel T.; Nguyen, Kathy T.; Martin, Julien G. A.

    2013-01-01

    Individuals of many species produce distinctive vocalizations that may relay potential information about the signaller. The alarm calls of some species have been reported to be individually specific, and this distinctiveness may allow individuals to assess the reliability or kinship of callers. While not much is known generally about the heritability of mammalian vocalizations, if alarm calls were individually distinctive to permit kinship assessment, then call structure should be heritable. Here, we show conclusively for the first time that alarm call structure is heritable. We studied yellow-bellied marmots (Marmota flaviventris) and made nine quantitative measurements of their alarm calls. With a known genealogy, we used the animal model (a statistical technique) to estimate alarm call heritability. In juveniles, only one of the measured variables had heritability significantly different from zero; however, most variables had significant maternal environmental effects. By contrast, yearlings and adults had no significant maternal environmental effects, but the heritability of nearly all measured variables was significantly different from zero. Some, but not all of these heritable effects were significantly different across age classes. The presence of significantly non-zero maternal environmental effects in juveniles could reflect the impact of maternal environmental stresses on call structure. Regardless of this mechanism, maternal environmental effects could permit kinship recognition in juveniles. In older animals, the substantial genetic basis of alarm call structure suggests that calls could be used to assess kinship and, paradoxically, might also suggest a role of learning in call structure. PMID:23466987

  8. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.
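
    The first, inversion step can be illustrated with a minimal least-squares sketch: for each candidate source location on a grid, the best-fit intensity follows in closed form, and the location minimizing the residual is retained. The Gaussian kernel below is a hypothetical stand-in for the adjoint functions of a real dispersion model, and all numbers are synthetic.

    ```python
    import numpy as np

    def forward_kernel(receptors, src_xy, sigma=150.0):
        """Hypothetical dispersion kernel: concentration per unit emission
        at each receptor for a source at src_xy (a stand-in for a real
        dispersion model's adjoint functions)."""
        d2 = np.sum((receptors - src_xy) ** 2, axis=1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def estimate_source(receptors, measured, grid):
        """Grid search over candidate locations; for each, the best-fit
        intensity follows in closed form from linear least squares."""
        best = (None, None, np.inf)
        for xy in grid:
            a = forward_kernel(receptors, xy)
            q = a @ measured / (a @ a)          # least-squares intensity
            resid = np.linalg.norm(measured - q * a)
            if resid < best[2]:
                best = (xy, q, resid)
        return best

    rng = np.random.default_rng(2)
    receptors = rng.uniform(0, 1000, size=(20, 2))
    true_xy, true_q = np.array([400.0, 600.0]), 5.0
    measured = true_q * forward_kernel(receptors, true_xy) + rng.normal(0, 0.05, 20)
    grid = [np.array([x, y]) for x in range(0, 1001, 50) for y in range(0, 1001, 50)]
    loc, q, _ = estimate_source(receptors, measured, grid)
    print("estimated location:", loc, "intensity:", round(q, 2))
    ```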

  9. Quality assessment of a new surgical simulator for neuroendoscopic training.

    PubMed

    Filho, Francisco Vaz Guimarães; Coelho, Giselle; Cavalheiro, Sergio; Lyra, Marcos; Zymberg, Samuel T

    2011-04-01

    Ideal surgical training models should be entirely reliable, non-toxic, easy to handle, and, if possible, low cost. All available models have their advantages and disadvantages. The choice of one or another will depend on the type of surgery to be performed. The authors created an anatomical model called the S.I.M.O.N.T. (Sinus Model Oto-Rhino Neuro Trainer) Neurosurgical Endotrainer, which can provide reliable neuroendoscopic training. The aim of the present study was to assess both the quality of the model and the development of surgical skills by trainees. The S.I.M.O.N.T. is built of a synthetic thermo-retractable, thermo-sensitive rubber called Neoderma, which, combined with different polymers, produces more than 30 different formulas. Quality assessment of the model was based on qualitative and quantitative data obtained from training sessions with 9 experienced and 13 inexperienced neurosurgeons. The techniques used for evaluation were face validation, retest and interrater reliability, and construct validation. The experts considered the S.I.M.O.N.T. capable of reproducing surgical situations as if they were real and of presenting great similarity with the human brain. Surgical results of serial training showed that the model could be considered precise. Finally, development and improvement of surgical skills by the trainees were observed and considered relevant to further training. It was also observed that the probability of any single error decreased dramatically after each training session, with a mean reduction of 41.65% (range 38.7%-45.6%). Neuroendoscopic training has some specific requirements. A unique set of instruments is required, as is a model that can resemble real-life situations. The S.I.M.O.N.T. is a new alternative model specially designed for this purpose. Validation techniques followed by precision assessments attested to the model's feasibility.

  10. Probabilistic Priority Message Checking Modeling Based on Controller Area Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Although the probabilistic model checking tool called PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique had not been applied to the controller area network (CAN). In this paper, we use PRISM to model the priority-message mechanism of CAN, because this mechanism has allowed CAN to become the leader in serial communication for automobile and industrial control. Modeling CAN makes it easy to analyze its characteristics and thus further improve the security and efficiency of automobiles. The Markov chain model helps us to model the behaviour of priority messages.
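
    PRISM models are written in PRISM's own modelling language, which is not reproduced here; as a plain-Python stand-in, the following Monte-Carlo sketch of CAN's bitwise arbitration (the node with the lowest identifier, i.e. the highest priority, wins each bus slot) illustrates the kind of priority behaviour such a probabilistic model captures. Arrival probabilities and node counts are made up.

    ```python
    import random

    random.seed(0)

    def simulate_can(n_slots=100000, p_arrival=(0.3, 0.3, 0.3)):
        """Monte-Carlo sketch of CAN arbitration: in each bus slot, among
        all nodes with a pending frame, the lowest identifier (highest
        priority) transmits; the rest wait."""
        pending = [0, 0, 0]                    # queued frames per node (ids 0 < 1 < 2)
        delays = [0, 0, 0]                     # slots spent waiting per node
        for _ in range(n_slots):
            for i, p in enumerate(p_arrival):  # new frame arrivals
                if random.random() < p:
                    pending[i] += 1
            waiting = [i for i in range(3) if pending[i] > 0]
            if waiting:
                winner = min(waiting)          # lowest id wins arbitration
                pending[winner] -= 1
                for i in waiting:
                    if i != winner:
                        delays[i] += 1
        return delays

    print("waiting slots per node (high -> low priority):", simulate_can())
    ```

    The simulation makes the qualitative point a formal PRISM analysis would quantify exactly: waiting time grows sharply as message priority drops.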

  11. Anticipation of the landing shock phenomenon in flight simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1987-01-01

    An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with significant probability, their initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces will accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground-strike problem was solved by a technique called anticipation equations, described here. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.
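
    A minimal reconstruction of the anticipation idea (a sketch, not the Ames implementation): before integrating a full cycle, predict ballistically whether the gear would cross the runway plane within that cycle; if so, solve for the crossing time and split the step there, so the stiff oleo equations start from a realistic, barely-compressed initial condition.

    ```python
    def step_with_anticipation(h, v, dt, g=-9.81):
        """One simulation cycle for gear height h (m) and sink rate v (m/s).
        If ballistic prediction says the runway (h = 0) would be crossed
        inside this cycle, split the step at the predicted contact time so
        the stiff oleo equations start from a realistic initial condition."""
        h_pred = h + v * dt + 0.5 * g * dt * dt
        if h > 0.0 and h_pred <= 0.0:
            # Solve h + v*t + 0.5*g*t^2 = 0 for the contact time 0 < t <= dt.
            disc = (v * v - 2.0 * g * h) ** 0.5
            t_hit = (-v - disc) / g
            h, v = 0.0, v + g * t_hit          # advance exactly to contact
            return h, v, dt - t_hit            # remaining time for the oleo model
        return h_pred, v + g * dt, 0.0

    h, v = 1.0, -2.0
    while h > 0.0:
        h, v, remainder = step_with_anticipation(h, v, dt=0.1)
    print(f"touchdown sink rate: {v:.2f} m/s, sub-step remaining: {remainder:.3f} s")
    ```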

  12. Applications of Evolutionary Technology to Manufacturing and Logistics Systems : State-of-the Art Survey

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin

    Many combinatorial optimization problems arising from industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning, layout design, and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
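
    As a minimal instance of the class of techniques surveyed, the sketch below evolves bit-string solutions of a toy knapsack problem with tournament selection, one-point crossover, and bit-flip mutation; the instance data are made up.

    ```python
    import random

    random.seed(42)
    values  = [10, 13, 7, 8, 15, 9, 6]
    weights = [ 5,  6, 3, 4,  8, 5, 3]
    CAP = 15

    def fitness(bits):
        w = sum(wt for b, wt in zip(bits, weights) if b)
        v = sum(vl for b, vl in zip(bits, values) if b)
        return v if w <= CAP else 0            # infeasible solutions score zero

    def evolve(pop_size=30, generations=100, p_mut=0.1):
        pop = [[random.randint(0, 1) for _ in values] for _ in range(pop_size)]
        for _ in range(generations):
            def select():
                # Tournament selection: best of three random individuals.
                return max(random.sample(pop, 3), key=fitness)
            nxt = []
            while len(nxt) < pop_size:
                a, b = select(), select()
                cut = random.randrange(1, len(values))   # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g for g in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = evolve()
    print("best packing:", best, "value:", fitness(best))
    ```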

  13. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

    With the increased performance of computers over recent decades, numerical simulations have become a very powerful tool, applicable also in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid plasma models give results of only limited accuracy. On the other hand, much more precise particle models are often limited to 2D problems because of their huge demands on computational resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches, particularly to their so-called iterative version. The study focuses on the mutual relations between fluid and particle models, demonstrated by calculations of the sheath structure of a low-temperature argon plasma near a cylindrical Langmuir probe at medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).

  14. Reachability analysis of real-time systems using time Petri nets.

    PubMed

    Wang, J; Deng, Y; Xu, G

    2000-01-01

    Time Petri nets (TPNs) are a popular Petri net model for specification and verification of real-time systems. A fundamental and most widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using the technique. In this paper, we present a new reachability based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.

  15. Documentation Driven Development for Complex Real-Time Systems

    DTIC Science & Technology

    2004-12-01

    This paper presents a novel approach for development of complex real-time systems, called the documentation-driven development (DDD) approach. This... time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main... stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real...

  16. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
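
    The re-weighted minimum-norm recursion at the heart of the algorithm can be sketched on a toy underdetermined system as below. The full SSLOFO adds sLORETA initialization, standardization, source-space shrinking, and temporal processing, all omitted here; the flat initialization, regularization constant, and toy lead field are assumptions of this sketch.

    ```python
    import numpy as np

    def focuss(A, b, iters=20, lam=1e-6):
        """Re-weighted minimum-norm (FOCUSS) recursion: each iterate solves
        a weighted minimum-norm problem with weights taken from the previous
        estimate, driving the solution toward a sparse configuration."""
        m, n = A.shape
        x = np.ones(n)                         # flat start (stands in for sLORETA)
        for _ in range(iters):
            W = np.diag(x)                     # weights from the previous estimate
            AW = A @ W
            x = W @ AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), b)
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(8, 40))               # toy lead field: 8 sensors, 40 sources
    x_true = np.zeros(40); x_true[[5, 22]] = [1.0, -0.7]
    b = A @ x_true
    x_hat = focuss(A, b)
    print("largest recovered sources:", np.argsort(np.abs(x_hat))[-2:])
    ```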

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genest-Beaulieu, C.; Bergeron, P., E-mail: genest@astro.umontreal.ca, E-mail: bergeron@astro.umontreal.ca

    We present a comparative analysis of atmospheric parameters obtained with the so-called photometric and spectroscopic techniques. Photometric and spectroscopic data for 1360 DA white dwarfs from the Sloan Digital Sky Survey (SDSS) are used, as well as spectroscopic data from the Villanova White Dwarf Catalog. We first test the calibration of the ugriz photometric system by using model atmosphere fits to observed data. Our photometric analysis indicates that the ugriz photometry appears well calibrated when the SDSS to AB95 zeropoint corrections are applied. The spectroscopic analysis of the same data set reveals that the so-called high-log g problem can be solved by applying published correction functions that take into account three-dimensional hydrodynamical effects. However, a comparison between the SDSS and the White Dwarf Catalog spectra also suggests that the SDSS spectra still suffer from a small calibration problem. We then compare the atmospheric parameters obtained from both fitting techniques and show that the photometric temperatures are systematically lower than those obtained from spectroscopic data. This systematic offset may be linked to the hydrogen line profiles used in the model atmospheres. We finally present the results of an analysis aimed at measuring surface gravities using photometric data only.

  18. Effect of geometrical parameters on pressure distributions of impulse manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Brune, Ryan Carl

    Impulse manufacturing techniques constitute a growing field of methods that utilize high-intensity pressure events to conduct useful mechanical operations. As interest in applying this technology continues to grow, greater understanding must be achieved with respect to output pressure events, in both magnitude and distribution. To address this need, a novel pressure measurement method has been developed, called the Profile Indentation Pressure Evaluation (PIPE) method, which systematically analyzes indentation patterns created by impulse events. Correlation with quasi-static test data and use of software-assisted analysis techniques allow colorized pressure maps to be generated for both electromagnetic and vaporizing foil actuator (VFA) impulse forming events. Development of this technique aided the introduction of a design method for electromagnetic path actuator systems, in which key geometrical variables are considered using a newly developed analysis method called the Path Actuator Proximal Array (PAPA) pressure model. This model considers key current distribution and proximity effects and interprets generated pressure by treating the adjacent conductor surfaces as proximal arrays of individual conductors. According to PIPE output pressure analysis, the PAPA model provides a reliable prediction of generated pressure for path actuator systems as local geometry is changed. Associated mechanical calculations allow pressure requirements to be calculated for shearing, flanging, and hemming operations, providing a design process for such cases. Additionally, the effect of geometry is investigated through a formability enhancement study using VFA metalworking techniques. A conical die assembly is utilized with both VFA high-velocity and traditional quasi-static test methods on varied Hasek-type sample geometries to elicit strain states consistent with different locations on a forming limit diagram. Digital image correlation techniques are utilized to measure major and minor strains for each sample type to compare limit strain results. Overall, testing indicated decreased formability at high velocity for 304 DDQ stainless steel and increased formability at high velocity for 3003-H14 aluminum. Microstructural and fractographic analysis helped dissect and analyze the observed differences in these cases. Together, these studies comprehensively explore the effects of geometrical parameters on the magnitude and distribution of impulse-manufacturing-generated pressure, establishing key guidelines and models for continued development and implementation in commercial applications.

  19. Three Reading Comprehension Strategies: TELLS, Story Mapping, and QARs.

    ERIC Educational Resources Information Center

    Sorrell, Adrian L.

    1990-01-01

    Three reading comprehension strategies are presented to assist learning-disabled students: an advance organizer technique called "TELLS Fact or Fiction" used before reading a passage, a schema-based technique called "Story Mapping" used while reading, and a postreading method of categorizing questions called…

  20. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    NASA Technical Reports Server (NTRS)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.
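
    Generic benchmark dose modeling (without the comparative step specific to this work) amounts to fitting a dose-response curve and solving for the dose at a chosen benchmark response. The model shape, the 10% benchmark response, and all data below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    def dose_response(d, a, b):
        # Simple exponential dose-response model (one of several shapes
        # commonly used in benchmark-dose software).
        return a * (1.0 - np.exp(-b * d))

    # Hypothetical toxicity score vs. instilled dose (entirely made-up numbers).
    dose = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 20.0])
    resp = np.array([0.0, 0.08, 0.18, 0.33, 0.52, 0.74])

    (a, b), _ = curve_fit(dose_response, dose, resp, p0=[1.0, 0.1])
    bmr = 0.10 * a                             # benchmark response: 10% of maximum
    bmd = brentq(lambda d: dose_response(d, a, b) - bmr, 1e-9, 100.0)
    print(f"benchmark dose at 10% response: {bmd:.2f} (same units as dose)")
    ```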

  1. Computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems

    NASA Astrophysics Data System (ADS)

    Ku, Walter H.; Gang, Guan-Wan; He, J. Q.; Ichitsubo, I.

    1988-05-01

    This final technical report presents results on the computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems. New results include analytical and computer aided device models of GaAs MESFETs and HEMTs or MODFETs, new synthesis techniques for monolithic feedback and distributed amplifiers and a new nonlinear CAD program for MIMIC called CADNON. This program incorporates the new MESFET and HEMT model and has been successfully applied to the design of monolithic millimeter-wave mixers.

  2. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  3. Reducing software mass through behavior control. [of planetary roving robots

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    Attention is given to the tradeoff between communication and computation for a planetary rover (both subsystems are very power-intensive, and either can be the major driver of the rover's power subsystem, and therefore of the rover's minimum mass and size). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. A novel approach to autonomous control, called behavior control, works quite differently from traditional control and for many tasks will yield a similar or superior level of autonomy while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.

  4. Interactive, graphics processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphics processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  5. Input-output relationship in social communications characterized by spike train analysis

    NASA Astrophysics Data System (ADS)

    Aoki, Takaaki; Takaguchi, Taro; Kobayashi, Ryota; Lambiotte, Renaud

    2016-10-01

    We study the dynamical properties of human communication through different channels, i.e., short messages, phone calls, and emails, adopting techniques from neuronal spike train analysis in order to characterize the temporal fluctuations of successive interevent times. We first measure the so-called local variation (LV) of incoming and outgoing event sequences of users and find that these in- and out-LV values are positively correlated for short messages and uncorrelated for phone calls and emails. Second, we analyze the response-time distribution after receiving a message to focus on the input-output relationship in each of these channels. We find that the time scales and amplitudes of response differ between the three channels. To understand the effects of the response-time distribution on the correlations between the LV values, we develop a point process model whose activity rate is modulated by incoming and outgoing events. Numerical simulations of the model indicate that a quick response to incoming events and a refractory effect after outgoing events are key factors to reproduce the positive LV correlations.
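
    The local variation measure is defined on the sequence of interevent intervals, contrasting each interval with its successor; a minimal implementation, with the usual interpretation of its value, is sketched below on synthetic interval data.

    ```python
    import numpy as np

    def local_variation(intervals):
        """Local variation (LV) of a sequence of interevent intervals, as
        used in spike-train analysis: LV ~ 1 for a Poisson process, < 1 for
        regular sequences, > 1 for bursty ones."""
        t = np.asarray(intervals, dtype=float)
        ratios = (t[:-1] - t[1:]) / (t[:-1] + t[1:])
        return 3.0 * np.mean(ratios ** 2)

    rng = np.random.default_rng(4)
    poisson_like = rng.exponential(1.0, 10000)                  # irregular: LV near 1
    regular = np.full(10000, 1.0) + rng.normal(0, .01, 10000)   # near-periodic: LV near 0
    print(round(local_variation(poisson_like), 2), round(local_variation(regular), 4))
    ```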

  6. Three Dimensional Reconstruction Workflows for Lost Cultural Heritage Monuments Exploiting Public Domain and Professional Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.

    2017-08-01

    In our paper, we document experiments and results of image-based 3D reconstructions of famous heritage monuments which were recently damaged or completely destroyed by the so-called Islamic state in Syria and Iraq. The specific focus of our research is on the combined use of professional photogrammetric imagery and of publicly available imagery from the web for optimally reconstructing those monuments in 3D. The investigated photogrammetric reconstruction techniques include automated bundle adjustment and dense multi-view 3D reconstruction using public domain and professional imagery on the one hand, and interactive polygonal modelling based on projected panoramas on the other. Our investigations show that the combination of these two image-based modelling techniques delivers better results in terms of model completeness, level of detail and appearance.

  7. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Aleksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least-squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  8. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current dose-constraint addition, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can largely reduce the unexpected increase of the objective function value inevitably caused by the constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. The geometric explanation for the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including head-and-neck, prostate, lung, and oropharyngeal cases, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and it is to some extent a more efficient technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.

  9. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
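
    A minimal sketch of the LSH ingredient, using the classic bit-sampling family for Hamming distance on binary (categorical) patterns: patterns colliding with the target in any hash table become candidates, and only candidates are checked exactly. LSHSIM's actual hash construction, pattern extraction, and RLE-accelerated similarity are more elaborate and are not reproduced; table sizes and data are made up.

    ```python
    import random
    from collections import defaultdict

    random.seed(5)

    def make_hashes(n_bits, n_tables=4, bits_per_hash=8):
        # Bit-sampling LSH family for Hamming distance: each table hashes a
        # pattern by a fixed random subset of its positions.
        return [random.sample(range(n_bits), bits_per_hash) for _ in range(n_tables)]

    def index_patterns(patterns, hashes):
        tables = [defaultdict(list) for _ in hashes]
        for pid, p in enumerate(patterns):
            for t, idx in zip(tables, hashes):
                t[tuple(p[i] for i in idx)].append(pid)
        return tables

    def query(target, patterns, tables, hashes):
        # Candidates are patterns colliding with the target in any table;
        # only these are checked with the exact Hamming distance.
        cands = set()
        for t, idx in zip(tables, hashes):
            cands.update(t.get(tuple(target[i] for i in idx), []))
        if not cands:
            return None
        return min(cands, key=lambda pid: sum(a != b for a, b in zip(patterns[pid], target)))

    n_bits = 64
    patterns = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(500)]
    target = patterns[123][:]; target[0] ^= 1          # near-copy of pattern 123
    hashes = make_hashes(n_bits)
    tables = index_patterns(patterns, hashes)
    print("best candidate:", query(target, patterns, tables, hashes))
    ```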

  10. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasting. The primary objective of this study is to verify precipitation forecasts and compare the performance of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performance of the two schemes was evaluated for the next 6 h at 1-h intervals using the gridpoint-based measures of critical success index, bias, index of agreement, and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its accuracy decreases rapidly with lead time and falls below that of ARPS after 4-5 h. Regarding the object-based verification method, most forecasts produced by BJ-ANC lie near the center of the SAL diagram at the 1-h lead time, indicating high-quality forecasts. As the lead time increases, BJ-ANC overestimates the precipitation amount and produces overly widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.

  11. Towards the Integration of Police Psychology Techniques Combined with the Socio-Ecological Psychology Model to Confront Juvenile Delinquency in K-12 Classrooms

    ERIC Educational Resources Information Center

    Rose, Gary

    2013-01-01

    Dealing with students' behavioral problems is one of the most pressing concerns facing educators today, and teachers are feeling inadequately equipped to meet the challenge. The objective of this research was to better understand prevailing delinquency problems in K-12 classrooms, and how teachers address them. Although calls to improve school…

  12. The Dream Catcher Meditation: a therapeutic technique used with American Indian adolescents.

    PubMed

    Robbins, R

    2001-01-01

    This article describes a short-term, insight-oriented treatment model for American Indian adolescents called Dream Catcher Meditation. It is aimed at helping clients express unconscious conflicts and at facilitating differentiation and healthy mutuality. Though its duration can vary, twelve sessions are outlined here. Session descriptions include goals and sample questions. Also included are anecdotal material and reflections about cultural relevancy.

  13. Recognition of human activity characteristics based on state transitions modeling technique

    NASA Astrophysics Data System (ADS)

    Elangovan, Vinayak; Shirkhodaie, Amir

    2012-06-01

    Human Activity Discovery & Recognition (HADR) is a complex, diverse, and challenging task, yet an active area of ongoing research in the Department of Defense. By detecting, tracking, and characterizing cohesive human interactional activity patterns, potential threats can be identified, which can significantly improve situation awareness, particularly in Persistent Surveillance Systems (PSS). Understanding the nature of such dynamic activities inevitably involves interpretation of a collection of spatiotemporally correlated activities with respect to a known context. In this paper, we present a state transition model for recognizing the characteristics of human activities, with a link to a prior context-based ontology. Modeling the state transitions between successive evidential events determines the activities' temperament. The proposed state transition model poses six categories of state transitions: object handling, visibility, entity-entity relation, human posture, human kinematics, and distance to target. The proposed state transition model generates semantic annotations describing the human interactional activities via a technique called Causal Event State Inference (CESI). The proposed approach uses a low-cost Kinect depth camera for indoor monitoring and a normal optical camera for outdoor monitoring. Experimental results are presented to demonstrate the effectiveness and efficiency of the proposed technique.

  14. High angle of attack control law development for a free-flight wind tunnel model using direct eigenstructure assignment

    NASA Technical Reports Server (NTRS)

    Wendel, Thomas R.; Boland, Joseph R.; Hahne, David E.

    1991-01-01

    Flight-control laws are developed for a wind-tunnel aircraft model flying at a high angle of attack by using a synthesis technique called direct eigenstructure assignment. The method employs flight guidelines and control-power constraints to develop the control laws, while gain schedules and nonlinear feedback compensation provide a framework for addressing the nonlinear nature of the dynamics at high angles of attack. Linear and nonlinear evaluations show that the control laws are effective, a conclusion further confirmed by a scale model used for free-flight testing.

  15. Stochastic-field cavitation model

    NASA Astrophysics Data System (ADS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  16. A cavitation model based on Eulerian stochastic fields

    NASA Astrophysics Data System (ADS)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  17. Stochastic-field cavitation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumond, J., E-mail: julien.dumond@areva.com; AREVA GmbH, Erlangen, Paul-Gossen-Strasse 100, D-91052 Erlangen; Magagnato, F.

    2013-07-15

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  18. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed Central

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.

    2015-01-01

    Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378
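
    The quasi-static inverse-dynamics optimisation mentioned above can be illustrated with a toy static-optimization problem: distribute a known joint moment across muscles by minimizing a load-sharing cost subject to moment equilibrium. The moment arms, maximal forces, and squared-activation criterion below are illustrative assumptions, not values from the review.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical knee extensors: moment arms (m) and maximal forces (N).
    r    = np.array([0.04, 0.05, 0.035])
    Fmax = np.array([3000.0, 4000.0, 2500.0])
    M_joint = 150.0                            # required extension moment (N*m)

    # Minimize the sum of squared activations (a common load-sharing
    # criterion) subject to the muscle moments balancing the
    # inverse-dynamics joint moment.
    res = minimize(
        fun=lambda a: np.sum(a ** 2),
        x0=np.full(3, 0.5),
        bounds=[(0.0, 1.0)] * 3,
        constraints=[{"type": "eq", "fun": lambda a: r @ (a * Fmax) - M_joint}],
        method="SLSQP",
    )
    forces = res.x * Fmax
    print("activations:", np.round(res.x, 3), "muscle forces (N):", np.round(forces, 1))
    ```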

  19. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training.

    PubMed

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R; Lorenzetti, Silvio

    2015-01-01

    Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.

  20. A Self-Organizing Maps approach to assess the wave climate of the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Barbariol, Francesco; Marcello Falcieri, Francesco; Scotton, Carlotta; Benetazzo, Alvise; Bergamasco, Andrea; Bergamasco, Filippo; Bonaldo, Davide; Carniel, Sandro; Sclavo, Mauro

    2015-04-01

    The assessment of wave conditions at sea is fruitful for many research fields in the marine and atmospheric sciences and for human activities in the marine environment. To this end, in recent decades the observational network, which mostly relies on buoys, satellites, and other probes on fixed platforms, has been integrated with numerical model outputs, which allow the parameters of sea states (e.g. the significant wave height, the mean and peak wave periods, the mean and peak wave directions) to be computed over wider regions. Beyond the collection of wave parameters observed at specific sites or modeled on arbitrary domains, the data processing performed to infer the wave climate at those sites is a crucial step in providing high-quality data and information to the community. In this context, several statistical techniques have been used to model the randomness of wave parameters. While univariate and bivariate probability distribution functions (pdf) are routinely used, multivariate pdfs that model the probability structure of more than two wave parameters are hardly ever employed. Recently, the Self-Organizing Maps (SOM) technique has been successfully applied to represent the multivariate random wave climate at sites around the Iberian peninsula and the South American continent. Indeed, the visualization properties offered by this technique allow the dependencies between the different parameters to be grasped by visual inspection. In this study, carried out in the frame of the Italian National Flagship Project "RITMARE", we take advantage of the SOM technique to assess the multivariate wave climate over the Adriatic Sea, a semi-enclosed basin in the north-eastern Mediterranean Sea, where winds from the north-east (called "Bora") and south-east (called "Sirocco") mainly blow, causing sea storms. By means of the SOM technique we can observe the multivariate character of the typical Bora and Sirocco wave features in the Adriatic Sea. To this end, we used both observed and modeled wave parameters. The "Acqua Alta" oceanographic tower in the northern Adriatic Sea (ISMAR-CNR) and the Italian Data Buoy Network (RON, managed by ISPRA) off the western Adriatic coasts furnished the wave parameters at specific sites of interest. Widespread wave parameters were obtained by means of a numerical SWAN wave model implemented over the whole Adriatic Sea at 6x6 km2 resolution and forced by the high-resolution COSMO-I7 atmospheric model for the period 2007-2013.
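
    A minimal SOM training loop on synthetic sea-state vectors (significant wave height, mean period, direction) illustrates how the map's nodes organize into prototypes of recurrent wave regimes; the two synthetic regimes below loosely stand in for Bora- and Sirocco-driven seas, and all parameters are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0):
        """Minimal Self-Organizing Map: find the best-matching unit, then
        pull it and its neighbours toward the sample, with decaying
        learning rate and neighbourhood radius."""
        h, w = grid
        nodes = rng.uniform(data.min(0), data.max(0), size=(h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(np.argmin(((nodes - x) ** 2).sum(-1)), (h, w))
            frac = t / iters
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            nodes += lr * nbh * (x - nodes)
        return nodes

    # Synthetic sea states: two regimes standing in for Bora- and Sirocco-driven seas.
    bora    = rng.normal([2.5, 6.0, 60.0],  [0.5, 0.8, 10.0], size=(500, 3))
    sirocco = rng.normal([1.5, 7.5, 150.0], [0.4, 0.9, 12.0], size=(500, 3))
    som = train_som(np.vstack([bora, sirocco]))
    print("node prototypes (Hs, Tm, dir):\n", np.round(som.reshape(-1, 3)[:4], 1))
    ```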

  1. Multi- and monofractal indices of short-term heart rate variability.

    PubMed

    Fischer, R; Akay, M; Castiglioni, P; Di Rienzo, M

    2003-09-01

    Indices of heart rate variability (HRV) based on fractal signal models have recently been shown to possess value as predictors of mortality in specific patient populations. To develop more powerful clinical indices of HRV based on a fractal signal model, the study investigated two HRV indices based on a monofractal signal model called fractional Brownian motion and an index based on a multifractal signal model called multifractional Brownian motion. The performance of these indices was compared with an HRV index in common clinical use. To compare the indices, 18 normal subjects were subjected to postural changes, and the indices were compared on their ability to respond to the resulting autonomic events in HRV recordings. The magnitude of the response to postural change (normalised by the measurement variability) was assessed by analysis of variance and multiple comparison testing. Four HRV indices were investigated in this study: the standard deviation of all normal R-R intervals, an HRV index commonly used in the clinic; detrended fluctuation analysis, an HRV index found to be the most powerful predictor of mortality in a study of patients with depressed left ventricular function; an HRV index developed using the maximum likelihood estimation (MLE) technique for a monofractal signal model; and an HRV index developed for the analysis of multifractional Brownian motion signals. The HRV index based on the MLE technique was found to respond most strongly to the induced postural changes (95% CI). The magnitude of its response (normalised by the measurement variability) was at least 25% greater than that of any of the other indices tested.
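
    Of the four indices, detrended fluctuation analysis is the most algorithmically self-contained and can be sketched compactly: integrate the series, detrend it piecewise in windows of several scales, and regress log fluctuation on log scale. The scale choices below are illustrative.

    ```python
    import numpy as np

    def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
        """Detrended fluctuation analysis: integrate the series, detrend it
        piecewise linearly in windows of each scale, and regress log-RMS
        fluctuation on log-scale; the slope is the DFA exponent alpha."""
        y = np.cumsum(x - np.mean(x))
        fluct = []
        for s in scales:
            n_win = len(y) // s
            rms = []
            for k in range(n_win):
                seg = y[k * s:(k + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            fluct.append(np.mean(rms))
        slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return slope

    rng = np.random.default_rng(7)
    white = rng.normal(size=4096)              # white noise: alpha ~ 0.5
    print(round(dfa_alpha(white), 2))
    ```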

  2. A deterministic compressive sensing model for bat biosonar.

    PubMed

    Hague, David A; Buck, John R; Bilik, Igal

    2012-12-01

    The big brown bat (Eptesicus fuscus) uses frequency modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects in clutter and noise. They resolve glints spaced down to 2 μs in time delay, which surpasses what traditional signal processing techniques can achieve using the same echolocation call. The Matched Filter (MF) attains 10-12 μs resolution, while the Inverse Filter (IF) achieves higher resolution at the cost of significantly degraded detection performance. Recent work by Fontaine and Peremans [J. Acoust. Soc. Am. 125, 3052-3059 (2009)] demonstrated that a sparse representation of bat echolocation calls coupled with a decimating sensing method facilitates distinguishing closely spaced objects over realistic SNRs. Their work raises the intriguing question of whether sensing approaches structured more like a mammalian auditory system contain the necessary information for the hyper-resolution observed in behavioral tests. This research estimates sparse echo signatures using a gammatone filterbank decimation sensing method which loosely models the processing of the bat's auditory system. The decimated filterbank outputs are processed with ℓ1 minimization. Simulations demonstrate that this model maintains higher resolution than the MF and significantly better detection performance than the IF for SNRs of 5-45 dB while undersampling the return signal by a factor of six.
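
    A generic sparse-recovery sketch (ordinary ℓ1-penalised deconvolution via iterative shrinkage-thresholding, not the paper's gammatone-filterbank model) shows how two closely spaced echoes of a known call can be separated; the toy call and delays are made up.

    ```python
    import numpy as np

    def ista(A, b, lam=0.05, iters=500):
        """Iterative shrinkage-thresholding (ISTA) for the l1-penalised
        least-squares problem min ||Ax - b||^2/2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - (A.T @ (A @ x - b)) / L    # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(8)
    n = 200
    call = np.sin(2 * np.pi * np.linspace(0, 4, 40)) * np.hanning(40)  # toy FM-ish call
    A = np.zeros((n + 39, n))                  # columns are delayed copies of the call
    for d in range(n):
        A[d:d + 40, d] = call
    x_true = np.zeros(n); x_true[[90, 94]] = [1.0, 0.8]                # two close glints
    b = A @ x_true + rng.normal(0, 0.02, n + 39)
    x_hat = ista(A, b)
    print("recovered delays:", np.nonzero(x_hat > 0.2)[0])
    ```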

  3. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
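
    The core factorization is standard nonnegative matrix factorization; a minimal version with Lee-Seung multiplicative updates is sketched below on a toy term-document matrix. UTOPIAN's semi-supervised formulation additionally steers the factors toward user-provided references, which this sketch omits.

    ```python
    import numpy as np

    def nmf(V, k, iters=300, eps=1e-9):
        """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H with
        nonnegative factors (topics in W, per-document weights in H)."""
        rng = np.random.default_rng(9)
        m, n = V.shape
        W, H = rng.random((m, k)) + 0.1, rng.random((k, n)) + 0.1
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Toy term-document matrix: 6 terms, 8 documents, two latent topics.
    V = np.array([[3, 2, 4, 0, 0, 0, 1, 0],
                  [2, 3, 3, 0, 1, 0, 0, 0],
                  [4, 2, 3, 1, 0, 0, 0, 1],
                  [0, 0, 1, 3, 4, 2, 3, 2],
                  [0, 1, 0, 2, 3, 3, 4, 3],
                  [1, 0, 0, 4, 2, 4, 3, 2]], dtype=float)
    W, H = nmf(V, k=2)
    print("top terms per topic:", [np.argsort(W[:, t])[-3:][::-1] for t in range(2)])
    ```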

  4. Seismic migration in generalized coordinates

    NASA Astrophysics Data System (ADS)

    Arias, C.; Duque, L. F.

    2017-06-01

    Reverse time migration (RTM) is a technique widely used nowadays to obtain images of the earth's sub-surface using artificially produced seismic waves. This technique was developed for zones with a flat surface, and when applied to zones with rugged topography some corrections must be introduced to adapt it; these can produce defects in the final image called artifacts. We introduce a simple mathematical map that transforms a scenario with rugged topography into a flat one. The three steps of RTM can then be applied in a way similar to the conventional ones just by changing the Laplacian in the acoustic wave equation to a generalized one. We present a test of this technique using the Canadian foothills SEG velocity model.

  5. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    PubMed

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended both for the systems biologist who wishes to learn more about the various optimization techniques available and for the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
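
    As a minimal, generic instance of meta-heuristic model calibration (invented for illustration, not taken from the review), the snippet below fits two parameters of a toy exponential-decay model to noisy observations with SciPy's differential evolution:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 10.0, 50)
    rng = np.random.default_rng(1)
    data = 2.0 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)  # "measured" series

    def sse(params):                       # sum of squared residuals to minimize
        A, k = params
        return np.sum((A * np.exp(-k * t) - data) ** 2)

    result = differential_evolution(sse, bounds=[(0.1, 5.0), (0.01, 2.0)], seed=0)
    print(result.x)                        # should approach the true values [2.0, 0.7]
    ```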

  6. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. Respectively, we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimations. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of providing an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results through the Prognostic Horizon (PH) metric.

  7. Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Llames, Rene Lim

    1991-01-01

    Generation based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic language systems. An analysis of the effect of generation configuration is presented, focusing on the effects of the number of generations and their capacities. Analytic timing and survival models are used to represent garbage collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.

  8. Automated synthesis and composition of taskblocks for control of manufacturing systems.

    PubMed

    Holloway, L E; Guan, X; Sundaravadivelu, R; Ashley, J R

    2000-01-01

    Automated control synthesis methods for discrete-event systems promise to reduce the time required to develop, debug, and modify control software. Such methods must be able to translate high-level control goals into detailed sequences of actuation and sensing signals. In this paper, we present such a technique. It relies on analysis of a system model, defined as a set of interacting components, each represented as a form of condition system Petri net. Control logic modules, called taskblocks, are synthesized from these individual models. These then interact hierarchically and sequentially to drive the system through specified control goals. The resulting controller is automatically converted to executable control code. The paper concludes with a discussion of a set of software tools developed to demonstrate the techniques on a small manufacturing system.

  9. Regarding on the prototype solutions for the nonlinear fractional-order biological population model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan

    2016-06-08

    In this study, we present a newly extended method, called the Improved Bernoulli sub-equation function method, based on the Bernoulli sub-ODE method. The steps of the proposed analytical scheme are stated explicitly. Using this technique, we obtain some new analytical solutions to the nonlinear fractional-order biological population model. Two- and three-dimensional surfaces of the analytical solutions are drawn with Wolfram Mathematica 9. Finally, we conclude by summarizing the important findings of this study.

  10. A New Modeling for the Changes in the Distribution of Scatterers in Cirrhotic Liver

    NASA Astrophysics Data System (ADS)

    Hara, Takashi; Hachiya, Hiroyuki

    2000-05-01

    The human liver is composed of small hexagonal structures called liver lobules. Cirrhosis destroys these liver lobules and replaces them with permanent connective tissue referred to as regenerative nodules. In this paper, we propose a new modeling technique for changes in the scatterer distribution in liver tissue considering the structure of liver lobules to obtain images of the cirrhotic liver over continuous stages. Using these images, we analyze the relationship between changes in characteristics of biological tissue and changes in B-mode images during progressive liver cirrhosis.

  11. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.

  12. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    The general technique uses a modified version of an existing technique termed the pattern search technique. A new procedure called the parallel move strategy permits the pattern search technique to be used with problems involving an inequality constraint.
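
    The abstract gives no algorithmic detail, so the following is only a generic pattern-search sketch in which a quadratic penalty stands in for the inequality constraint; Russell's parallel move strategy itself is not reproduced.

    ```python
    import numpy as np

    def pattern_search(f, g, x0, step=1.0, shrink=0.5, tol=1e-6):
        """Minimize f(x) subject to g(x) <= 0 using coordinate pattern moves."""
        penalized = lambda x: f(x) + 1e6 * max(0.0, g(x)) ** 2
        x = np.asarray(x0, dtype=float)
        while step > tol:
            improved = False
            for i in range(x.size):
                for d in (step, -step):               # probe both directions per axis
                    trial = x.copy()
                    trial[i] += d
                    if penalized(trial) < penalized(x):
                        x, improved = trial, True
            if not improved:
                step *= shrink                        # no move helped: refine the mesh
        return x

    # Example: minimize (x-3)^2 + (y-2)^2 subject to x + y <= 4
    sol = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] - 2) ** 2,
                         lambda v: v[0] + v[1] - 4, x0=[0.0, 0.0])
    print(sol)                                        # close to the optimum (2.5, 1.5)
    ```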

  13. Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Achieng, K. O.; Zhu, J.

    2017-12-01

    There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits data. However, model selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation with a two-parameter recursive digital filter, also called the Eckhardt filter, is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff by averaging over all the models using BMA, given a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
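
    The Eckhardt filter itself is compact enough to state in full; below is a minimal implementation, with the recession constant, maximum baseflow index, and streamflow series all chosen for illustration rather than taken from the study.

    ```python
    import numpy as np

    def eckhardt_baseflow(Q, alpha=0.98, bfi_max=0.80):
        """Two-parameter recursive digital filter separating baseflow from streamflow."""
        b = np.empty_like(Q, dtype=float)
        b[0] = bfi_max * Q[0]                         # simple warm-up assumption
        for t in range(1, len(Q)):
            b[t] = ((1 - bfi_max) * alpha * b[t - 1] + (1 - alpha) * bfi_max * Q[t]) \
                   / (1 - alpha * bfi_max)
            b[t] = min(b[t], Q[t])                    # baseflow cannot exceed total flow
        return b

    Q = np.array([5.0, 9.0, 14.0, 11.0, 8.0, 6.5, 5.8, 5.2])  # toy daily streamflow
    baseflow = eckhardt_baseflow(Q)
    surface_runoff = Q - baseflow      # the a priori runoff fed into the BMA step
    print(surface_runoff.round(2))
    ```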

  14. An adaptive replacement algorithm for paged-memory computer systems.

    NASA Technical Reports Server (NTRS)

    Thorington, J. M., Jr.; Irwin, J. D.

    1972-01-01

    A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state of the art digital hardware is also presented.
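
    The SIM algorithm is not specified in the abstract; for orientation, here is a minimal simulation of one classical nonlookahead policy (LRU) on a randomly generated reference trace, the kind of baseline such adaptive schemes are compared against.

    ```python
    import random
    from collections import OrderedDict

    def simulate_lru(trace, frames):
        """Count page faults under least-recently-used replacement."""
        memory, faults = OrderedDict(), 0
        for page in trace:
            if page in memory:
                memory.move_to_end(page)              # mark as most recently used
            else:
                faults += 1
                if len(memory) == frames:
                    memory.popitem(last=False)        # evict the least recently used
                memory[page] = True
        return faults

    random.seed(0)
    trace = [random.randint(0, 19) for _ in range(1000)]   # toy probability-model trace
    print(simulate_lru(trace, frames=8), "faults in 1000 references")
    ```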

  15. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    Satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
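
    A minimal sketch of the two-phase FP-ILM idea follows, with a toy exponential-attenuation law standing in for the radiative-transfer forward model and a small neural network as the learned inversion operator; every number here is invented.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    state = rng.uniform(5.0, 20.0, 2000)                     # e.g. plume height [km]
    grid = np.linspace(0.0, 1.0, 16)                         # spectral grid
    radiance = np.exp(-np.outer(1.0 / state, 30.0 * grid))   # toy "full-physics" model
    radiance += 0.01 * rng.standard_normal(radiance.shape)   # measurement noise

    # Training phase: learn the inversion operator from synthetic spectra
    inv = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    inv.fit(radiance, state)

    # Operational phase: apply the operator to a "real" measurement (true height 12)
    measurement = np.exp(-30.0 * grid / 12.0)
    print(inv.predict(measurement[None, :]))                 # expected near 12
    ```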

  16. An Evaluation of Understandability of Patient Journey Models in Mental Health.

    PubMed

    Percival, Jennifer; McGregor, Carolyn

    2016-07-28

    There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers.

  17. dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms.

    PubMed

    Puritz, Jonathan B; Hollenbeck, Christopher M; Gold, John R

    2014-01-01

    Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both paired-end reads from RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage. This is because dDocent quality-trims reads instead of filtering them, and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com.

  18. dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms

    PubMed Central

    Hollenbeck, Christopher M.; Gold, John R.

    2014-01-01

    Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both paired-end reads from RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage. This is because dDocent quality-trims reads instead of filtering them, and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com. PMID:24949246

  19. Subcellular localization for Gram positive and Gram negative bacterial proteins using linear interpolation smoothing model.

    PubMed

    Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2015-12-07

    Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied for predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
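
    The interpolation step itself is only a few lines; the sketch below mixes bigram, unigram, and uniform estimates for a toy sequence, with invented weights (in practice the weights would be tuned, e.g. by maximum likelihood on held-out data).

    ```python
    from collections import Counter

    def interpolated_prob(w, prev, unigrams, bigrams, total, lambdas=(0.7, 0.25, 0.05)):
        """Linearly interpolate bigram, unigram, and uniform probability estimates."""
        l2, l1, l0 = lambdas
        p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
        p_uni = unigrams[w] / total
        p_uniform = 1.0 / len(unigrams)               # back-off floor over the alphabet
        return l2 * p_bi + l1 * p_uni + l0 * p_uniform

    seq = "MKVLAAGIVMKVLA"                            # toy protein-like sequence
    unigrams = Counter(seq)
    bigrams = Counter(zip(seq, seq[1:]))
    print(interpolated_prob("K", "M", unigrams, bigrams, len(seq)))
    ```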

  20. Generating a Multiphase Equation of State with Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Cox, Geoffrey

    2017-06-01

    Hydrocode calculations require knowledge of the variation of pressure of a material with density and temperature, which is given by the equation of state. An accurate model needs to account for discontinuities in energy, density and properties of a material across a phase boundary. When generating a multiphase equation of state the modeller attempts to balance the agreement between the available data for compression, expansion and phase boundary location. However, this can prove difficult because minor adjustments in the equation of state for a single phase can have a large impact on the overall phase diagram. Recently, Cox and Christie described a method for combining statistical-mechanics-based condensed matter physics models with a stochastic analysis technique called particle swarm optimisation. The models produced show good agreement with experiment over a wide range of pressure-temperature space. This talk details the general implementation of this technique, shows example results, and describes the types of analysis that can be performed with this method.
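
    For readers unfamiliar with the optimiser, a minimal global-best particle swarm is sketched below, minimising a toy two-parameter misfit that stands in for the disagreement between an equation-of-state model and data; the inertia and acceleration coefficients are common textbook values, not those of Cox and Christie.

    ```python
    import numpy as np

    def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal global-best particle swarm optimiser."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Toy misfit with optimum at (1, -2), standing in for an EOS-data residual
    print(pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
              bounds=[(-5, 5), (-5, 5)]))
    ```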

  1. Multi-Spacecraft 3D differential emission measure tomography of the solar corona: STEREO results.

    NASA Astrophysics Data System (ADS)

    Vásquez, A. M.; Frazin, R. A.

    We have recently developed a novel technique (called DEMT) for the empirical determination of the three-dimensional (3D) distribution of the solar corona differential emission measure through multi-spacecraft solar rotational tomography of extreme-ultraviolet (EUV) image time series (like those provided by EIT/SOHO and EUVI/STEREO). The technique allows, for the first time, the development of global 3D empirical maps of the coronal electron temperature and density in the height range 1.0 to 1.25 RS. DEMT constitutes a simple and powerful 3D analysis tool that obviates the need for structure-specific modeling.

  2. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
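
    The heart of partitioned inversion is the Schur-complement block formula; the sketch below inverts a small symmetric positive definite matrix through a single 2x2 block partition (a program like SOLVE applies the idea recursively so that only small blocks need be held in core).

    ```python
    import numpy as np

    def partitioned_inverse(M, k):
        """Invert a symmetric positive definite matrix via one block partition."""
        A, B, D = M[:k, :k], M[:k, k:], M[k:, k:]
        Ainv = np.linalg.inv(A)                   # recurse here for very large matrices
        S = D - B.T @ Ainv @ B                    # Schur complement of A
        Sinv = np.linalg.inv(S)
        TL = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv  # top-left block of the inverse
        TR = -Ainv @ B @ Sinv
        return np.block([[TL, TR], [TR.T, Sinv]])

    rng = np.random.default_rng(0)
    G = rng.standard_normal((6, 6))
    M = G @ G.T + 6.0 * np.eye(6)                 # symmetric positive definite test matrix
    print(np.allclose(partitioned_inverse(M, 3), np.linalg.inv(M)))   # True
    ```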

  3. A burnout prediction model based around char morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao Wu; Edward Lester; Michael Cloke

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  4. Speech enhancement using the modified phase-opponency model.

    PubMed

    Deshmukh, Om D; Espy-Wilson, Carol Y; Carney, Laurel H

    2007-06-01

    In this paper we present a model called the Modified Phase-Opponency (MPO) model for single-channel speech enhancement when the speech is corrupted by additive noise. The MPO model is based on the auditory PO model, proposed for detection of tones in noise. The PO model includes a physiologically realistic mechanism for processing the information in neural discharge times and exploits the frequency-dependent phase properties of the tuned filters in the auditory periphery by using a cross-auditory-nerve-fiber coincidence detection for extracting temporal cues. The MPO model alters the components of the PO model such that the basic functionality of the PO model is maintained but the properties of the model can be analyzed and modified independently. The MPO-based speech enhancement scheme does not need to estimate the noise characteristics nor does it assume that the noise satisfies any statistical model. The MPO technique leads to the lowest value of the LPC-based objective measures and the highest value of the perceptual evaluation of speech quality measure compared to other methods when the speech signals are corrupted by fluctuating noise. Combining the MPO speech enhancement technique with our aperiodicity, periodicity, and pitch detector further improves its performance.

  5. Evaluation of Generation Alternation Models in Evolutionary Robotics

    NASA Astrophysics Data System (ADS)

    Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    For efficient implementation of Evolutionary Algorithms (EA) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD) and compare it with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been made clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to Evolutionary Multi-Robotics (EMR), a package-pushing problem, to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.

  6. Covert Channels in SIP for VoIP Signalling

    NASA Astrophysics Data System (ADS)

    Mazurczyk, Wojciech; Szczypiorski, Krzysztof

    In this paper, we evaluate available steganographic techniques for SIP (Session Initiation Protocol) that can be used for creating covert channels during the signalling phase of a VoIP (Voice over IP) call. Apart from characterizing existing steganographic methods, we provide new insights by introducing new techniques. We also estimate the amount of data that can be transferred in signalling messages for a typical IP telephony call.

  7. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern Earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
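
    For contrast with the progressive scheme, a one-stage Latin hypercube sampler takes only a few lines; the sliced, property-preserving construction of PLHS itself is not reproduced here.

    ```python
    import numpy as np

    def latin_hypercube(n, d, seed=0):
        """One-stage LHS: n points in [0, 1)^d with one point per stratum per dimension."""
        rng = np.random.default_rng(seed)
        u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # jitter within strata
        for j in range(d):
            u[:, j] = rng.permutation(u[:, j])                 # decouple the dimensions
        return u

    print(latin_hypercube(5, 2))    # each column hits all five strata exactly once
    ```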

  8. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, which are one of the dimensionality-reduction techniques, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model, called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
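
    As a rough sketch of the dimensionality-reduction step (generic diffusion maps, not the authors' traffic-specific pipeline), the function below embeds a point cloud using the leading nontrivial eigenvectors of a Gaussian-kernel Markov matrix; the kernel bandwidth is a free parameter.

    ```python
    import numpy as np

    def diffusion_map(X, eps, n_coords=2):
        """Embed rows of X via eigenvectors of the row-normalised Gaussian kernel."""
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # pairwise sq. distances
        P = np.exp(-D2 / eps)
        P /= P.sum(axis=1, keepdims=True)                          # Markov (diffusion) matrix
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        return vecs[:, order[1:n_coords + 1]].real                 # drop the trivial eigenvector

    # Toy "system snapshots": three noisy clusters in ten dimensions
    rng = np.random.default_rng(0)
    centers = [np.full(10, c) for c in (0.0, 1.0, 2.0)]
    X = np.concatenate([c + 0.1 * rng.standard_normal((30, 10)) for c in centers])
    print(diffusion_map(X, eps=1.0).shape)                         # (90, 2) coarse coordinates
    ```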

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patnaik, P. C.

    The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET model contains all of the physics required to simulate the time-dependent meteorology of a region, with description of both the planetary boundary layer and upper-level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.

  10. Experiments to Determine Whether Recursive Partitioning (CART) or an Artificial Neural Network Overcomes Theoretical Limitations of Cox Proportional Hazards Regression

    NASA Technical Reports Server (NTRS)

    Kattan, Michael W.; Hess, Kenneth R.; Kattan, Michael W.

    1998-01-01

    New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
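
    The concordance index used as the accuracy measure here can be computed directly; below is a minimal implementation of Harrell's C for right-censored data, with a tiny invented example (tie handling and efficiency refinements are omitted).

    ```python
    import numpy as np

    def concordance_index(time, event, risk):
        """Fraction of comparable pairs whose predicted risks are correctly ordered."""
        concordant, comparable = 0.0, 0
        for i in range(len(time)):
            for j in range(len(time)):
                if event[i] and time[i] < time[j]:    # i failed before j: pair is usable
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable

    time = np.array([2.0, 4.0, 5.0, 7.0])
    event = np.array([1, 1, 0, 1])                    # 0 marks a censored observation
    risk = np.array([0.9, 0.6, 0.5, 0.2])
    print(concordance_index(time, event, risk))       # 1.0: risks perfectly ordered
    ```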

  11. 77 FR 56710 - Proposed Information Collection (Call Center Satisfaction Survey): Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-13

    ... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0744] Proposed Information Collection (Call Center Satisfaction Survey): Comment Request AGENCY: Veterans Benefits Administration, Department of... techniques or the use of other forms of information technology. Title: VBA Call Center Satisfaction Survey...

  12. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and Enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs from security updates often add up to become one of the biggest cost items for large Enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
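
    Model-based fuzzers derive structured test cases from a behavioural model of the target; the toy sketch below shows only the simpler mutation-based flavour, corrupting a valid message and counting crashes in a deliberately flawed parser. Everything in it is invented for illustration.

    ```python
    import random

    def fuzz(target, seed, n_trials=10_000):
        """Mutation fuzzing: randomly corrupt a valid input and watch for crashes."""
        rng = random.Random(0)
        crashes = 0
        for _ in range(n_trials):
            data = bytearray(seed)
            for _ in range(rng.randint(1, 4)):        # flip a few random bytes
                data[rng.randrange(len(data))] = rng.randrange(256)
            try:
                target(bytes(data))
            except Exception:
                crashes += 1
        return crashes

    def parse(msg):                                   # toy parser with a planted flaw
        declared = msg[1]                             # length field of the message
        if declared > len(msg) - 2:
            raise ValueError("declared length exceeds payload")

    seed = bytes([0x7F, 4]) + b"abcd"                 # valid message: type, length, payload
    print(fuzz(parse, seed), "crashing inputs found")
    ```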

  13. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory due to its heat capacity holds considerable potential skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble, and evaluating the whole ensemble through its ensemble average, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure of applying the average during the model run, called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.

  14. Digression and Value Concatenation to Enable Privacy-Preserving Regression.

    PubMed

    Li, Xiao-Bai; Sarkar, Sumit

    2014-09-01

    Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.

  15. From Weakly Chaotic Dynamics to Deterministic Subdiffusion via Copula Modeling

    NASA Astrophysics Data System (ADS)

    Nazé, Pierre

    2018-03-01

    Copula modeling consists in finding a probabilistic distribution, called a copula, whereby its coupling with the marginal distributions of a set of random variables produces their joint distribution. The present work aims to use this technique to connect the statistical distributions of weakly chaotic dynamics and deterministic subdiffusion. More precisely, we decompose the jumps distribution of the Geisel-Thomae map into a bivariate one and determine the marginal and copula distributions respectively by infinite ergodic theory and statistical inference techniques. We verify therefore that the characteristic tail distribution of subdiffusion is an extreme value copula coupling Mittag-Leffler distributions. We also present a method to calculate the exact copula and joint distributions in the case where weakly chaotic dynamics and deterministic subdiffusion statistical distributions are already known. Numerical simulations and consistency with the dynamical aspects of the map support our results.

  16. The Modern Design of Experiments: A Technical and Marketing Framework

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    A new wind tunnel testing process under development at NASA Langley Research Center, called Modern Design of Experiments (MDOE), differs from conventional wind tunnel testing techniques on a number of levels. Chief among these is that MDOE focuses on the generation of adequate prediction models rather than high-volume data collection. Some cultural issues attached to this and other distinctions between MDOE and conventional wind tunnel testing are addressed in this paper.

  17. Modeling landscape evapotranspiration by integrating land surface phenology and a water balance algorithm

    USGS Publications Warehouse

    Senay, Gabriel B.

    2008-01-01

    The main objective of this study is to present an improved modeling technique called Vegetation ET (VegET) that integrates commonly used water balance algorithms with a remotely sensed Land Surface Phenology (LSP) parameter to conduct operational vegetation water balance modeling of rainfed systems at the LSP's spatial scale using readily available global data sets. Evaluation of the VegET model was conducted using Flux Tower data and a two-year simulation for the conterminous US. The VegET model is capable of estimating actual evapotranspiration (ETa) of rainfed crops and other vegetation types at the spatial resolution of the LSP on a daily basis, replacing the need to estimate crop- and region-specific crop coefficients.

  18. MultiGeMS: detection of SNVs from multiple samples using model selection on high-throughput sequencing data.

    PubMed

    Murillo, Gabriel H; You, Na; Su, Xiaoquan; Cui, Wei; Reilly, Muredach P; Li, Mingyao; Ning, Kang; Cui, Xinping

    2016-05-15

    Single nucleotide variant (SNV) detection procedures are being utilized as never before to analyze the recent abundance of high-throughput DNA sequencing data, both on single and multiple sample datasets. Building on previously published work with the single sample SNV caller genotype model selection (GeMS), a multiple sample version of GeMS (MultiGeMS) is introduced. Unlike other popular multiple sample SNV callers, the MultiGeMS statistical model accounts for enzymatic substitution sequencing errors. It also addresses the multiple testing problem endemic to multiple sample SNV calling and utilizes high performance computing (HPC) techniques. A simulation study demonstrates that MultiGeMS ranks highest in precision among a selection of popular multiple sample SNV callers, while showing exceptional recall in calling common SNVs. Further, both simulation studies and real data analyses indicate that MultiGeMS is robust to low-quality data. We also demonstrate that accounting for enzymatic substitution sequencing errors not only improves SNV call precision at low mapping quality regions, but also improves recall at reference allele-dominated sites with high mapping quality. The MultiGeMS package can be downloaded from https://github.com/cui-lab/multigems. Contact: xinping.cui@ucr.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a developed package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the studied area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity data, SRTM data with 3 arc second of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 points randomly spaced where the geoid was computed by the geometrical leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).

  20. Understanding bistability in yeast glycolysis using general properties of metabolic pathways.

    PubMed

    Planqué, Robert; Bruggeman, Frank J; Teusink, Bas; Hulshof, Josephus

    2014-09-01

    Glycolysis is the central pathway in energy metabolism in the majority of organisms. In a recent paper, van Heerden et al. showed experimentally and computationally that glycolysis can exist in two states, a global steady state and a so-called imbalanced state. In the imbalanced state, intermediary metabolites accumulate at low levels of ATP and inorganic phosphate. It was shown that Baker's yeast uses a peculiar regulatory mechanism--via trehalose metabolism--to ensure that most yeast cells reach the steady state and not the imbalanced state. Here we explore the apparent bistable behaviour in a core model of glycolysis that is based on a well-established detailed model, and study in great detail the bifurcation behaviour of solutions, without using any numerical information on parameter values. We uncover a rich suite of solutions, including so-called imbalanced states, bistability, and oscillatory behaviour. The techniques employed are generic, directly suitable for a wide class of biochemical pathways, and could lead to better analytical treatments of more detailed models. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background: Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies are focusing on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques are gaining more importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequences clustering and classification is proposed. The method is based on k-mers representation and text mining techniques. Methods: The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied on DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions: We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches very similar results to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra short sequences and it exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
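
    A rough sketch of the k-mer/topic-model idea follows (a generic stand-in, not the paper's pipeline): each sequence is treated as a "document" of overlapping k-mers and an LDA model is fitted to the resulting counts; the sequences, k, and topic count are invented.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    def kmers(seq, k=4):
        """Rewrite a sequence as a space-separated 'document' of overlapping k-mers."""
        return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

    seqs = ["ACGTACGTGGCCAACGT", "ACGTACGTACGTTGCA",   # toy group A
            "TTGGCCAATTGGCCAA", "GGCCAATTGGCCAATT"]    # toy group B
    X = CountVectorizer().fit_transform(kmers(s) for s in seqs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    theta = lda.fit_transform(X)          # per-sequence topic mixtures
    print(theta.round(2))                 # mixtures should separate the two groups
    ```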

  2. Offshore killer whale tracking using multiple hydrophone arrays.

    PubMed

    Gassmann, Martin; Henderson, E Elizabeth; Wiggins, Sean M; Roch, Marie A; Hildebrand, John A

    2013-11-01

    To study delphinid near surface movements and behavior, two L-shaped hydrophone arrays and one vertical hydrophone line array were deployed at shallow depths (<125 m) from the floating instrument platform R/P FLIP, moored northwest of San Clemente Island in the Southern California Bight. A three-dimensional propagation-model based passive acoustic tracking method was developed and used to track a group of five offshore killer whales (Orcinus orca) using their emitted clicks. In addition, killer whale pulsed calls and high-frequency modulated (HFM) signals were localized using other standard techniques. Based on these tracks, sound source levels for the killer whales were estimated. The peak-to-peak source levels for echolocation clicks vary between 170-205 dB re 1 μPa @ 1 m, for HFM calls between 185-193 dB re 1 μPa @ 1 m, and for pulsed calls between 146-158 dB re 1 μPa @ 1 m.

  3. Approximation algorithms for a genetic diagnostics problem.

    PubMed

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER, for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
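
    WDC builds on SET COVER, where the greedy heuristic repeatedly takes the set covering the most still-uncovered elements per unit cost; a minimal weighted greedy cover is sketched below (the genotyping-specific weighting of WDC is not modelled).

    ```python
    def greedy_set_cover(universe, sets, cost):
        """Weighted greedy SET COVER: best (new coverage)/cost ratio first."""
        uncovered, chosen = set(universe), []
        while uncovered:
            best = max(sets, key=lambda s: len(sets[s] & uncovered) / cost[s])
            if not sets[best] & uncovered:
                raise ValueError("universe cannot be covered")
            chosen.append(best)
            uncovered -= sets[best]
        return chosen

    universe = range(1, 8)
    sets = {"a": {1, 2, 3, 4}, "b": {4, 5, 6}, "c": {5, 6, 7}, "d": {1, 7}}
    cost = {"a": 2.0, "b": 1.0, "c": 1.5, "d": 1.0}
    print(greedy_set_cover(universe, sets, cost))     # ['b', 'd', 'a'] for this instance
    ```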

  4. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. AI in CALL--Artificially Inflated or Almost Imminent?

    ERIC Educational Resources Information Center

    Schulze, Mathias

    2008-01-01

    The application of techniques from artificial intelligence (AI) to CALL has commonly been referred to as intelligent CALL (ICALL). ICALL is only slightly older than the "CALICO Journal", and this paper looks back at a quarter century of published research mainly in North America and by North American scholars. This "inventory…
Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

PubMed

Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

2018-04-18

Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex, and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.

AI in CALL--Artificially Inflated or Almost Imminent?

ERIC Educational Resources Information Center

Schulze, Mathias

2008-01-01

The application of techniques from artificial intelligence (AI) to CALL has commonly been referred to as intelligent CALL (ICALL). ICALL is only slightly older than the "CALICO Journal", and this paper looks back at a quarter century of published research mainly in North America and by North American scholars. This "inventory…

Hierarchical Poly Tree Configurations for the Solution of Dynamically Refined Finite Element Models

NASA Technical Reports Server (NTRS)

Gute, G. D.; Padovan, J.

1993-01-01

This paper demonstrates how a multilevel substructuring technique, called the Hierarchical Poly Tree (HPT), can be used to integrate a localized mesh refinement into the original finite element model more efficiently. The optimal HPT configurations for solving isoparametrically square h-, p-, and hp-extensions on single- and multiprocessor computers are derived. In addition, the reduced number of stiffness matrix elements that must be stored when employing this type of solution strategy is quantified. Moreover, the HPT inherently provides localized 'error-trapping' and a logical, efficient means with which to isolate physically anomalous and analytically singular behavior.

Statistical physics of vehicular traffic and some related systems

NASA Astrophysics Data System (ADS)

Chowdhury, Debashish; Santen, Ludger; Schadschneider, Andreas

2000-05-01

In the so-called "microscopic" models of vehicular traffic, attention is paid explicitly to each individual vehicle, each of which is represented by a "particle"; the nature of the "interactions" among these particles is determined by the way the vehicles influence each other's movement. Vehicular traffic, modeled as a system of interacting "particles" driven far from equilibrium, therefore offers the possibility to study various fundamental aspects of truly nonequilibrium systems which are of current interest in statistical physics. Analytical as well as numerical techniques of statistical physics are being used to study these models to understand the rich variety of physical phenomena exhibited by vehicular traffic. Some of these phenomena, observed in vehicular traffic under different circumstances, include transitions from one dynamical phase to another, criticality and self-organized criticality, metastability and hysteresis, phase segregation, etc. In this critical review, written from the perspective of statistical physics, we explain the guiding principles behind all the main theoretical approaches, but we present detailed discussions of the results obtained mainly from the so-called "particle-hopping" models, particularly emphasizing those formulated in recent years using the language of cellular automata.
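One of the best-known particle-hopping cellular automata in this class is the Nagel-Schreckenberg model; a minimal sketch, assuming a single-lane circular road with illustrative parameters:

    # Sketch: Nagel-Schreckenberg cellular automaton (illustrative settings).
    import random

    def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
        """One parallel update: accelerate, brake to the gap, randomize, move."""
        n = len(pos)
        order = sorted(range(n), key=lambda i: pos[i])
        pos_sorted = [pos[i] for i in order]
        new_pos, new_vel = list(pos), list(vel)
        for idx, i in enumerate(order):
            # Empty cells to the car ahead (periodic boundary).
            gap = (pos_sorted[(idx + 1) % n] - pos_sorted[idx] - 1) % road_len
            v = min(vel[i] + 1, v_max)        # accelerate toward v_max
            v = min(v, gap)                   # brake to avoid collision
            if v > 0 and random.random() < p_slow:
                v -= 1                        # random slowdown
            new_vel[i] = v
            new_pos[i] = (pos[i] + v) % road_len
        return new_pos, new_vel

    # Road of 100 cells, 20 cars at distinct random positions, all at rest:
    # pos = random.sample(range(100), 20); vel = [0] * 20
    # for _ in range(1000): pos, vel = nasch_step(pos, vel, 100)

Even this minimal model reproduces the spontaneous jam formation and phase transitions the review discusses.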
Fuzzy Modal Control Applied to Smart Composite Structure

NASA Astrophysics Data System (ADS)

Koroishi, E. H.; Faria, A. W.; Lara-Molina, F. A.; Steffen, V., Jr.

2015-07-01

This paper proposes an active vibration control technique based on Fuzzy Modal Control, as applied to a piezoelectric actuator bonded to a composite structure, forming a so-called smart composite structure. Fuzzy Modal Controllers were found to be well adapted for controlling structures with nonlinear behavior, whose characteristics change considerably with respect to time. The smart composite structure was modelled by using a so-called mixed theory, which uses a single equivalent layer for the discretization of the mechanical displacement field and a layerwise representation of the electrical field. Temperature effects are neglected. For numerical reasons it was necessary to reduce the size of the model of the smart composite structure so that the design of the controllers and the estimator could be performed. The role of the Kalman estimator in the present contribution is to estimate the modal states of the system, which are used by the Fuzzy Modal Controllers. Simulation results illustrate the effectiveness of the proposed vibration control methodology for composite structures.
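The Kalman estimator referred to above reconstructs the modal states from noisy sensor output. A minimal discrete-time sketch, with the A, C, Q, R matrices standing in for a real reduced modal model of the structure:

    # Sketch: one predict/update cycle of a discrete-time Kalman filter.
    import numpy as np

    def kalman_step(x, P, y, A, C, Q, R):
        """x, P: prior state estimate and covariance; y: new measurement.
        A: state transition; C: output matrix; Q, R: noise covariances."""
        # Predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with measurement y
        S = C @ P_pred @ C.T + R                 # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new

The filtered modal states x_new are what the fuzzy modal controllers would consume at each sampling instant.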
Exploring the evolution of London's street network in the information space: A dual approach

NASA Astrophysics Data System (ADS)

Masucci, A. Paolo; Stanilov, Kiril; Batty, Michael

2014-01-01

We study the growth of London's street network in its dual representation, as the city has evolved over the past 224 years. The dual representation of a planar graph is a content-based network, where each node is a set of edges of the planar graph and represents a transportation unit in the so-called information space, i.e., the space where information is handled in order to navigate through the city. First, we discuss a novel hybrid technique to extract dual graphs from planar graphs, called the hierarchical intersection continuity negotiation principle. Then we show that the growth of the network can be analytically described by logistic laws and that the topological properties of the network are governed by robust log-normal distributions characterizing the network's connectivity and small-world properties that are consistent over time. Moreover, we find that double-Pareto-like distributions for the connectivity emerge for major roads and can be modeled via a stochastic content-based network model using simple space-filling principles.

A community long-term hotline therapeutic intervention model for coping with the threat and trauma of war and terror.

PubMed

Gelkopf, Marc; Haimov, Sigal; Lapid, Liron

2015-02-01

Long-term tele-counseling can potentially be a potent intervention mode in war- and terror-related community crisis situations. We aimed to examine a unique long-term telephone-administered intervention targeting community trauma-related crisis situations through a variety of techniques and approaches. 142 participants were evaluated using a non-intrusive by-proxy methodology appraising counselors' standard verbatim reports. Various background measures and elements of the intervention were quantitatively assessed, along with symptomatology and functioning at the onset and end of the intervention. About one-quarter of the wide variety of clients called for someone else in addition to themselves, and most called because of a past event rather than a present crisis situation. The intervention successfully reduced posttraumatic stress symptoms and improved functioning. Most interventions included psychosocial education with additional elements, e.g., self-help tools, and almost 60% also included in-depth processes. In sum, tele-counseling may be a viable and effective intervention model for community-related traumatic stress.

Ancient techniques for new materials

NASA Technical Reports Server (NTRS)

2000-01-01

NASA is looking to biological techniques that are millions of years old to help it develop new materials and technologies for the 21st century. Sponsored by NASA, Jeffrey Brinker of the University of New Mexico is studying how multiple elements can assemble themselves into a composite material that is clear, tough, and impermeable. His research is based on the model of how an abalone builds the nacre, also called mother-of-pearl, inside its shell. The mollusk layers bricks of calcium carbonate (the main ingredient in classroom chalk) and mortar of biopolymer to form a new material that is twice as hard and 1,000 times as tough as either of the original building materials.

Ancient techniques for new materials

NASA Technical Reports Server (NTRS)

2000-01-01

NASA is looking to biological techniques that are millions of years old to help it develop new materials and technologies for the 21st century. Sponsored by NASA, Jeffrey Brinker of the University of New Mexico is studying how multiple elements can assemble themselves into a composite material that is clear, tough, and impermeable. His research is based on the model of how an abalone builds the nacre, also called mother-of-pearl, inside its shell. Strong thin coatings, or lamellae, in Brinker's research are formed when objects are dip-coated.
Evaporation drives the self-assembly of molecular aggregates (micelles) of surfactant, soluble silica, and organic monomers, and their further self-organization into layered organic and inorganic assemblies.

JIGSAW: Preference-directed, co-operative scheduling

NASA Technical Reports Server (NTRS)

Linden, Theodore A.; Gaw, David

1992-01-01

Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value- and variable-ordering decisions. The statistical projections also apply to abstract resources and time periods, allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.

Quantitative model validation of manipulative robot systems

NASA Astrophysics Data System (ADS)

Kartowisastro, Iman Herwidiana

This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the common visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, with all links assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. The conventional feedback control system is used in developing the model. The system's sensitivity to parameter changes is investigated, since some parameters are redundant; this analysis makes it possible to select the most important parameters to be distorted, and it leads to a new term called the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data due to its practicality.
Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and a better understanding of its properties. After several improvements, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of the friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system because of the high nonlinearity inherent in the robot manipulator.

Integrated performance and reliability specification for digital avionics systems

NASA Technical Reports Server (NTRS)

Brehm, Eric W.; Goettge, Robert T.

1995-01-01

This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation.
A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

Computational model of chromosome aberration yield induced by high- and low-LET radiation exposures.

PubMed

Ponomarev, Artem L; George, Kerry; Cucinotta, Francis A

2012-06-01

We present a computational model for calculating the yield of radiation-induced chromosomal aberrations in human cells based on a stochastic Monte Carlo approach and calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. A previously developed DNA-fragmentation model for high- and low-LET radiation, called the NASARadiationTrackImage model, was enhanced to simulate a stochastic process of the formation of chromosomal aberrations from DNA fragments. The current version of the model gives predictions of the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G(0)/G(1) cell cycle phase during the first cell division after irradiation. As the model can predict smaller-sized deletions and rings (<3 Mbp) that are below the resolution limits of current cytogenetic analysis techniques, we present predictions of hypothesized small deletions that may be produced as a byproduct of properly repaired DNA double-strand breaks (DSB) by nonhomologous end-joining. Additionally, the model was used to scale chromosomal exchanges in two or three chromosomes, obtained from whole-chromosome FISH painting analysis techniques, to whole-genome equivalent values.

Existence of bound states of a polaron with a breather in soft potentials

NASA Astrophysics Data System (ADS)

Cuevas, J.; Kevrekidis, P. G.; Frantzeskakis, D. J.; Bishop, A. R.

2006-08-01

We consider polarons in models of coupled electronic and vibrational degrees of freedom, in the presence of a soft nonlinear substrate potential (Morse potential). In particular, we focus on a bound state of a polaron with a breather, a so-called "polarobreather." We analyze the existence of these states based on frequency resonance conditions and illustrate their stability using Floquet spectrum techniques. Multisite solutions of this type are also obtained both in the stationary case (bond-centered and twisted polarons) and in the breathing case (bond-centered and twisted polarobreathers). For all the branches examined, the dynamical evolution of instabilities pertinent to the corresponding solutions is also briefly discussed.
Finally, a different branch of so-called phantom polarobreathers is also demonstrated.

Candidate substances for space bioprocessing methodology and data specification for benefit evaluation

NASA Technical Reports Server (NTRS)

1978-01-01

Analytical and quantitative economic techniques are applied to the evaluation of the economic benefits of a wide range of substances for space bioprocessing. On the basis of expected clinical applications, as well as the size of the patient population that could be affected by those applications, eight substances are recommended for further benefit evaluation. Results show that a transitional probability methodology can be used to model at least one clinical application for each of these substances. In each recommended case, the disease and its therapy are sufficiently well understood and documented, and the statistical data are available to operate the model and produce estimates of the impact of new therapy systems on the cost of treatment, morbidity, and mortality. Utilizing the morbidity and mortality information produced by the model, a standard economic technique called the Value of Human Capital is used to estimate the social welfare benefits attributable to the new therapy systems.

A 3D Model for Eddy Current Inspection in Aeronautics: Application to Riveted Structures

NASA Astrophysics Data System (ADS)

Paillard, S.; Pichenot, G.; Lambert, M.; Voillaume, H.; Dominguez, N.

2007-03-01

The eddy current technique is currently an operational tool used for fastener inspection, an important issue in the maintenance of aircraft structures. The industry calls for faster, more sensitive, and more reliable NDT techniques for the detection and characterization of potential flaws near rivets. In order to reduce development time and to optimize the design and performance assessment of an inspection procedure, the CEA and EADS have started a collaborative work aiming at extending the modeling features of the CIVA non-destructive simulation platform to handle the configuration of a layered planar structure with a rivet and an embedded flaw nearby. Therefore, an approach based on the Volume Integral Method using the Green dyadic formalism, which greatly increases computational efficiency, has been developed.
The first step, modeling the rivet without a flaw as a hole in a multi-stratified structure, has been reached and validated against experimental data in several configurations.

Borrowing yet another technique from manufacturing, investigators find that 'operational flexibility' can offer dividends to ED operations.

PubMed

2015-03-01

Through the use of a sophisticated modeling technique, investigators at the University of Cincinnati have found that the creation of a so-called "flex track" that includes beds that can be assigned to either high-acuity or low-acuity patients has the potential to lower mean wait times for patients when it is added to the traditional fast-track and high-acuity areas of a 50-bed ED that sees 85,000 patients per year. Investigators used discrete-event simulation to model the patient flow and characteristics of the ED at the University of Cincinnati Medical Center, and to test various operational scenarios without disrupting real-world operations. The investigators concluded that patient wait times were lowest when three flex beds were appropriated from the 10-bed fast-track area of the ED. In light of the results, three flex rooms are being incorporated into a newly remodeled ED scheduled for completion later this spring. Investigators suggest the modeling technique could be useful to other EDs interested in optimizing their operational plans. Further, they suggest that ED administrators consider ways to introduce flexibility into departments that are now more rigidly divided between high- and low-acuity areas.

New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF

NASA Astrophysics Data System (ADS)

Cane, D.; Milelli, M.

2009-09-01

The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a postprocessing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques in its use of an adequate weighting of the input forecast models to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble gives good results also when applied to precipitation, a parameter that is quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts applied to a wide spectrum of results over Piemonte's very dense non-GTS weather station network. We focus particularly on an accurate statistical method for bias correction and on ensemble dressing in agreement with the observed precipitation forecast-conditioned PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
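The weighting step at the heart of the SuperEnsemble technique can be sketched in a few lines: fit weights by least squares against observations over the training period, then combine the model anomalies. The array layout and the anomaly convention below are illustrative:

    # Sketch: Multimodel SuperEnsemble weighting via least squares.
    import numpy as np

    def fit_superensemble(train_forecasts, train_obs):
        """train_forecasts: (n_times, n_models) model anomalies over the
        training period; train_obs: (n_times,) observed anomalies.
        Returns one weight per model."""
        w, *_ = np.linalg.lstsq(train_forecasts, train_obs, rcond=None)
        return w

    def superensemble_forecast(forecasts, w, obs_mean):
        # Combined estimate = observed climatology + weighted model anomalies.
        return obs_mean + forecasts @ w

    # Anomalies are deviations from each model's own training-period mean,
    # following the convention of Krishnamurti et al. (1999).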
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cane, D.; Milelli, M.</p> <p>2009-09-01</p> <p>The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a postprocessing method for the estimation of weather forecast parameters reducing direct model output errors. It differs from other ensemble analysis techniques by the use of an adequate weighting of the input forecast models to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully on the continuous parameters like temperature, humidity, wind speed and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble gives good results also when applied on the precipitation, a parameter quite difficult to handle with standard post-processing methods. Here we present our methodology for the Multimodel precipitation forecasts applied on a wide spectrum of results over Piemonte very dense non-GTS weather station network. We will focus particularly on an accurate statistical method for bias correction and on the ensemble dressing in agreement with the observed precipitation forecast-conditioned PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhDT........81C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhDT........81C"><span>Detecting dark matter in the Milky Way with cosmic and gamma radiation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Carlson, Eric C.</p> <p></p> <p>Over the last decade, experiments in high-energy astroparticle physics have reached unprecedented precision and sensitivity which span the electromagnetic and cosmic-ray spectra. These advances have opened a new window onto the universe for which little was previously known. Such dramatic increases in sensitivity lead naturally to claims of excess emission, which call for either revised astrophysical models or the existence of exotic new sources such as particle dark matter. Here we stand firmly with Occam, sharpening his razor by (i) developing new techniques for discriminating astrophysical signatures from those of dark matter, and (ii) by developing detailed foreground models which can explain excess signals and shed light on the underlying astrophysical processes at hand. We concentrate most directly on observations of Galactic gamma and cosmic rays, factoring the discussion into three related parts which each contain significant advancements from our cumulative works. In Part I we introduce concepts which are fundamental to the Indirect Detection of particle dark matter, including motivations, targets, experiments, production of Standard Model particles, and a variety of statistical techniques. 
Automated red blood cells extraction from holographic images using fully convolutional neural networks.

PubMed

Yi, Faliu; Moon, Inkyu; Javidi, Bahram

2017-10-01

In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBC holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, uses only the FCN algorithm to carry out RBC prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBC extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBC phase images are first numerically reconstructed from RBC holograms recorded with off-axis digital holographic microscopy. Then, some RBC phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBC phase images is predicted as either foreground or background using the trained FCN models. The RBC prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBC phase images, and much better RBC separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. PMID:29082078
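The marker-controlled watershed step that FCN-2 adds after the network's pixel prediction can be sketched as follows, assuming the FCN's foreground mask is given; the distance-transform markers and the min_distance parameter are illustrative choices, not necessarily the authors' exact scheme:

    # Sketch: marker-controlled watershed to split touching cells.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    def separate_cells(fcn_mask):
        """fcn_mask: boolean array of foreground pixels predicted by the FCN.
        Returns a label image with touching cells split apart."""
        distance = ndi.distance_transform_edt(fcn_mask)
        # Local maxima of the distance map serve as internal markers.
        peaks = peak_local_max(distance, min_distance=10, labels=fcn_mask)
        markers = np.zeros_like(fcn_mask, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # Flood the inverted distance map outward from the markers.
        return watershed(-distance, markers, mask=fcn_mask)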
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

NASA Astrophysics Data System (ADS)

Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

2017-04-01

Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performances are achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST.
The lowest RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset containing the higher-quality ensemble members), and a posteriori filtering of the least-squares solution.

Resonance Energy Transfer-Based Molecular Switch Designed Using a Systematic Design Process Based on Monte Carlo Methods and Markov Chains

NASA Astrophysics Data System (ADS)

Rallapalli, Arjun

A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light-harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called a closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, similar to a semiconductor transistor like the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power into biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics in RET devices is stochastic in nature, making these devices suitable for stochastic computing, in which true random distribution generation is critical. In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications. We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs.
The model can be used to evaluate the feasibility of other potential orientation control techniques.

Incorporating principal component analysis into air quality ...

EPA Pesticide Factsheets

The efficacy of standard air quality model evaluation techniques is becoming compromised as simulation periods continue to lengthen in response to ever-increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variation (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be drawn. In this demonstration, PCA is applied to a simple evaluation metric: the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO42-) and ammonium (NH4+) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor to monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station-grid cell pairs) that can be used as indicators for a more thorough diagnostic evaluation, thereby hastening and facilitating understanding of the problem.
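A minimal sketch of applying PCA to a space-time matrix of model bias in the spirit described above; the layout (rows = weekly samples, columns = monitoring stations) is an assumption for illustration:

    # Sketch: PCA decomposition of a model-bias matrix.
    import numpy as np
    from sklearn.decomposition import PCA

    def bias_modes(bias, n_modes=3):
        """bias: (n_weeks, n_stations) model-minus-observation values.
        Returns the leading spatial patterns, their time series, and the
        fraction of variance each mode explains."""
        anomalies = bias - bias.mean(axis=0)    # remove per-station means
        pca = PCA(n_components=n_modes)
        scores = pca.fit_transform(anomalies)   # temporal amplitudes
        patterns = pca.components_              # spatial bias patterns
        return patterns, scores, pca.explained_variance_ratio_

Each leading mode pairs a spatial bias pattern with the time series of its strength, which is exactly the kind of recurring, systematic signal the paper uses to target diagnostic evaluation.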
Application of Interval Predictor Models to Space Radiation Shielding

NASA Technical Reports Server (NTRS)

Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

2016-01-01

This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation will fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions about the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
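A much-simplified sketch of the minimal-spread idea: with radial basis features, choosing upper and lower coefficient vectors that contain every data point while minimizing the mean predicted spread is a linear program. This is an illustrative reduction, not the paper's exact formulation, which adds the further spread and oscillation constraints described above:

    # Sketch: minimal-spread interval predictor via linear programming.
    import numpy as np
    from scipy.optimize import linprog

    def fit_ipm(x, y, centers, width=1.0):
        """x, y: 1-D input/output data arrays; centers: RBF centers.
        Returns (theta_upper, theta_lower) coefficient vectors."""
        phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
        n, m = phi.shape
        # Decision variables: [theta_upper (m), theta_lower (m)].
        # Objective: minimize mean(phi @ theta_u) - mean(phi @ theta_l).
        c = np.concatenate([phi.mean(axis=0), -phi.mean(axis=0)])
        # Containment: phi @ theta_l <= y  and  phi @ theta_u >= y.
        A_ub = np.block([[np.zeros((n, m)), phi],
                         [-phi, np.zeros((n, m))]])
        b_ub = np.concatenate([y, -y])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (2 * m))
        return res.x[:m], res.x[m:]

    # Predicted interval at x0: [phi(x0) @ theta_lower, phi(x0) @ theta_upper].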
Time-Varying Delay Estimation Applied to the Surface Electromyography Signals Using the Parametric Approach

NASA Astrophysics Data System (ADS)

Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier

Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to account for the non-stationarity of the data during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVD), and that the data are non-stationary (their Power Spectral Density (PSD) changes). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and by a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao Lower Bounds (CRLB), one for the estimated TVD model parameters and one for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
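A minimal sketch of the parametric idea: model the delay as a polynomial in time and fit its coefficients by minimizing the squared error, which is the MLE under Gaussian noise, applying fractional delays by interpolation. The derivative-free optimizer below is a stand-in for the paper's Newton and simulated-annealing solvers:

    # Sketch: polynomial time-varying delay estimation by least squares.
    import numpy as np
    from scipy.optimize import minimize

    def estimate_tvd(x, y, fs, order=2):
        """x, y: numpy arrays with y(t) ~ x(t - d(t)); fs: sample rate (Hz).
        Returns polynomial coefficients of d(t) in seconds, highest first."""
        t = np.arange(len(x)) / fs

        def neg_log_likelihood(coeffs):
            delay = np.polyval(coeffs, t)
            # Evaluate x at the delayed instants (fractional-delay interp).
            x_shifted = np.interp(t - delay, t, x)
            return np.sum((y - x_shifted) ** 2)  # Gaussian MLE = least squares

        res = minimize(neg_log_likelihood, x0=np.zeros(order + 1),
                       method="Nelder-Mead")
        return res.x

The fitted delay polynomial converts directly to an MFCV estimate once the inter-electrode distance is known.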
Essential Psychoanalysis: Toward a Re-Appraisal of the Relationship between Psychoanalysis and Dynamic Psychotherapy.

PubMed

Sripada, Bhaskar

2015-09-01

Freud stated that any line of investigation which recognizes transference and resistance, regardless of its results, was entitled to call itself psychoanalysis (Freud, 1914a, p. 16). Separately, he wrote that psychoanalysis was the science of unconscious mental processes (Freud, 1925, p. 70). Combining these two ideas defines Essential Psychoanalysis: any line of treatment, theory, or science which recognizes the facts of the unconscious, transference, or resistance, and takes them as the starting point of its work, regardless of its results, is psychoanalysis. Freud formulated two conflicting definitions of psychoanalysis: Essential Psychoanalysis, applicable to all analysts regardless of their individuality, and Extensive Psychoanalysis, modeled on his individuality. They differ in how psychoanalytic technique is viewed. For Essential Psychoanalysis, flexible recommendations constitute psychoanalytic technique, whereas for Extensive Psychoanalysis, rules constitute a key part of psychoanalytic technique.

A comparison in Colorado of three methods to monitor breeding amphibians

USGS Publications Warehouse

Corn, P.S.; Muths, E.; Iko, W.M.

2000-01-01

We surveyed amphibians at 4 montane and 2 plains lentic sites in northern Colorado using 3 techniques: standardized call surveys, automated recording devices (frog-loggers), and intensive surveys including capture-recapture techniques. Amphibians were observed at 5 sites. Species richness varied from 0 to 4 species at each site. Richness scores, the sums of species richness among sites, were similar among methods: 8 for call surveys, 10 for frog-loggers, and 11 for intensive surveys (9 if the non-vocal salamander Ambystoma tigrinum is excluded). The frog-logger at 1 site recorded Spea bombifrons, which was not active during the times when call and intensive surveys were conducted. Relative abundance scores from call surveys failed to reflect a relatively large population of Bufo woodhousii at 1 site and only weakly differentiated among different-sized populations of Pseudacris maculata at 3 other sites. For extensive applications, call surveys have the lowest costs and fewest requirements for highly trained personnel. However, for a variety of reasons, call surveys cannot be used with equal effectiveness in all parts of North America.

Modelling of Folding Patterns in Flat Membranes and Cylinders by Origami

NASA Astrophysics Data System (ADS)

Nojima, Taketoshi

This paper describes methods for folding thin flat sheets as well as cylindrical shells by modelling folding patterns on the traditional Japanese Origami technique. New folding patterns have been devised for thin flat square or circular membranes by modifying the so-called Miura-ori (one node with four folding lines). Several folding patterns for cylindrical shells have been newly developed, including spiral configurations. The devised foldable cylindrical shells were made using polymer sheets, and it has been confirmed that they can be folded quite well. The devised models will make it possible to construct foldable/deployable space structures as well as to manufacture foldable industrial products and household goods, e.g., bottles for soft drinks.

Extraction of decision rules via imprecise probabilities

NASA Astrophysics Data System (ADS)

Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.

2017-05-01

Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a parametric mathematical model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident.
We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.

In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

DOE PAGES

Rupnowski, Przemyslaw; Ulsh, Michael J.; Sopori, Bhushan; ...

2017-08-18

This work focuses on a new technique called active thermal scanning for in-line monitoring of the porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infrared camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. It is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.
In-line monitoring of Li-ion battery electrode porosity and areal loading using active thermal scanning - modeling and initial experiment

NASA Astrophysics Data System (ADS)

Rupnowski, Przemyslaw; Ulsh, Michael; Sopori, Bhushan; Green, Brian G.; Wood, David L.; Li, Jianlin; Sheng, Yangping

2018-01-01

This work focuses on a new technique called active thermal scanning for in-line monitoring of the porosity and areal loading of Li-ion battery electrodes. In this technique a moving battery electrode is subjected to thermal excitation and the induced temperature rise is monitored using an infrared camera. Static and dynamic experiments with speeds up to 1.5 m min-1 are performed on both cathodes and anodes, and a combined micro- and macro-scale finite element thermal model of the system is developed. It is shown experimentally and through simulations that during thermal scanning the temperature profile generated in an electrode depends on both coating porosity (or areal loading) and thickness. It is concluded that by inverting this relation the porosity (or areal loading) can be determined, if thermal response and thickness are simultaneously measured.

Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

NASA Technical Reports Server (NTRS)

Lum, Karen; Hihn, Jairus; Menzies, Tim

2006-01-01

While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness and instability.
Compensating for pneumatic distortion in pressure sensing devices

NASA Technical Reports Server (NTRS)

Whitmore, Stephen A.; Leondes, Cornelius T.

1990-01-01

A technique of compensating for pneumatic distortion in pressure sensing devices was developed and verified. This compensation allows conventional pressure sensing technology to obtain improved unsteady pressure measurements. Pressure distortion caused by frictional attenuation and pneumatic resonance within the sensing system makes obtaining unsteady pressure measurements by conventional sensors difficult. Most distortion occurs within the pneumatic tubing which transmits pressure impulses from the aircraft's surface to the measurement transducer. To avoid pneumatic distortion, experiment designers mount the pressure sensor at the surface of the aircraft (called in-situ mounting). In-situ transducers cannot always fit in the available space, and sometimes pneumatic tubing must be run from the aircraft's surface to the pressure transducer. A technique to measure unsteady pressure data using conventional pressure sensing technology was therefore developed. A pneumatic distortion model is reduced to a low-order, state-variable model retaining most of the dynamic characteristics of the full model. The reduced-order model is coupled with results from minimum variance estimation theory to develop an algorithm to compensate for the effects of pneumatic distortion. Both postflight and real-time algorithms are developed and evaluated using simulated and flight data.
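A heavily simplified sketch of the compensation idea follows, assuming the tubing behaves as a first-order lag and the surface pressure as a random walk; the paper's reduced-order model and minimum-variance algorithm are more elaborate, and all constants here are illustrative.

```python
# Sketch: estimating undistorted surface pressure from a lagged transducer
# reading with a two-state Kalman filter. First-order tubing lag and all
# noise levels are illustrative assumptions, not the paper's model.
import numpy as np

dt, tau = 0.001, 0.02                  # sample time, assumed tubing time constant
a = np.exp(-dt / tau)
F = np.array([[a, 1 - a],              # tube pressure relaxes toward surface pressure
              [0.0, 1.0]])             # surface pressure modeled as a random walk
H = np.array([[1.0, 0.0]])             # the transducer sees only the tube pressure
Q = np.diag([1e-8, 1e-2])              # process noise (random-walk drive on p)
R = np.array([[1e-4]])                 # measurement noise

def compensate(z_seq):
    """Kalman estimate of the undistorted surface pressure from lagged readings."""
    s = np.array([z_seq[0], z_seq[0]])
    P = np.eye(2)
    estimates = []
    for z in z_seq:
        s = F @ s                                        # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # update
        s = s + K @ (np.array([z]) - H @ s)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(s[1])                           # estimated surface pressure
    return np.array(estimates)

t = np.arange(0.0, 0.2, dt)
true_p = np.sin(40 * t)                # fast surface-pressure oscillation
tube = np.zeros_like(t)
for k in range(1, len(t)):             # simulate the tubing lag
    tube[k] = a * tube[k - 1] + (1 - a) * true_p[k - 1]
est = compensate(tube)                 # est tracks true_p better than tube does
```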
Electromagnetic Test-Facility characterization: an identification approach

DOE Office of Scientific and Technical Information (OSTI.GOV)

Zicker, J.E.; Candy, J.V.

The response of an object subjected to high-energy, transient electromagnetic (EM) fields, sometimes called electromagnetic pulses (EMP), is an important issue in the survivability of electronic systems (e.g., aircraft), especially when the field has been generated by a high-altitude nuclear burst. The characterization of transient response information is a matter of national concern. In this report we discuss techniques to: (1) improve signal processing at a test facility; and (2) parameterize a particular object response. First, we discuss the application of identification-based signal processing techniques to improve signal levels at the Lawrence Livermore National Laboratory (LLNL) EM Transient Test Facility. We identify models of test equipment and then use these models to deconvolve the input/output sequences for the object under test. A parametric model of the object is identified from this data. The model can be used to extrapolate the response to threat-level EMP. Also discussed are the development of a facility simulator (EMSIM), useful for experimental design and calibration, and a deconvolution algorithm (DECONV), useful for removing probe effects from the measured data.

An Evaluation of Understandability of Patient Journey Models in Mental Health

PubMed Central

2016-01-01

Background: There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information systems and health information technology can benefit the patients, staff, and the delivery of care. Objectives: This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing-based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Method: Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and a survey to measure usability of the various models. Results: The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. Conclusions: The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers.
PMID:27471006

Comparative Analysis of Sequential Proximal Optimizing Technique Versus Kissing Balloon Inflation Technique in Provisional Bifurcation Stenting: Fractal Coronary Bifurcation Bench Test

PubMed

Finet, Gérard; Derimay, François; Motreff, Pascal; Guerin, Patrice; Pilet, Paul; Ohayon, Jacques; Darremont, Olivier; Rioufol, Gilles

2015-08-24

This study used a fractal bifurcation bench model to compare 6 optimization sequences for coronary bifurcation provisional stenting, including 1 novel sequence without kissing balloon inflation (KBI), comprising initial proximal optimizing technique (POT) + side-branch inflation (SBI) + final POT, called "re-POT." In provisional bifurcation stenting, KBI fails to improve the rate of major adverse cardiac events. Proximal geometric deformation increases the rate of in-stent restenosis and target lesion revascularization. A bifurcation bench model was used to compare KBI alone, KBI after POT, KBI with asymmetric inflation pressure after POT, and 2 sequences without KBI: initial POT plus SBI, and initial POT plus SBI with final POT (called "re-POT"). For each protocol, 5 stents were tested using 2 different drug-eluting stent designs: that is, a total of 60 tests. Compared with the classic KBI-only sequence and those associating POT with modified KBI, the re-POT sequence gave significantly (p < 0.05) better geometric results: it reduced SB ostium stent-strut obstruction from 23.2 ± 6.0% to 5.6 ± 8.3%, provided perfect proximal stent apposition with almost perfect circularity (ellipticity index reduced from 1.23 ± 0.02 to 1.04 ± 0.01), reduced proximal area overstretch from 24.2 ± 7.6% to 8.0 ± 0.4%, and reduced global strut malapposition from 40 ± 6.2% to 2.6 ± 1.4%.
In comparison with 5 other techniques, the re-POT sequence significantly optimized the final result of provisional coronary bifurcation stenting, maintaining circular geometry while significantly reducing SB ostium strut obstruction and global strut malapposition. These experimental findings confirm that provisional stenting may be optimized more effectively without KBI using re-POT.

Asteroid shape and spin statistics from convex models

NASA Astrophysics Data System (ADS)

Torppa, J.; Hentunen, V.-P.; Pääkkönen, P.; Kehusmaa, P.; Muinonen, K.

2008-11-01

We introduce techniques for characterizing convex shape models of asteroids with a small number of parameters, and apply these techniques to a set of 87 models from convex inversion. We present three different approaches for determining the overall dimensions of an asteroid. With the first technique, we measured the dimensions of the shapes in the direction of the rotation axis and in the equatorial plane; with the two other techniques, we derived the best-fit ellipsoid. We also computed the inertia matrix of the model shape to test how well it represents the target asteroid, i.e., to find indications of possible non-convex features or albedo variegation, which the convex shape model cannot reproduce. We used shape models for 87 asteroids to perform statistical analyses and to study dependencies between shape and rotation period, size, and taxonomic type. We detected correlations, but more data are required, especially on small and large objects, as well as slow and fast rotators, to reach a more thorough understanding of the dependencies. Results show, e.g., that convex models of asteroids are not that far from ellipsoids in a root-mean-square sense, even though clearly irregular features are present. We also present new spin and shape solutions for Asteroids (31) Euphrosyne, (54) Alexandra, (79) Eurynome, (93) Minerva, (130) Elektra, (376) Geometria, (471) Papagena, and (776) Berbericia. We used a so-called semi-statistical approach to obtain a set of possible spin state solutions. The number of solutions depends on the abundance of the data, which for Eurynome, Elektra, and Geometria was extensive enough for determining an unambiguous spin and shape solution. Data for Euphrosyne, on the other hand, provided a wide distribution of possible spin solutions, whereas the rest of the targets have two or three possible solutions.
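The dimension measures described above can be illustrated on a vertex cloud. The sketch below assumes the rotation axis is aligned with z and uses a PCA-style moment fit as a rough stand-in for the paper's best-fit-ellipsoid procedures; the point cloud is synthetic.

```python
# Sketch: overall dimensions and an approximate best-fit ellipsoid for a
# shape model given as vertices, rotation axis along z. Illustrative only.
import numpy as np

def overall_dimensions(vertices):
    """Extent along the rotation (z) axis and max extent in the equatorial plane."""
    z_dim = vertices[:, 2].max() - vertices[:, 2].min()
    eq_dim = 2.0 * np.linalg.norm(vertices[:, :2], axis=1).max()
    return z_dim, eq_dim

def ellipsoid_axes(vertices):
    """Approximate ellipsoid semi-axes from the principal second moments."""
    centered = vertices - vertices.mean(axis=0)
    cov = centered.T @ centered / len(vertices)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # The factor 5 holds for a uniform solid ellipsoid; here it is only a
    # crude scaling for a vertex cloud.
    return np.sqrt(5.0 * evals)

pts = np.random.default_rng(1).normal(size=(500, 3)) * [3.0, 2.0, 1.0]
print(overall_dimensions(pts), ellipsoid_axes(pts))
```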
A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

NASA Astrophysics Data System (ADS)

Roul, Pradip; Warbhe, Ujwal

2017-08-01

The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999), is useful for obtaining approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome this shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen diffusion in a spherical cell, and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing semi-numerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution, and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).

Compressible cavitation with stochastic field method

NASA Astrophysics Data System (ADS)

Class, Andreas; Dumond, Julien

2012-11-01

Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, solving pdf transport based on Eulerian fields, has been proposed; it eliminates the necessity to mix Eulerian and Lagrangian techniques or to prescribe pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation," a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations.
The method is compatible with finite-volume codes, where all existing physical models available for Lagrangian techniques, presumed pdf, or binning methods can be easily extended to the stochastic field formulation.

Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

NASA Technical Reports Server (NTRS)

Tilton, James C.; Manohar, Mareboyana

1994-01-01

Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) gives a better effective radiometric resolution than TLLC for a given channel rate.
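The TLLC baseline is simple enough to state in a few lines. In the sketch below, zlib stands in for the "appropriate lossless compression technique" and the image is synthetic; the paper's comparison is against JPEG/DCT and VQ coders, which are not reproduced here.

```python
# Sketch of "truncation followed by lossless compression" (TLLC): drop k
# least-significant bits per pixel, then apply a lossless coder.
import zlib
import numpy as np

def tllc(image, k):
    """Truncate k LSBs from each pixel, then losslessly compress."""
    truncated = (image >> k).astype(np.uint8)
    return zlib.compress(truncated.tobytes(), 9)

rng = np.random.default_rng(2)
img = rng.normal(128, 20, size=(256, 256)).clip(0, 255).astype(np.uint8)
for k in range(4):
    print(k, len(tllc(img, k)), "bytes")   # more truncation -> smaller output
```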
CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification

PubMed

Yu, Yinan; Diamantaras, Konstantinos I.; McKelvey, Tomas; Kung, Sun-Yuan

2018-02-01

In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization to iteratively improve the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

Measurement of Stress Distribution Around a Circular Hole in a Plate Under Bending Moment Using Phase-shifting Method with Reflective Polariscope Arrangement

NASA Astrophysics Data System (ADS)

Baek, Tae Hyun

Photoelasticity is one of the most widely used whole-field optical methods for stress analysis. The technique of birefringent coatings, also called the method of photoelastic coatings, extends the classical procedures of model photoelasticity to the measurement of surface strains in opaque models made of any structural material. The photoelastic phase-shifting method can be used for the determination of the phase values of isochromatics and isoclinics. In this paper, the photoelastic phase-shifting technique and the conventional Babinet-Soleil compensation method were utilized to analyze a specimen with a triangular hole and a circular hole under bending. Photoelastic phase-shifting is a whole-field measurement; conventional compensation, by contrast, is a point measurement. Three groups of results were obtained: by the phase-shifting method with a reflective polariscope arrangement, by the conventional compensation method, and by FEM simulation. The results from the first two methods agree with each other relatively well, considering experimental error. The advantage of the photoelastic phase-shifting method is that it can measure the stress distribution accurately close to the edge of holes.

Multidisciplinary Responses to the Sexual Victimization of Children: Use of Control Phone Calls

PubMed

Canavan, J. William; Borowski, Christine; Essex, Stacy; Perkowski, Stefan

2017-10-01

This descriptive study addresses the question of the value of one-party-consent phone calls regarding the sexual victimization of children. The authors reviewed 4 years of experience with children between the ages of 3 and 18 years selected for control phone calls after a forensic interview by the New York State Police forensic interviewer. The forensic interviewer identified appropriate cases for control phone calls considering New York State law, the child's capacity to make the call, the presence of another person to make the call, and a supportive residence. The control phone call process has been extremely effective forensically. Offenders choose to avoid trial by taking a plea bargain, thereby dramatically speeding up the criminal judicial and family court processes. An additional outcome of the control phone call is that the alleged offender's own words saved the child from the trauma of testifying in court. The control phone call reduced the need for children to repeat their stories to various interviewers.
A successful control phone call gives the child a sense of vindication. This technique is the only technique that preserves the actual communication pattern between the alleged victim and the alleged offender. This can be of great value to the mental health professionals working with both the child and the alleged offender. Cautions must be considered regarding potential serious adverse effects on the child. The multidisciplinary team members must work together in the control phone call. The descriptive nature of this study did not allow the authors to gather adequate demographic data, a subject that should be addressed in a future prospective study.

Fluid mechanics of slurry flow through the grinding media in ball mills

DOE Office of Scientific and Technical Information (OSTI.GOV)

Songfack, P.K.; Rajamani, R.K.

1995-12-31

The slurry transport within the ball mill greatly influences the mill holdup, residence time, breakage rate, and hence the power draw and the particle size distribution of the mill product. However, residence-time distribution and holdup in industrial mills could not be predicted a priori. Indeed, it is impossible to determine the slurry loading in continuously operating mills by direct measurement, especially in industrial mills. In this paper, the slurry transport problem is solved using the principles of fluid mechanics. First, the motion of the ball charge and its expansion are predicted by a technique called the discrete element method. Then the slurry flow through the porous ball charge is tackled with a fluid-flow technique called the marker-and-cell method. This may be the only numerical technique capable of tracking the slurry free surface as it fluctuates with the motion of the ball charge. The result is a prediction of the slurry profile in both the radial and axial directions. Hence, it leads to a detailed description of slurry mass and ball charge within the mill. The model predictions are verified with pilot-scale experimental work. This novel approach based on the physics of fluid flow is devoid of any empiricism. It is shown that the holdup of industrial mills at a given feed percent solids can be predicted successfully.

Cardio-Surgical Thermography

NASA Astrophysics Data System (ADS)

Fiorini, A. R.; Fumero, R.; Marchesi, R.

1983-03-01

Extracorporeal circulation allows direct access inside the chest: it may be used to carry out physiological research. The thermo-chemical protection of the myocardium during heart surgery, called cardioplegia, is one of the latest outstanding techniques in patient safety.
Thermocardiography monitoring during the infusion of the cardioplegic solution allows continuous assessment of rapid temperature distribution changes and shows exactly the extent of myocardium involved. Using a distinctive pseudocolor digital image enhancement, it is possible to emphasize the coronary flow in the involved areas and to model the thermo-fluid-dynamic behavior of the inspected heart.

Simulation of Etching in Chlorine Discharges Using an Integrated Feature Evolution-Plasma Model

NASA Technical Reports Server (NTRS)

Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.; Biegel, Bryan (Technical Monitor)

2002-01-01

To better utilize its vast collection of heterogeneous resources that are geographically distributed across the United States, NASA is constructing a computational grid called the Information Power Grid (IPG). This paper describes various tools and techniques that we are developing to measure and improve the performance of a broad class of NASA applications when run on the IPG. In particular, we are investigating the areas of grid benchmarking, grid monitoring, user-level application scheduling, and decentralized system-level scheduling.

UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

NASA Astrophysics Data System (ADS)

Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

2017-08-01

On the Waterfront Italo Falcomatà of Reggio Calabria one can admire the most extensive surviving stretch of the walls of the Hellenistic period of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the years this stretch of wall has always been part of the city's outer walls; from the reconstruction of Reggio after the earthquake of 1783 onwards it was restored countless times to cope with degradation over time and with increasingly innovative and sophisticated siege techniques. The walls have been the subject of several historical studies concerning their construction techniques, maintenance, and restoration. This note describes the methodology for the implementation of a three-dimensional model of the Greek Walls conducted by the Geomatics Laboratory, belonging to the DICEAM Department of the University "Mediterranea" of Reggio Calabria. The 3D modeling is based on imaging techniques, such as Digital Photogrammetry and Computer Vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan.
The results demonstrate the effectiveness of the technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.

Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

NASA Astrophysics Data System (ADS)

Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

2015-09-01

Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited, as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique, and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011, for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model, were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
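The local singularity analysis can be sketched with a generic multifractal-style estimator: the pointwise exponent is read off a log-log regression of coarse-grained means against window size. The estimator below is an illustrative assumption, not the paper's exact algorithm, and the rainfall field is synthetic.

```python
# Sketch: estimate a pointwise singularity exponent alpha from how local
# coarse-grained averages scale with window size.
import numpy as np

def local_singularity(field, i, j, scales=(1, 2, 4, 8)):
    """Estimate the singularity exponent alpha at pixel (i, j)."""
    means, sizes = [], []
    for s in scales:
        window = field[max(0, i - s):i + s + 1, max(0, j - s):j + s + 1]
        means.append(window.mean() + 1e-12)   # avoid log(0) in dry pixels
        sizes.append(2 * s + 1)
    # mean(window) ~ size**(alpha - d) in 2-D, so the slope gives alpha - 2
    slope = np.polyfit(np.log(sizes), np.log(means), 1)[0]
    return slope + 2.0

rain = np.random.default_rng(3).gamma(0.3, 2.0, size=(64, 64))
print(local_singularity(rain, 32, 32))
```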
Reduced modeling of signal transduction – a modular approach

PubMed Central

Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter

2007-01-01

Background: Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results: We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains, the transient as well as the stationary errors caused by the reduction are negligible. Conclusion: The new layer-based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good approximations, especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

NASA Astrophysics Data System (ADS)

Rodionov, A. A.; Turchin, V. I.

2017-06-01

We propose a new method of signal processing in antenna arrays, which is called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method ensures a variance of the estimated arrival angle of the plane wave that is close to the Cramer-Rao lower bound, and is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be efficiently used for estimating the time dependence of the useful signal.

An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction

NASA Technical Reports Server (NTRS)

Juang, J. N.; Pappa, R. S.

1985-01-01

A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum-order realization, which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
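The core of ERA fits in a short sketch: stack the impulse-response (Markov) parameters into Hankel matrices, take an SVD, truncate to the chosen model order, and read off a minimum-order realization. The accuracy indicators that separate system modes from noise modes are omitted here, and the test signal is an illustrative two-mode example.

```python
# Minimal ERA sketch: Markov parameters Y[k] = C A^(k-1) B -> (A, B, C).
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Identify a reduced-order realization from Markov parameters."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    # Truncate to the model order; noise modes carry small singular values.
    U, Vt = U[:, :order], Vt[:order]
    Si = np.diag(1.0 / np.sqrt(s[:order]))
    Sr = np.diag(np.sqrt(s[:order]))
    A = Si @ U.T @ H1 @ Vt.T @ Si
    B = (Sr @ Vt)[:, :1]
    C = (U @ Sr)[:1, :]
    return A, B, C

# Example: recover a decaying oscillation from its impulse response.
pole = 0.9 * np.exp(0.5j)
y = [2.0 * (pole ** k).real for k in range(1, 40)]   # Y[k] = p^k + conj(p)^k
A, B, C = era(y, order=2)
print(np.linalg.eigvals(A))   # ~0.9 e^{±0.5j}, the identified modes
```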
Encoding techniques for complex information structures in connectionist systems

NASA Technical Reports Server (NTRS)

Barnden, John; Srinivas, Kankanahalli

1990-01-01

Two general information encoding techniques called relative position encoding and pattern similarity association are presented. They are claimed to be a convenient basis for the connectionist implementation of complex, short-term information processing of the sort needed in common sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high-level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail. The rich inter-relationships of these other connectionist and computer methods are also clarified. The particular, simple forms that the relative position encoding and pattern similarity association techniques take in the authors' own connectionist system, called Conposit, are then discussed, in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.

Numerical modelling of a peripheral arterial stenosis using dimensionally reduced models and kernel methods

PubMed

Köppl, Tobias; Santin, Gabriele; Haasdonk, Bernard; Helmig, Rainer

2018-05-06

In this work, we consider two kinds of model reduction techniques to simulate blood flow through the largest systemic arteries, where a stenosis is located in a peripheral artery, i.e. in an artery that is located far away from the heart. For our simulations we place the stenosis in one of the tibial arteries belonging to the right lower leg (right posterior tibial artery). The model reduction techniques that are used are, on the one hand, dimensionally reduced models (1-D and 0-D models, the so-called mixed-dimension model) and, on the other hand, surrogate models produced by kernel methods. Both methods are combined in such a way that the mixed-dimension models yield training data for the surrogate model, where the surrogate model is parametrised by the degree of narrowing of the peripheral stenosis. By means of a well-trained surrogate model, we show that simulation data can be reproduced with satisfactory accuracy and that parameter optimisation or state estimation problems can be solved in a very efficient way. Furthermore, it is demonstrated that a surrogate model enables us to present, after a very short simulation time, the impact of a varying degree of stenosis on blood flow, obtaining a speedup of several orders over the full model.
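The surrogate step can be illustrated with a plain Gaussian-kernel interpolant. The training pairs below are fabricated placeholders standing in for mixed-dimension simulation outputs, and the authors' specific kernel method may differ; only the train-once, evaluate-instantly pattern is the point.

```python
# Sketch: kernel surrogate mapping stenosis degree -> a flow quantity,
# trained on a handful of (hypothetical) reduced-model simulations.
import numpy as np

def gauss_kernel(a, b, width=0.2):
    return np.exp(-((a[:, None] - b[None, :]) / width) ** 2)

# Hypothetical training set: degree of narrowing -> pressure drop (arb. units)
degrees = np.linspace(0.0, 0.9, 10)
pressure_drop = 1.0 / (1.0 - degrees) ** 2 - 1.0   # toy stand-in for 1-D runs

K = gauss_kernel(degrees, degrees) + 1e-8 * np.eye(len(degrees))  # regularized
weights = np.linalg.solve(K, pressure_drop)

def surrogate(d):
    """Instant surrogate evaluation for a new stenosis degree d."""
    return gauss_kernel(np.atleast_1d(d), degrees) @ weights

print(surrogate(0.55))   # vs. rerunning the full mixed-dimension model
```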
Modeling of ultrasonic and terahertz radiations in defective tiles for condition monitoring of thermal protection systems

NASA Astrophysics Data System (ADS)

Kabiri Rahani, Ehsan

Condition-based monitoring of Thermal Protection Systems (TPS) is necessary for safe operations of space shuttles when quick turn-around time is desired. In the current research Terahertz radiation (T-ray) has been used to detect mechanical and heat-induced damage in TPS tiles. Voids and cracks inside the foam tile are denoted as mechanical damage, while property changes due to long- and short-term exposure of tiles to high heat are denoted as heat-induced damage. Ultrasonic waves cannot detect cracks and voids inside the tile because the tile material (silica foam) has high attenuation for ultrasonic energy. Instead, electromagnetic terahertz radiation can easily penetrate into the foam material and detect the internal voids, although this electromagnetic radiation finds it difficult to detect delaminations between the foam tile and the substrate plate. Thus these two technologies are complementary to each other for TPS inspection. Ultrasonic and T-ray field modeling in free and mounted tiles with different types of mechanical and thermal damage has been the focus of this research. Shortcomings and limitations of the FEM method in modeling 3D problems, especially at high frequencies, are discussed, and a newly developed semi-analytical technique called the Distributed Point Source Method (DPSM) has been used for this purpose. A FORTRAN code called DPSM3D has been developed to model both ultrasonic and electromagnetic problems using the conventional DPSM method. This code is designed in a general form capable of modeling a variety of geometries. DPSM has been extended from ultrasonic applications to electromagnetics to model THz Gaussian beams, multilayered dielectrics and Gaussian beam-scatterer interaction problems. Since the conventional DPSM has some drawbacks, two modification methods, called G-DPSM and ESM, have been proposed to overcome them. The conventional DPSM in the past was only capable of solving time-harmonic (frequency domain) problems; time history was obtained by the FFT (Fast Fourier Transform) algorithm. In this research DPSM has been extended to model transient problems without using the FFT. This modified technique is denoted t-DPSM. Using DPSM, scattering of focused ultrasonic fields by single and multiple cavities in fluid and solid media is studied. It is investigated when two cavities in close proximity can be distinguished and when this is not possible. A comparison between the radiation forces generated by the ultrasonic energies reflected from two small cavities versus a single big cavity is also carried out.

Improved 3-omega measurement of thermal conductivity in liquid, gases, and powders using a metal-coated optical fiber

PubMed

Schiffres, Scott N.; Malen, Jonathan A.

2011-06-01

A novel 3ω thermal conductivity measurement technique called metal-coated 3ω is introduced for use with liquids, gases, powders, and aerogels. This technique employs a micron-scale metal-coated glass fiber as a heater/thermometer that is suspended within the sample. Metal-coated 3ω exceeds alternative 3ω-based fluid sensing techniques in a number of key metrics, enabling rapid measurements of small samples of materials with very low thermal effusivity (gases), using smaller temperature oscillations with lower parasitic conduction losses. Its advantages relative to existing fluid measurement techniques, including transient hot-wire, steady-state methods, and solid-wire 3ω, are discussed. A generalized n-layer concentric cylindrical periodic heating solution that accounts for thermal boundary resistance is presented. Improved sensitivity to boundary conductance is recognized through this model.
Metal-coated 3ω was successfully validated through a benchmark study of gases and liquids spanning two orders of magnitude in thermal conductivity.

Numerical solution methods for viscoelastic orthotropic materials

NASA Technical Reports Server (NTRS)

Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

1988-01-01

Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher-order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials.
The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.

Full reinforcement operators in aggregation techniques

PubMed

Yager, R. R.; Rybalov, A.

1998-01-01

We introduce the concept of upward reinforcement in aggregation as one in which a collection of high scores can reinforce or corroborate each other to give an even higher score than any of the individual arguments. The concept of downward reinforcement is also introduced as one in which low scores reinforce each other. Our concern is with full reinforcement aggregation operators, those exhibiting both upward and downward reinforcement. It is shown that the t-norm and t-conorm operators are not full reinforcement operators. A class of operators called fixed identity MICA operators is shown to exhibit the property of full reinforcement. We present some families of these operators. We use the fuzzy system modeling technique to provide further examples of these operators.
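Full reinforcement is easy to exhibit concretely. The classic "three pi" operator below has identity 0.5 and reinforces in both directions; it illustrates the property described in the abstract, though it is not necessarily one of the paper's specific MICA families.

```python
# Sketch of a fully reinforcing aggregation operator on scores in (0, 1).
def reinforce(scores):
    """Aggregate with both upward and downward reinforcement."""
    num = den = 1.0
    for s in scores:
        num *= s
        den *= (1.0 - s)
    return num / (num + den)

print(reinforce([0.8, 0.8]))   # 0.941...: high scores corroborate each other
print(reinforce([0.2, 0.2]))   # 0.058...: low scores reinforce downward
print(reinforce([0.8, 0.5]))   # 0.800...: 0.5 acts as the fixed identity
```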
At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.

Probing Quark-Gluon-Plasma properties with a Bayesian model-to-data comparison

NASA Astrophysics Data System (ADS)

Cai, Tianji; Bernhard, Jonah; Ke, Weiyao; Bass, Steffen; Duke QCD Group Team

2016-09-01

Experiments at RHIC and the LHC study a special state of matter called the Quark-Gluon Plasma (QGP), in which quarks and gluons roam freely, by colliding relativistic heavy ions. Given the transitory nature of the QGP, its properties can only be explored by comparing computational models of its formation and evolution to experimental data. The models fall, roughly speaking, into two categories: those solely using relativistic viscous hydrodynamics (pure hydro models) and those that in addition couple to a microscopic Boltzmann transport for the later evolution of the hadronic decay products (hybrid models). Each of these models has multiple parameters that encode the physical properties we want to probe and that need to be calibrated to experimental data, a task which is computationally expensive but necessary for knowledge extraction and determination of the models' quality. Our group has developed an analysis technique based on Bayesian statistics to perform the model calibration and to extract probability distributions for each model parameter. Following previous work that applies the technique to the hybrid model, we now perform a similar analysis on a pure-hydro model and display the posterior distributions for the same set of model parameters. We also develop a set of criteria to assess the quality of the two models with respect to their ability to describe current experimental data. Funded by the Duke University Goldman Sachs Research Fellowship.

Modeling the suppression of sea lamprey populations by use of the male sex pheromone

USGS Publications Warehouse

Klassen, Waldemar; Adams, Jean V.; Twohey, Michael B.

2005-01-01

The suppression of sea lamprey populations, Petromyzon marinus (Linnaeus), was modeled using four different applications of the male sex pheromone: (1) pheromone-baited traps that remove females from the spawning population, (2) pheromone-baited decoys that exhaust females before they are able to spawn, (3) pheromone-enhanced sterile males that increase the proportion of non-fertile matings, and (4) camouflaging of the pheromone emitted by calling males to make it difficult for females to find a mate. The models indicated that thousands of traps or hundreds of thousands of decoys would be required to suppress a population of 100,000 animals. The potential efficacy of pheromone camouflage is largely unknown, and additional research is required to estimate how much pheromone is needed to camouflage the pheromone plumes of calling males. Pheromone-enhanced sterile males appear to be a promising application in the Great Lakes. Using this technique for three generations, each of ca. 7 years duration, could reduce sea lamprey populations by 90% for Lakes Huron and Ontario and by 98% for Lake Michigan, based on current trapping operations that capture 20 to 30% of the population each year.
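The sterile-male arithmetic can be mimicked with a toy generational model. The proportional-mating assumption and the pheromone-enhancement factor below are illustrative choices tuned to echo the reported ~90% figure; they are not the authors' actual model or fitted parameters.

```python
# Toy model of pheromone-enhanced sterile-male suppression: each generation,
# fertile matings fall in proportion to the (enhancement-weighted) share of
# sterile males in the mating pool. Trap fraction and the three ~7-year
# generations follow the abstract; the enhancement factor is assumed.
def suppress(pop, trap_frac=0.25, enhancement=3.5, generations=3):
    history = [pop]
    for _ in range(generations):
        sterile = trap_frac               # trapped males, sterilized, re-released
        fertile = 1.0 - trap_frac
        fertile_matings = fertile / (fertile + enhancement * sterile)
        pop *= fertile_matings            # next generation scales with fertile matings
        history.append(pop)
    return history

print(suppress(100_000))   # ends near 10,000, i.e. ~90% reduction
```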
Methods and materials for locating and studying spotted owls

Treesearch

Forsman, Eric D.

1983-01-01

Nocturnal calling surveys are the most effective and most frequently used technique for locating spotted owls. Roosts and general nest locations may be located during the day by calling in suspected roost or nest areas. Specific nest trees are located by: (1) baiting with a live mouse to induce owls to visit the nest, (2) calling in suspected nest areas to stimulate…

Using an instrumented manikin for Space Station Freedom analysis

NASA Technical Reports Server (NTRS)

Orr, Linda; Hill, Richard

1989-01-01

One of the most intriguing and complex areas of current computer graphics research is animating human figures to behave in a realistic manner. Believable, accurate human models are desirable for many everyday uses, including industrial and architectural design, medical applications, and human factors evaluations. For zero-gravity (0-g) spacecraft design and mission planning scenarios, they are particularly valuable, since 0-g conditions are difficult to simulate in a one-gravity Earth environment. At NASA/JSC, an in-house human modeling package called PLAID is currently being used to produce animations for human factors evaluation of Space Station Freedom design issues. Presented here is an introductory background discussion of problems encountered in existing techniques for animating human models and how an instrumented manikin can help improve the realism of these models.

Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

NASA Astrophysics Data System (ADS)

Dore, C.; Murphy, M.

2013-02-01

This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM).

  Methods and materials for locating and studying spotted owls

    Treesearch

    Eric D. Forsman

    1983-01-01

    Nocturnal calling surveys are the most effective and most frequently used technique for locating spotted owls. Roosts and general nest locations may be located during the day by calling in suspected roost or nest areas. Specific nest trees are located by: (1) baiting with a live mouse to induce owls to visit the nest, (2) calling in suspected nest areas to stimulate...

  Using an instrumented manikin for Space Station Freedom analysis

    NASA Technical Reports Server (NTRS)

    Orr, Linda; Hill, Richard

    1989-01-01

    One of the most intriguing and complex areas of current computer graphics research is animating human figures to behave in a realistic manner. Believable, accurate human models are desirable for many everyday uses, including industrial and architectural design, medical applications, and human factors evaluations. For zero-gravity (0-g) spacecraft design and mission planning scenarios, they are particularly valuable, since 0-g conditions are difficult to simulate in a one-gravity Earth environment. At NASA/JSC, an in-house human modeling package called PLAID is currently being used to produce animations for human factors evaluation of Space Station Freedom design issues. Presented here is an introductory background discussion of problems encountered in existing techniques for animating human models and of how an instrumented manikin can help improve the realism of these models.

  Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can subsequently be refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed, conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
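
    The procedural step (combining parametric objects according to architectural rules and proportions) can be illustrated outside of GDL. Below is a minimal Python sketch with invented object names and proportion rules that lays out window objects on a façade grid; in the actual HBIM workflow these objects and rules live in ArchiCAD's GDL.

        from dataclasses import dataclass

        @dataclass
        class Window:            # stand-in for a parametric library object
            x: float             # lower-left corner position (m)
            y: float
            width: float
            height: float

        def facade(width_m, storeys, bays, storey_height=3.5, window_ratio=0.45):
            """Place one window per bay per storey using simple proportion rules."""
            bay_w = width_m / bays
            win_w = window_ratio * bay_w
            win_h = 0.55 * storey_height             # assumed proportion rule
            objects = []
            for s in range(storeys):
                for b in range(bays):
                    objects.append(Window(
                        x=b * bay_w + (bay_w - win_w) / 2,   # centred in the bay
                        y=s * storey_height + 0.9,           # assumed sill height
                        width=win_w, height=win_h))
            return objects

        for w in facade(width_m=18, storeys=3, bays=5):
            print(w)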

  A parametric model and estimation techniques for the inharmonicity and tuning of the piano

    PubMed

    Rigaud, François; David, Bertrand; Daudet, Laurent

    2013-05-01

    Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. It is proposed in this paper to jointly model the inharmonicity and tuning of pianos across the whole compass. While using a small number of parameters, these models are able to reflect both the specificities of instrument design and the tuner's practice. An estimation algorithm is derived that can run either on a set of isolated note recordings or on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting tuners' choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.
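
    For reference, the standard single-note inharmonicity model behind such work predicts partial frequencies f_n = n * f0 * sqrt(1 + B * n^2) for an inharmonicity coefficient B. The sketch below uses this textbook model (with invented f0 and B values, not the paper's whole-compass parametric model) to show why octaves get stretched: a beat-free octave is tuned to the lower note's second partial, which lies sharp of 2 * f0.

        import math

        def partial(n, f0, B):
            """Frequency of the n-th partial of a stiff string (textbook model)."""
            return n * f0 * math.sqrt(1.0 + B * n * n)

        f0, B = 220.0, 4e-4          # invented values for one piano note
        second_partial = partial(2, f0, B)
        pure_octave = 2 * f0

        print(f"2nd partial: {second_partial:.2f} Hz vs pure octave {pure_octave:.2f} Hz")
        # The upper note of a beat-free octave is stretched sharp by (in cents):
        cents = 1200 * math.log2(second_partial / pure_octave)
        print(f"octave stretch: {cents:.2f} cents")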

  MTK: An AI tool for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Erickson, William K.; Rudokas, Mary R.

    1988-01-01

    A 1988 goal for the Systems Autonomy Demonstration Project Office of NASA Ames Research Center is to apply model-based representation and reasoning techniques in a knowledge-based system that will provide monitoring, fault diagnosis, control, and trend analysis of the Space Station Thermal Control System (TCS). A number of issues raised during the development of the first prototype system inspired the design and construction of a model-based reasoning tool called MTK, which was used in the building of the second prototype. These issues are outlined here, with examples from the thermal system to highlight the motivating factors behind them, followed by an overview of the capabilities of MTK, which was developed to address these issues in a generic fashion.

  Linear approximations of nonlinear systems

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Su, R.

    1983-01-01

    The development of a method for designing an automatic flight controller for short and vertical takeoff aircraft is discussed. This technique involves transformations of nonlinear systems to controllable linear systems and takes into account the nonlinearities of the aircraft. In general, the transformations cannot always be given in closed form. Using partial differential equations, an approximate linear system called the modified tangent model was introduced. A linear transformation of this tangent model to Brunovsky canonical form can be constructed, and from this the linear part (about a state space point x_0) of an exact transformation for the nonlinear system can be found. It is shown that a canonical expansion in Lie brackets about the point x_0 yields the same modified tangent model.
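
    The tangent (Jacobian) linearization that underlies such constructions is easy to sketch numerically. Below, a minimal example builds the linear model x_dot ~ f(x_0, u_0) + A (x - x_0) + B (u - u_0) about an operating point by central differences; the dynamics function is an invented toy, not the aircraft model from the paper.

        import numpy as np

        def f(x, u):
            """Toy nonlinear dynamics: a pendulum with torque input."""
            theta, omega = x
            return np.array([omega, -9.81 * np.sin(theta) + u[0]])

        def tangent_model(f, x0, u0, eps=1e-6):
            """Jacobians A = df/dx, B = df/du at (x0, u0) by central differences."""
            n, m = len(x0), len(u0)
            A = np.zeros((n, n))
            B = np.zeros((n, m))
            for i in range(n):
                dx = np.zeros(n); dx[i] = eps
                A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
            for j in range(m):
                du = np.zeros(m); du[j] = eps
                B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
            return A, B

        A, B = tangent_model(f, x0=np.array([0.1, 0.0]), u0=np.array([0.0]))
        print("A =\n", A, "\nB =\n", B)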

  Calling behavior of blue and fin whales off California

    NASA Astrophysics Data System (ADS)

    Oleson, Erin Marie

    Passive acoustic monitoring is an effective means for evaluating cetacean presence in remote regions and over long time periods, and may become an important component of cetacean abundance surveys. To use passive acoustic recordings for abundance estimation, an understanding of the behavioral ecology of cetacean calling is crucial. In this dissertation, I develop a better understanding of how blue (Balaenoptera musculus) and fin (B. physalus) whales use sound, with the goal of evaluating passive acoustic techniques for studying their populations. Both blue and fin whales produce several different call types, though the behavioral and environmental context of these calls has not been widely investigated. To better understand how calling is used by these whales off California, I have employed both new technologies and traditional techniques, including acoustic recording tags, continuous long-term autonomous acoustic recordings, and simultaneous shipboard acoustic and visual surveys. The outcome of these investigations has led to several conclusions. The production of blue whale calls varies with sex, behavior, season, location, and time of day. Each blue whale call type has a distinct behavioral context, including a male-only bias in the production of song, a call type thought to function in reproduction, and the production of some calls by both sexes. Long-term acoustic records, when interpreted using all call types, provide a more accurate measure of the local seasonal presence of whales and of how they use the region annually, seasonally and daily. The relative occurrence of different call types may indicate prime foraging habitat and the presence of different segments of the population. The proportion of animals heard calling changes seasonally and geographically relative to the number seen, indicating that the calibration of acoustic and visual surveys is complex and requires further study on the motivations behind call production and the behavior of calling whales. These findings will play a role in the future development of acoustic census methods and habitat studies for these species, and will provide baseline information for the determination of anthropogenic impacts on these populations.

  Anchor Modeling

    NASA Astrophysics Data System (ADS)

    Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

    Maintaining and evolving data warehouses is a complex, error-prone, and time-consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e., existing views and functions will not have to be modified as a result of changes in the warehouse model.

  An experimental model for the study of cognitive disorders: the hippocampus and associative learning in mice

    PubMed

    Delgado-García, José M; Gruart, Agnès

    2008-12-01

    The availability of transgenic mice mimicking selective human neurodegenerative and psychiatric disorders calls for new electrophysiological and microstimulation techniques capable of being applied in vivo in this species. In this article, we concentrate on experiments and techniques developed in our laboratory during the past few years. We have developed different techniques for the study of the learning and memory capabilities of wild-type and transgenic mice with deficits in cognitive functions, using classical conditioning procedures. These techniques include different trace (tone/SHOCK and shock/SHOCK) conditioning procedures, that is, classical conditioning tasks involving the cerebral cortex, including the hippocampus. We have also developed implantation and recording techniques for evoking long-term potentiation (LTP) in behaving mice and for recording the evolution of field excitatory postsynaptic potentials (fEPSPs) evoked in the hippocampal CA1 area by electrical stimulation of the commissural/Schaffer collateral pathway across conditioning sessions. Computer programs have also been developed to quantify the appearance and evolution of eyelid conditioned responses and the slope of evoked fEPSPs. According to the present results, the in vivo recording of the electrical activity of selected hippocampal sites during classical conditioning of eyelid responses appears to be a suitable experimental procedure for studying learning capabilities in genetically modified mice, and an excellent model for the study of selected neuropsychiatric disorders compromising cerebral cortex functioning.

  Self-consistent modelling of line-driven hot-star winds with Monte Carlo radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Noebauer, U. M.; Sim, S. A.

    2015-11-01

    Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multiline effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. Here, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach are demonstrated by comparing the obtained results with the predictions of various formulations of the so-called CAK theory and by confronting the calculations with modern sophisticated techniques of predicting the wind structure. Using these calculations, we also point out some useful diagnostic capabilities our approach provides. Finally, we discuss some of the current limitations of our method, some possible extensions and potential future applications.
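
    As context for the CAK comparison mentioned above: in CAK theory the line force is the electron-scattering radiative acceleration multiplied by a force multiplier M(t) = k * t^(-alpha), where t is the Sobolev optical-depth parameter. The sketch below evaluates this line acceleration along an assumed beta-law velocity profile; k, alpha, and the stellar and wind parameters are typical illustrative values, and this shows the ingredients of the theory, not the authors' Monte Carlo scheme.

        import numpy as np

        # Illustrative O-star numbers (assumed, not from the paper).
        L = 1e5 * 3.828e26        # luminosity (W)
        M_star = 40 * 1.989e30    # stellar mass (kg)
        R = 15 * 6.957e8          # stellar radius (m)
        c = 2.998e8
        kappa_e = 0.034           # electron-scattering opacity (m^2/kg)
        v_th = 2.0e4              # ion thermal speed (m/s)
        k_cak, alpha = 0.3, 0.6   # CAK force-multiplier parameters (assumed)

        Mdot = 1e-6 * 1.989e30 / 3.156e7   # assumed mass-loss rate, 1e-6 Msun/yr
        v_inf, beta = 2.0e6, 0.8           # assumed beta-law wind parameters

        r = np.linspace(1.02 * R, 10 * R, 200)
        v = v_inf * (1 - R / r) ** beta
        dvdr = v_inf * beta * (1 - R / r) ** (beta - 1) * R / r ** 2
        rho = Mdot / (4 * np.pi * r ** 2 * v)          # mass continuity

        g_e = kappa_e * L / (4 * np.pi * r ** 2 * c)   # Thomson acceleration
        t = kappa_e * rho * v_th / dvdr                # Sobolev depth parameter
        g_line = g_e * k_cak * t ** (-alpha)           # CAK line force

        print("max line force / gravity:",
              np.max(g_line / (6.674e-11 * M_star / r ** 2)))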

  Rapid classification of heavy metal-exposed freshwater bacteria by infrared spectroscopy coupled with chemometrics using supervised method

    NASA Astrophysics Data System (ADS)

    Gurbanov, Rafig; Gozen, Ayse Gul; Severcan, Feride

    2018-01-01

    Rapid, cost-effective, sensitive and accurate methodologies to classify bacteria are still in the process of development. The major drawbacks of standard microbiological, molecular and immunological techniques call for the possible usage of infrared (IR) spectroscopy based supervised chemometric techniques. Previous applications of IR based chemometric methods have demonstrated outstanding findings in the classification of bacteria. Therefore, we have exploited, for the first time, an IR spectroscopy based supervised chemometric method, namely Soft Independent Modeling of Class Analogy (SIMCA), to classify heavy metal-exposed bacteria, to be used in the selection of suitable bacteria for evaluating their potential in environmental cleanup applications. Herein, we present the powerful differentiation and classification of laboratory strains (Escherichia coli and Staphylococcus aureus) and environmental isolates (Gordonia sp. and Microbacterium oxydans) of bacteria exposed to growth-inhibitory concentrations of silver (Ag), cadmium (Cd) and lead (Pb). Our results demonstrated that SIMCA was able to differentiate all heavy metal-exposed and control groups from each other at the 95% confidence level. Correct identification of randomly chosen test samples in their corresponding groups and high model distances between the classes were also achieved. We report, for the first time, the success of IR spectroscopy coupled with the supervised chemometric technique SIMCA in the classification of different bacteria under a given treatment.
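
    SIMCA builds one principal-component model per class and assigns a test spectrum by how well each class model reconstructs it. A minimal version with scikit-learn, using invented random data standing in for IR spectra, looks like this:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)

        # Invented stand-in data: two classes of "spectra", 40 samples x 100 channels.
        wav = np.linspace(0, 6, 100)
        classes = {
            "control":    rng.normal(0, 0.3, (40, 100)) + np.sin(wav),
            "Cd-exposed": rng.normal(0, 0.3, (40, 100)) + np.cos(wav),
        }

        # The core of SIMCA: one PCA model per class.
        models = {name: PCA(n_components=5).fit(X) for name, X in classes.items()}

        def classify(x):
            """Assign x to the class whose PCA model reconstructs it best.

            (Full SIMCA also uses score distances and per-class critical
            limits; this sketch keeps only the residual distance.)
            """
            residual = {}
            for name, pca in models.items():
                recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
                residual[name] = np.linalg.norm(x - recon.ravel())
            return min(residual, key=residual.get)

        print(classify(classes["Cd-exposed"][0]))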

  AI techniques in geomagnetic storm forecasting

    NASA Astrophysics Data System (ADS)

    Lundstedt, Henrik

    This review deals with how geomagnetic storms can be predicted with the use of Artificial Intelligence (AI) techniques. Today many different AI techniques have been developed, such as symbolic systems (expert and fuzzy systems) and connectionist systems (neural networks). Integrations of AI techniques also exist, so-called Intelligent Hybrid Systems (IHS). These systems are capable of learning the mathematical functions underlying the operation of non-linear dynamic systems and also of explaining the knowledge they have learned. Very few such powerful systems exist at present; two examples are the Magnetospheric Specification Forecast Model of Rice University and the Lund Space Weather Model of Lund University. Various attempts to predict geomagnetic storms on long to short time scales are reviewed in this article. Predictions a month to days ahead most often use solar data as input. The first SOHO data are now available; due to their high temporal and spatial resolution, new solar physics has been revealed, and these SOHO data might lead to a breakthrough in such predictions. Predictions hours ahead and shorter rely on real-time solar wind data. WIND gives us real-time data for only part of the day. However, with the launch of the ACE spacecraft in 1997, real-time data around the clock will be available. That might lead to the second breakthrough for predictions of geomagnetic storms.

  Fast summation of divergent series and resurgent transseries from Meijer-G approximants

    NASA Astrophysics Data System (ADS)

    Mera, Héctor; Pedersen, Thomas G.; Nikolić, Branislav K.

    2018-05-01

    We develop a resummation approach based on Meijer-G functions and apply it to approximate the Borel sum of divergent series and the Borel-Écalle sum of resurgent transseries in quantum mechanics and quantum field theory (QFT). The proposed method is shown to vastly outperform the conventional Borel-Padé and Borel-Padé-Écalle summation methods. The resulting Meijer-G approximants are easily parametrized by means of a hypergeometric ansatz and can be thought of as a generalization to arbitrary order of the Borel-hypergeometric method [Mera et al., Phys. Rev. Lett. 115, 143001 (2015), 10.1103/PhysRevLett.115.143001]. Here we demonstrate the accuracy of this technique in various examples from quantum mechanics and QFT, traditionally employed as benchmark models for resummation, such as zero-dimensional ϕ4 theory; the quartic anharmonic oscillator; the calculation of critical exponents for the N-vector model; ϕ4 with degenerate minima; self-interacting QFT in zero dimensions; and the summation of one- and two-instanton contributions in the quantum-mechanical double-well problem.
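
    For contrast with the Meijer-G construction, the conventional Borel-Padé baseline mentioned above fits in a dozen lines of mpmath. The example resums the divergent weak-coupling series of the zero-dimensional phi^4 partition function Z(g) = (2 pi)^(-1/2) * integral over x of exp(-x^2/2 - g x^4), whose Taylor coefficients are c_n = (-1)^n (4n-1)!!/n!; this is the standard benchmark method the paper compares against, not the authors' Meijer-G algorithm.

        import mpmath as mp

        mp.mp.dps = 30
        g, N = mp.mpf("0.1"), 12    # coupling and truncation order

        # Coefficients of the divergent series Z(g) = sum_n c_n g^n.
        c = [mp.mpf(1)]
        for n in range(1, N + 1):
            c.append((-1) ** n * mp.fac2(4 * n - 1) / mp.fac(n))

        # Borel transform B(t) = sum_n c_n t^n / n!, then a diagonal Pade fit.
        b = [c[n] / mp.fac(n) for n in range(N + 1)]
        p, q = mp.pade(b, N // 2, N // 2)
        B = lambda t: mp.polyval(p[::-1], t) / mp.polyval(q[::-1], t)

        # Laplace integral: Z(g) = int_0^inf exp(-u) B(u*g) du.
        Z_borel = mp.quad(lambda u: mp.exp(-u) * B(u * g), [0, mp.inf])

        # Reference value by direct numerical integration.
        Z_exact = mp.quad(lambda x: mp.exp(-x**2 / 2 - g * x**4),
                          [-mp.inf, mp.inf]) / mp.sqrt(2 * mp.pi)

        print("Borel-Pade:", mp.nstr(Z_borel, 10))
        print("exact:     ", mp.nstr(Z_exact, 10))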

  A real-time computer model to assess resident work-hours scenarios

    PubMed

    McDonald, Furman S; Ramakrishna, Gautam; Schultz, Henry J

    2002-07-01

    To accurately model residents' work hours and assess options to forthrightly meet Residency Review Committee-Internal Medicine (RRC-IM) requirements. The requirements limiting residents' work hours are clearly defined by the Accreditation Council for Graduate Medical Education (ACGME) and the RRC-IM: "When averaged over any four-week rotation or assignment, residents must not spend more than 80 hours per week in patient care duties."(1) The call for the profession to realistically address work-hours violations is of paramount importance.(2) Unfortunately, work hours are hard to calculate. We developed an electronic model of residents' work-hours scenarios using Microsoft Excel 97. This model allows the input of multiple parameters (i.e., call frequency, call position, days off, short-call, weeks per rotation, outpatient weeks, clinic day of the week, additional time due to clinic) and start and stop times for post-call, non-call, short-call, and weekend days. For each resident on a rotation, the model graphically demonstrates call schedules, plots clinic days, and portrays all possible and preferred days off. We tested the model for accuracy in several scenarios. For example, the model predicted average work hours of 85.1 hours per week for fourth-night-call rotations, compared with logs of actual work hours of 84.6 hours per week; model accuracy for this scenario was 99.4% (95% CI 96.2%-100%). The model prospectively predicted work hours of 89.9 hours per week in the cardiac intensive care unit (CCU); subsequent surveys found mean CCU work hours of 88.1 hours per week, for a model accuracy of 98% (95% CI 93.2%-100%). Thus validated, we then used the model to test proposed scenarios for complying with RRC-IM limits. The flexibility of the model allowed demonstration of the full range of work-hours scenarios in every rotation of our 36-month program. Demonstrations of status-quo work-hours scenarios were presented to faculty, as well as real-time demonstrations of the feasibility, or unfeasibility, of their proposed solutions. The model clearly demonstrated that non-call (i.e., short-call) admissions without concomitant decreases in overnight call frequency resulted in substantial increases in total work hours. Attempts to "get the resident out" an hour or two earlier each day had negligible effects on total hours and were unrealistic paper solutions. For fourth-night-call rotations, the addition of a "golden weekend" (i.e., a fifth day off per month) was found to significantly reduce work hours. The electronic model allowed the development of creative schedules for previously third-night-call rotations that limit resident work hours without decreasing continuity of care, by scheduling overnight call every sixth night alternating with sixth-night short-call. Our electronic model is sufficiently robust to accurately estimate work hours on multiple and varied rotations. It clearly demonstrates that it is very difficult to meet the RRC-IM work-hours limitations under standard fourth-night-call schedules with only four days off per month. We are successfully using the model to test proposed alternative scenarios, to overcome faculty misconceptions about resident work-hours "solutions," and to make changes to our call schedules that both are realistic for residents to accomplish and truly diminish total resident work hours toward the requirements of the RRC-IM.
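
    The heart of such a model is bookkeeping: sum duty hours over a call cycle and average over the four-week block. A minimal sketch with invented shift times (every-fourth-night call, no short-call or clinic corrections) shows the kind of arithmetic involved:

        # Invented q4 call-cycle shift lengths, in hours:
        #   call day:      06:30 until overnight, handed off 13:00 next day -> 30.5 h
        #   post-call day: already counted inside the 30.5 h above
        #   two non-call days: ordinary 06:30-17:30 days -> 11 h each
        hours_per_cycle = 30.5 + 11.0 + 11.0
        cycle_days = 4

        block_days = 28                 # RRC-IM averaging window (four weeks)
        days_off = 4                    # one day off per week
        worked_days = block_days - days_off

        cycles_in_block = worked_days / cycle_days
        weekly_average = cycles_in_block * hours_per_cycle / (block_days / 7)
        print(f"average: {weekly_average:.1f} hours/week")   # vs the 80-hour limit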

  Computing chemical organizations in biological networks

    PubMed

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states, including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows the model's quality to be evaluated, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
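
    The "algebraically closed" half of the organization definition is easy to sketch: starting from a species set, keep adding the products of every reaction whose reactants are already present until nothing changes. The sketch below implements just this closure step on an invented toy network; the self-maintenance test (finding a strictly positive flux vector, typically via linear programming) is omitted.

        def closure(species, reactions):
            """Smallest closed species set containing `species`.

            reactions: iterable of (reactants, products) pairs of sets.
            """
            closed = set(species)
            changed = True
            while changed:
                changed = False
                for reactants, products in reactions:
                    if reactants <= closed and not products <= closed:
                        closed |= products
                        changed = True
            return closed

        # Invented toy network: a + b -> c, c -> a + d, d -> (decay)
        reactions = [({"a", "b"}, {"c"}),
                     ({"c"}, {"a", "d"}),
                     ({"d"}, set())]

        print(sorted(closure({"a", "b"}, reactions)))   # -> ['a', 'b', 'c', 'd']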

  CNN: a speaker recognition system using a cascaded neural network

    PubMed

    Zaki, M; Ghalwash, A; Elkouny, A A

    1996-05-01

    The main emphasis of this paper is to present an approach for combining supervised and unsupervised neural network models for speaker recognition. To enhance the overall operation and performance of recognition, the proposed strategy integrates the two techniques, forming one global model called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for different speakers in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and the neural network approach. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests that were conducted, and in order to enhance the performance of this model on noisy patterns, we have preceded it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes both the design and implementation of the conventional and neural network approaches to recognize the speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in recognition.
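
    The conventional distance metric described above, which down-weights directions of large intraspeaker variance, is essentially a Mahalanobis distance built from the within-speaker covariance. A minimal numpy sketch with invented feature vectors:

        import numpy as np

        rng = np.random.default_rng(2)

        # Invented enrollment data: 2-D features for 3 speakers, 50 utterances each.
        speakers = {s: rng.normal(loc=s, scale=[1.0, 0.2], size=(50, 2))
                    for s in range(3)}

        # Pooled within-speaker covariance: directions in which the SAME
        # speaker varies a lot should count less in the distance.
        W = np.mean([np.cov(X.T) for X in speakers.values()], axis=0)
        W_inv = np.linalg.inv(W)
        refs = {s: X.mean(axis=0) for s, X in speakers.items()}

        def identify(x):
            """Nearest reference under the within-variance-weighted metric."""
            def d2(mu):
                diff = x - mu
                return diff @ W_inv @ diff
            return min(refs, key=lambda s: d2(refs[s]))

        test = rng.normal(loc=1, scale=[1.0, 0.2], size=2)
        print("identified speaker:", identify(test))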
    The conclusion indicates that the performance of the neural network system is better than that of the conventional one, achieving graceful degradation on noisy patterns and higher performance on noise-free patterns.

  An Electrochemical Impedance Spectroscopy System for Monitoring Pineapple Waste Saccharification

    PubMed

    Conesa, Claudia; Ibáñez Civera, Javier; Seguí, Lucía; Fito, Pedro; Laguarda-Miró, Nicolás

    2016-02-04

    Electrochemical impedance spectroscopy (EIS) has been used for monitoring the enzymatic pineapple waste hydrolysis process. The system employed consists of a device called the Advanced Voltammetry, Impedance Spectroscopy & Potentiometry Analyzer (AVISPA), equipped with a specific software application and a stainless steel double-needle electrode. EIS measurements were conducted at different saccharification time intervals: 0, 0.75, 1.5, 6, 12 and 24 h. Partial least squares (PLS) regression was used to model the relationship between the EIS measurements and the sugar determination by HPAEC-PAD. Artificial neural networks (multilayer feed-forward architecture with a quick-propagation training algorithm and logistic transfer functions) gave the best results as predictive models for glucose, fructose, sucrose and total sugars. Coefficients of determination (R²) and root mean square errors of prediction (RMSEP) were determined as R² > 0.944 and RMSEP < 1.782 for PLS, and R² > 0.973 and RMSEP < 0.486 for the artificial neural networks (ANNs), respectively. Therefore, a combination of an EIS-based technique and ANN models is suggested as a promising alternative to traditional laboratory techniques for monitoring the pineapple waste saccharification step.
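
    Fitting such a PLS calibration and reporting R^2 and RMSEP takes a few lines with scikit-learn. The sketch below uses invented impedance-spectrum features and sugar concentrations, not the paper's data:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error, r2_score

        rng = np.random.default_rng(3)

        # Invented data: 80 samples x 50 impedance features -> total sugar (g/L).
        X = rng.normal(size=(80, 50))
        y = X[:, :5].sum(axis=1) * 2.0 + rng.normal(scale=0.5, size=80)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
        y_hat = pls.predict(X_te).ravel()

        rmsep = mean_squared_error(y_te, y_hat) ** 0.5
        print(f"R^2 = {r2_score(y_te, y_hat):.3f}, RMSEP = {rmsep:.3f}")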

  Interactive lesion segmentation with shape priors from offline and online learning

    PubMed

    Shepherd, Tony; Prince, Simon J D; Alexander, Daniel C

    2012-09-01

    In medical image segmentation, tumors and other lesions demand the highest levels of accuracy but still call for the highest levels of manual delineation. One factor holding back automatic segmentation is the exemption of pathological regions from shape modelling techniques that rely on high-level shape information not offered by lesions. This paper introduces two new statistical shape models (SSMs) that combine radial shape parameterization with machine learning techniques from the field of nonlinear time series analysis. We then develop two dynamic contour models (DCMs) using the new SSMs as shape priors for tumor and lesion segmentation. From training data, the SSMs learn the lower-level shape information of boundary fluctuations, which we prove to be nevertheless highly discriminant. One of the new DCMs also uses online learning to refine the shape prior for the lesion of interest based on user interactions. Classification experiments reveal superior sensitivity and specificity of the new shape priors over those previously used to constrain DCMs. User trials with the new interactive algorithms show that the shape priors are directly responsible for improvements in accuracy and reductions in user demand.

  Modeling the chemistry of complex petroleum mixtures

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks, with tens of thousands of specific reactions, for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
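
    The structure-oriented lumping idea of representing a molecule as a vector of structural-group counts can be sketched directly; the group names and increments below are invented stand-ins for the actual SOL vocabulary, but they show how reactions become simple vector arithmetic.

        from collections import Counter

        # Invented structural groups: A6 = aromatic 6-ring, N6 = naphthenic
        # 6-ring, me = methyl branch.
        def molecule(**groups):
            return Counter(groups)

        toluene = molecule(A6=1, me=1)
        methylcyclohexane = molecule(N6=1, me=1)

        # A "reaction" is a group-increment vector applied to a molecule vector.
        def react(mol, delta):
            out = mol + Counter()        # copy positive counts
            out.update(delta)            # add the (possibly negative) increments
            return +out                  # drop zero/negative counts

        # Aromatic saturation: an A6 ring becomes an N6 ring (hydrogenation).
        saturation = Counter(A6=-1, N6=1)

        print(react(toluene, saturation) == methylcyclohexane)   # True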

  Strategies for Effective On-Call Supervision for Internal Medicine Residents: The SUPERB/SAFETY Model

    PubMed Central

    Farnan, Jeanne M.; Johnson, Julie K.; Meltzer, David O.; Harris, Ilene; Humphrey, Holly J.; Schwartz, Alan; Arora, Vineet M.

    2010-01-01

    Background: Supervision is central to resident education and patient safety, yet there is little published evidence to describe a framework for clinical supervision. The aim of this study was to describe supervision strategies for on-call internal medicine residents. Methods: Between January and November 2006, internal medicine residents and attending physicians at a single hospital were interviewed within 1 week of their final call on the general medicine rotation. Appreciative inquiry and the critical incident technique were used to elicit perspectives on ideal and suboptimal supervision practices. A representative portion of the transcripts was analyzed using an inductive approach to develop a coding scheme that was then applied to the entire set of transcripts. All discrepancies were resolved via discussion until consensus was achieved. Results: Forty-four of 50 (88%) attending physicians and 46 of 50 (92%) eligible residents completed an interview. Qualitative analysis revealed a bidirectional model of suggested supervisory strategies, the "SUPERB/SAFETY" model; an interrater reliability of 0.70 was achieved. Suggestions for attending physicians providing supervision included setting expectations, recognizing uncertainty, planning communication, being easily available, reassuring residents, balancing supervision, and preserving autonomy. Suggested resident strategies for seeking supervision included seeking input early and contacting the attending for active clinical decisions, feelings of uncertainty, end-of-life issues, transitions in care, or help with systems issues. Common themes suggested by trainees and attending physicians included easy availability and preservation of resident decision-making autonomy. Discussion: Residents and attending physicians have explicit expectations for optimal supervision. The SUPERB/SAFETY model of supervision may be an effective resource to enhance the clinical supervision of residents. PMID:21975883

  Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show that linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis; computational methods and considerations therefore form an integral part of many of the discussions. A user's guide to the program is included. SVA is a fully public-domain program, running on the NASA Dryden Elxsi computer.
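
    For a simple (non-repeated) singular value sigma_i of a real matrix M(p) with singular vectors u_i and v_i, first-order perturbation theory gives d(sigma_i)/dp = u_i^T (dM/dp) v_i, which is the workhorse of such sensitivity analyses. A minimal numpy check against direct finite differences, on an invented parameterized matrix:

        import numpy as np

        def M(p):
            """Invented parameterized matrix standing in for a return difference."""
            return np.array([[1.0 + p, 2.0],
                             [0.5 * p ** 2, 3.0 - p]])

        p0, eps = 0.7, 1e-6
        U, s, Vt = np.linalg.svd(M(p0))

        dM = (M(p0 + eps) - M(p0 - eps)) / (2 * eps)   # finite-difference dM/dp

        # Analytic gradient of each singular value: u_i^T (dM/dp) v_i.
        grad_analytic = [U[:, i] @ dM @ Vt[i, :] for i in range(len(s))]

        # Direct finite difference of the singular values, for comparison.
        s_plus = np.linalg.svd(M(p0 + eps), compute_uv=False)
        s_minus = np.linalg.svd(M(p0 - eps), compute_uv=False)
        grad_fd = (s_plus - s_minus) / (2 * eps)

        print("analytic:         ", np.round(grad_analytic, 6))
        print("finite difference:", np.round(grad_fd, 6))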

  Terrestrial Radiodetermination Performance and Cost

    DOT National Transportation Integrated Search

    1977-09-01

    The report summarizes information gathered during a study of the application of electronic techniques to geographical position determination on land and on inland waterways. Systems incorporating such techniques have been called terrestrial radiodetermination...

  Dynamic optimization of distributed biological systems using robust and efficient numerical techniques

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems, together with the presence of constraints on the optimization problems, imposes a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles; results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
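
    Control vector parameterization reduces the infinite-dimensional control problem to a finite one: the control is taken, for example, piecewise constant on a time grid, and a standard NLP solver optimizes the levels. A self-contained sketch for an invented scalar system (drive x(t) to a setpoint while penalizing control effort), much simpler than the PDE systems in the paper:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        T, n_seg = 4.0, 8                      # horizon, number of control segments
        edges = np.linspace(0.0, T, n_seg + 1)

        def simulate(u_levels):
            """Integrate x' = -x + u(t) with piecewise-constant u."""
            def rhs(t, x):
                k = min(np.searchsorted(edges, t, side="right") - 1, n_seg - 1)
                return -x[0] + u_levels[k]
            return solve_ivp(rhs, (0.0, T), [0.0], dense_output=True, max_step=0.05)

        def objective(u_levels):
            sol = simulate(u_levels)
            ts = np.linspace(0.0, T, 200)
            x = sol.sol(ts)[0]
            track = np.trapz((x - 1.0) ** 2, ts)     # reach the setpoint x = 1
            effort = 1e-2 * np.sum(u_levels ** 2)    # mild control penalty
            return track + effort

        res = minimize(objective, x0=np.zeros(n_seg),
                       bounds=[(0.0, 2.0)] * n_seg, method="L-BFGS-B")
        print("optimal control levels:", np.round(res.x, 3))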

  eGSM: An Extended Sky Model of Diffuse Radio Emission

    NASA Astrophysics Data System (ADS)

    Kim, Doyeon; Liu, Adrian; Switzer, Eric

    2018-01-01

    Both cosmic microwave background and 21 cm cosmology observations must contend with astrophysical foreground contaminants in the form of diffuse radio emission. For precise cosmological measurements, these foregrounds must be accurately modeled over the entire sky. Ideally, such full-sky models ought to be primarily motivated by observations. Yet in practice these observations are limited, with data sets that are observed not only in a heterogeneous fashion but also over limited frequency ranges. Previously, the Global Sky Model (GSM) took some steps towards solving the problem of incomplete observational data by interpolating over multi-frequency maps using principal component analysis (PCA). In this poster, we present an extended version of the GSM (called eGSM) that includes the following improvements: (1) better zero-level calibration; (2) incorporation of non-uniform survey resolutions and sky coverage; (3) the ability to quantify uncertainties in sky models; and (4) the ability to optimally select spectral models using Bayesian evidence techniques.
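
    The GSM-style trick of interpolating maps across frequency with PCA can be sketched directly: normalize the available maps, decompose them into a few spatial components, fit each component's amplitude as a smooth function of log-frequency, and rebuild the sky at a new frequency. The data below are invented stand-ins for the surveys, and the polynomial fits stand in for the spline/Bayesian machinery of the real models.

        import numpy as np

        rng = np.random.default_rng(4)

        # Invented "surveys": maps at 6 frequencies (rows) x 500 pixels, built
        # from two hidden spatial components with power-law spectra plus noise.
        freqs = np.array([45., 150., 408., 1420., 23000., 33000.])   # MHz
        spatial = rng.normal(size=(2, 500))
        spectra = np.vstack([freqs ** -2.7, freqs ** -2.1]).T        # (6, 2)
        maps = spectra @ spatial + 1e-9 * rng.normal(size=(6, 500))

        # Normalize each frequency map (GSM-style), then PCA via SVD:
        # rows of Vt are spatial templates, U*s their per-frequency amplitudes.
        norms = np.linalg.norm(maps, axis=1)
        U, s, Vt = np.linalg.svd(maps / norms[:, None], full_matrices=False)
        amps, comps = U[:, :2] * s[:2], Vt[:2]

        # Fit the log-normalization and each component amplitude in
        # log-frequency, then rebuild the sky at 300 MHz.
        lf = np.log(freqs)
        norm_fit = np.polyfit(lf, np.log(norms), deg=2)
        amp_fits = [np.polyfit(lf, amps[:, k], deg=2) for k in range(2)]

        lt = np.log(300.0)
        pred = np.exp(np.polyval(norm_fit, lt)) * sum(
            np.polyval(amp_fits[k], lt) * comps[k] for k in range(2))

        print("predicted 300 MHz map (first 5 pixels):", pred[:5])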

  An effective automatic procedure for testing parameter identifiability of HIV/AIDS models

    PubMed

    Saccomani, Maria Pia

    2011-08-01

    Realistic HIV models tend to be rather complex, and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released and is freely available at http://www.dei.unipd.it/~pia/. The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background.

  A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
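
    The point that FEniCS code closely mirrors the mathematical formulation is easiest to see in a minimal example. The following is a standard legacy-DOLFIN Poisson solve, not code from the authors' RANS framework: the bilinear form a and linear form L read almost exactly like the variational statement "find u such that a(u, v) = L(v) for all v".

        from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction,
                            TestFunction, Function, DirichletBC, Constant,
                            Expression, dot, grad, dx, solve)

        mesh = UnitSquareMesh(32, 32)
        V = FunctionSpace(mesh, "P", 1)

        u = TrialFunction(V)
        v = TestFunction(V)
        f = Expression("10*exp(-50*(pow(x[0]-0.5,2) + pow(x[1]-0.5,2)))",
                       degree=2)

        # Variational problem: find u with a(u, v) = L(v) for all v.
        a = dot(grad(u), grad(v)) * dx
        L = f * v * dx

        bc = DirichletBC(V, Constant(0.0), "on_boundary")
        u_h = Function(V)
        solve(a == L, u_h, bc)

        print("max of solution:", u_h.vector().max())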

  Gravitational Wave Detection of Compact Binaries Through Multivariate Analysis

    NASA Astrophysics Data System (ADS)

    Atallah, Dany Victor; Dorrington, Iain; Sutton, Patrick

    2017-01-01

    The first detection of gravitational waves (GW), GW150914, produced by a binary black hole merger, has ushered in the era of GW astronomy. The detection technique used to find GW150914 considered only a fraction of the information available to describe the candidate event: mainly the detector signal-to-noise ratios and chi-squared values. In hopes of greatly increasing detection rates, we want to take advantage of all the information available about candidate events. We employ a technique called multivariate analysis (MVA) to improve LIGO's sensitivity to GW signals. MVA techniques are efficient ways to scan high-dimensional data spaces for signal/noise classification. Our goal is to use MVA to classify compact-object binary coalescence (CBC) events composed of any combination of black holes and neutron stars. CBC waveforms are modeled through numerical relativity, and templates of the modeled waveforms are used to search for CBCs and quantify candidate events. Different MVA pipelines are under investigation to look for CBC signals and un-modelled signals, with promising results. One MVA pipeline used for the un-modelled search can theoretically analyze far more data than the MVA pipelines currently explored for CBCs, potentially making a more powerful classifier. In principle, this extra information could improve the sensitivity to GW signals. We will present the results of our efforts to adapt an MVA pipeline used in the un-modelled search to classify candidate events from the CBC search.
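
    At its core, an MVA detection statistic is a classifier over per-event features (SNR, chi-squared, and anything else available). A minimal scikit-learn sketch with invented signal and noise populations, standing in for the pipelines named above:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)

        # Invented features per candidate: [network SNR, chi^2, bank chi^2].
        n = 2000
        noise = np.column_stack([rng.normal(6.0, 1.5, n),    # glitches: modest SNR,
                                 rng.normal(3.0, 1.0, n),    # poor chi^2 consistency
                                 rng.normal(2.5, 1.0, n)])
        signal = np.column_stack([rng.normal(9.0, 2.5, n),   # injections: higher SNR,
                                  rng.normal(1.0, 0.3, n),   # chi^2 near 1
                                  rng.normal(1.0, 0.3, n)])

        X = np.vstack([noise, signal])
        y = np.r_[np.zeros(n), np.ones(n)]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        score = clf.predict_proba(X_te)[:, 1]    # ranking statistic per event
        print("ROC AUC:", round(roc_auc_score(y_te, score), 3))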

  A review on machine learning principles for multi-view biological data integration

    PubMed

    Li, Yifeng; Wu, Fang-Xiang; Ngom, Alioune

    2018-03-01

    Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in strong need of integrative machine learning models for better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review of omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction and association. We show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views into a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have the potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanisms of biological systems.

  Intelligent Systems Approach for Automated Identification of Individual Control Behavior of a Human Operator

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Results have been obtained using conventional techniques to model the generic human operator's control behavior; however, little research has been done to identify an individual based on control behavior. The hypothesis investigated is that different operators exhibit different control behavior when performing a given control task. Two enhancements to existing human operator models, which allow personalization of the modeled control behavior, are presented. One enhancement accounts for the testing control signals, which are introduced by an operator for more accurate control of the system and/or to adjust the control strategy; it uses an artificial neural network, which can be fine-tuned to model the testing control. The other enhancement takes the form of an equiripple filter which conditions the control system power spectrum. A novel automated parameter identification technique was developed to facilitate the identification of the parameters of the selected models. It utilizes a genetic-algorithm-based optimization engine called the Bit-Climbing Algorithm. The enhancements were validated using experimental data obtained from three different sources: Manual Control Laboratory software experiments, an unmanned aerial vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. This manuscript also addresses applying human operator models to evaluate the effectiveness of motion feedback when simulating actual pilot control behavior in a flight simulator.
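
    A bit-climbing optimizer is a stochastic hill climber over a bit-string encoding of the parameters: flip a bit, keep the flip when the fit improves, revert it otherwise. The sketch below tunes two parameters of an invented first-order model to data; the encoding and objective are illustrative, not those of the cited engine.

        import math
        import random

        random.seed(0)
        BITS = 12                            # bits per encoded parameter

        def decode(bits, lo, hi):
            """Map a bit list to a real value in [lo, hi]."""
            x = int("".join(map(str, bits)), 2)
            return lo + (hi - lo) * x / (2 ** len(bits) - 1)

        # Invented identification task: fit gain K and time constant T of the
        # step response y(t) = K * (1 - exp(-t / T)) to "measured" data.
        data = [(0.1 * i, 2.0 * (1 - math.exp(-0.1 * i / 0.5))) for i in range(30)]

        def sse(genome):
            K = decode(genome[:BITS], 0.0, 5.0)
            T = decode(genome[BITS:], 0.05, 2.0)
            return sum((K * (1 - math.exp(-t / T)) - y) ** 2 for t, y in data)

        genome = [random.randint(0, 1) for _ in range(2 * BITS)]
        best = sse(genome)
        for _ in range(5000):
            i = random.randrange(len(genome))
            genome[i] ^= 1                   # flip one bit
            trial = sse(genome)
            if trial <= best:
                best = trial                 # keep the flip if it does not hurt
            else:
                genome[i] ^= 1               # otherwise revert it

        print("K =", round(decode(genome[:BITS], 0.0, 5.0), 3),
              "T =", round(decode(genome[BITS:], 0.05, 2.0), 3),
              "SSE =", round(best, 6))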

  Modeling and performance analysis using extended fuzzy-timing Petri nets for networked virtual environments

    PubMed

    Zhou, Y; Murata, T; Defanti, T A

    2000-01-01

    Despite their attractive properties, networked virtual environments (net-VEs) are notoriously difficult to design, implement, and test due to the concurrency, real-time, and networking features in these systems. Net-VEs place high quality-of-service (QoS) demands on the network to maintain natural and real-time interactions among users. The current practice for net-VE design is basically empirical trial and error, and it totally lacks formal methods. This paper proposes to apply a Petri net formal modeling technique to a net-VE called NICE (narrative immersive constructionist/collaborative environment), predict the net-VE performance based on simulation, and improve the net-VE performance. NICE is essentially a network of collaborative virtual reality systems called CAVEs (CAVE automatic virtual environment). First, we introduce extended fuzzy-timing Petri net (EFTN) modeling and analysis techniques. Then, we present EFTN models of the CAVE, NICE, and the transport layer protocol used in NICE: the transmission control protocol (TCP). We show a possibility analysis based on the EFTN model of the CAVE. Then, using these models and Design/CPN as the simulation tool, we conducted various simulations to study the real-time behavior, network effects, and performance (latencies and jitters) of NICE. Our simulation results are consistent with experimental data.

  Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool

    PubMed Central

    Staccini, Pascal M.; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of the clinical context while providing elements for analysis. One of the solutions for filling this gap is to give the process model itself the role of a hub, as a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to serve as a navigation tool through the medium of hospital-wide documentation. PMID:12463921
  277. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    PubMed

    Staccini, Pascal M; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate the conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of the clinical context while providing elements for analysis. One of the solutions for filling this gap is to consider the process model itself in the role of a hub, as a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to be used as a navigation tool through the medium of hospital-wide documentation.

  278. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications related to underwater communications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to a more general multi-scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single-scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.
  279. Modelling hourly dissolved oxygen concentration (DO) using dynamic evolving neural-fuzzy inference system (DENFIS)-based approach: case study of Klamath River at Miller Island Boat Ramp, OR, USA.

    PubMed

    Heddam, Salim

    2014-01-01

    In this study, we present the application of an artificial intelligence (AI) technique called the dynamic evolving neural-fuzzy inference system (DENFIS), based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, a one-year period from 1 January 2009 to 30 December 2009 of hourly experimental water quality data collected by the United States Geological Survey (USGS Station No: 420853121505500) station at Klamath River at Miller Island Boat Ramp, OR, USA, was used for model development. Two DENFIS-based models are presented and compared: (1) an offline-based system, named DENFIS-OF, and (2) an online-based system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performances of the models are evaluated using root mean square error (RMSE), mean absolute error (MAE), the Willmott index of agreement (d) and correlation coefficient (CC) statistics. The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with the DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural networks, MLPNN) methods. This study demonstrates that DENFIS-ON outperforms all the other proposed techniques for DO modelling.
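    The four skill scores named in this abstract are standard and compact enough to sketch directly; in the code below, obs and pred stand in for observed and predicted hourly DO series.

      import numpy as np

      def scores(obs, pred):
          """RMSE, MAE, Willmott's index of agreement d, and Pearson correlation."""
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          rmse = np.sqrt(np.mean((pred - obs) ** 2))
          mae = np.mean(np.abs(pred - obs))
          d = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
              (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
          cc = np.corrcoef(obs, pred)[0, 1]
          return {"RMSE": rmse, "MAE": mae, "d": d, "CC": cc}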
  280. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for Brownian motion with dry friction, including quantitative measures to characterize the deviation from Gaussian behavior in the asymptotic long-time limit.
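    For orientation, the three functionals named here have standard definitions; a plausible rendering for the velocity process v(t) of a dry-friction Langevin equation \dot{v} = -\mu\,\mathrm{sgn}(v) + \xi(t) (the paper's exact conventions may differ) is:

      T_{\mathrm{loc}}(t) = \int_0^t \delta\!\left(v(t')\right)\, dt' ,\qquad
      T_{\mathrm{occ}}(t) = \int_0^t \Theta\!\left(v(t')\right)\, dt' ,\qquad
      X(t) = \int_0^t v(t')\, dt' ,

    i.e., the time spent at the sticking point v = 0, the time spent on the positive half-line, and the displacement.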
  281. Mapping Base Modifications in DNA by Transverse-Current Sequencing

    NASA Astrophysics Data System (ADS)

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2018-02-01

    Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.

  282. Red-shouldered hawk occupancy surveys in central Minnesota, USA

    USGS Publications Warehouse

    Henneman, C.; McLeod, M.A.; Andersen, D.E.

    2007-01-01

    Forest-dwelling raptors are often difficult to detect because many species occur at low density or are secretive. Broadcasting conspecific vocalizations can increase the probability of detecting forest-dwelling raptors and has been shown to be an effective method for locating raptors and assessing their relative abundance. Recent advances in statistical techniques based on presence-absence data use probabilistic arguments to derive probability of detection when it is <1 and to provide a model and likelihood-based method for estimating proportion of sites occupied. We used these maximum-likelihood models with data from red-shouldered hawk (Buteo lineatus) call-broadcast surveys conducted in central Minnesota, USA, in 1994-1995 and 2004-2005. Our objectives were to obtain estimates of occupancy and detection probability 1) over multiple sampling seasons (yr), 2) incorporating within-season time-specific detection probabilities, 3) with call type and breeding stage included as covariates in models of probability of detection, and 4) with different sampling strategies. We visited individual survey locations 2-9 times per year, and estimates of both probability of detection (range = 0.28-0.54) and site occupancy (range = 0.81-0.97) varied among years. Detection probability was affected by inclusion of a within-season time-specific covariate, call type, and breeding stage. In 2004 and 2005 we used survey results to assess the effect that number of sample locations, double sampling, and discontinued sampling had on parameter estimates. We found that estimates of probability of detection and proportion of sites occupied were similar across different sampling strategies, and we suggest ways to reduce sampling effort in a monitoring program.
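    The likelihood machinery behind such presence-absence estimates is compact enough to sketch. Below is a minimal single-season occupancy model with constant occupancy psi and detection probability p, fit by maximum likelihood to simulated detection histories; the simulated numbers are illustrative, not survey data.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_lik(theta, detections, K):
          """detections: visits with a detection at each site, out of K visits."""
          psi, p = 1 / (1 + np.exp(-np.asarray(theta)))  # logit -> probability
          ll = 0.0
          for d in detections:
              if d > 0:
                  ll += np.log(psi) + d * np.log(p) + (K - d) * np.log(1 - p)
              else:
                  # never detected: occupied-but-missed, or truly unoccupied
                  ll += np.log(psi * (1 - p) ** K + (1 - psi))
          return -ll

      rng = np.random.default_rng(1)
      K, n_sites, psi_true, p_true = 4, 60, 0.85, 0.4
      occupied = rng.random(n_sites) < psi_true
      dets = np.where(occupied, rng.binomial(K, p_true, n_sites), 0)

      fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(dets, K))
      psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
      print(f"psi ~ {psi_hat:.2f}, p ~ {p_hat:.2f}")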
  283. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming and, therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  284. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming and, therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
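    A sketch of the idea behind maximum projection designs: among candidate designs, keep the one minimizing one common form of the MaxPro criterion, a sum over point pairs of 1/prod_k (x_ik - x_jk)^2, so points stay well separated in every projection. Random search over Latin hypercubes below stands in for the optimization machinery of the paper.

      import numpy as np

      def maxpro_criterion(X):
          """Smaller is better: penalizes pairs that coincide in any projection."""
          n = len(X)
          total = 0.0
          for i in range(n):
              for j in range(i + 1, n):
                  total += 1.0 / np.prod((X[i] - X[j]) ** 2)
          return total

      def random_lhs(n, k, rng):
          """A random Latin hypercube: one point per row/column stratum."""
          return (np.argsort(rng.random((k, n)), axis=1).T + rng.random((n, k))) / n

      rng = np.random.default_rng(0)
      best = min((random_lhs(20, 3, rng) for _ in range(500)), key=maxpro_criterion)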
  285. On the control of spin-boson systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boscain, Ugo; Mason, Paolo; Panati, Gianluca

    2015-09-15

    In this paper, we study the so-called spin-boson system, namely, a two-level system in interaction with a distinguished mode of a quantized bosonic field. We give a brief description of the controlled Rabi and Jaynes-Cummings models and we discuss their appearance in the mathematics and physics literature. We then study the controllability of the Rabi model when the control is an external field acting on the bosonic part. Applying geometric control techniques to the Galerkin approximation and using perturbation theory to guarantee non-resonance of the spectrum of the drift operator, we prove approximate controllability of the system, for almost every value of the interaction parameter.

  286. A discrete trinomial model for the birth and death of stock financial bubbles

    NASA Astrophysics Data System (ADS)

    Di Persio, Luca; Guida, Francesco

    2017-11-01

    The present work proposes a novel way to model the dynamic of financial bubbles. In particular, we exploit the so-called trinomial tree technique, which is mainly inspired by the typical market order book (MOB) structure. According to the typical MOB rules, we exploit a bottom-up approach to derive the relevant generator process for the financial quantities characterizing the market we are considering. Our proposal pays attention to real-world changes in the probability levels characterizing bid-ask preferences, focusing on market movements. In particular, we show that financial bubbles originate from these movements, which also act to amplify their growth.
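    For readers unfamiliar with the underlying lattice, a generic recombining trinomial tree (up/flat/down branches) is sketched below for a European call under Black-Scholes dynamics; the paper's MOB-derived, state-dependent probabilities are a refinement not reproduced here.

      import math

      def trinomial_call(S0, K, r, sigma, T, steps):
          dt = T / steps
          u = math.exp(sigma * math.sqrt(2 * dt))
          # standard Boyle-style branch probabilities
          a, b = math.exp(r * dt / 2), math.exp(sigma * math.sqrt(dt / 2))
          pu = ((a - 1 / b) / (b - 1 / b)) ** 2
          pd = ((b - a) / (b - 1 / b)) ** 2
          pm = 1 - pu - pd
          disc = math.exp(-r * dt)
          # terminal payoffs on the 2*steps + 1 recombining nodes
          values = [max(S0 * u ** j - K, 0.0) for j in range(steps, -steps - 1, -1)]
          for _ in range(steps):
              values = [disc * (pu * values[i] + pm * values[i + 1] + pd * values[i + 2])
                        for i in range(len(values) - 2)]
          return values[0]

      print(trinomial_call(100, 100, 0.03, 0.2, 1.0, 200))  # near the Black-Scholes value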
  287. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation, and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  288. Blind source computer device identification from recorded VoIP calls for forensic investigation.

    PubMed

    Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul

    2017-03-01

    The VoIP services provide fertile ground for criminal activity, thus identifying the transmitting computer devices from a recorded VoIP call may help the forensic investigator to reveal useful information. It also proves the authenticity of the call recording submitted to the court as evidence. This paper extended the previous study on the use of recorded VoIP calls for blind source computer device identification. Although initial results were promising, theoretical reasoning for them is yet to be found. The study suggested computing the entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN), provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters.
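    The entropy-MFCC feature set lends itself to a short sketch: compute MFCCs, keep the near-silent frames (low RMS energy), and take the Shannon entropy of each coefficient's distribution. The librosa-based code below is a rough rendering under stated assumptions; the quantile threshold, bin count, and segmentation are illustrative, not the paper's settings.

      import numpy as np
      import librosa

      def entropy_mfcc(path, n_mfcc=13, n_bins=30, quiet_quantile=0.1):
          y, sr = librosa.load(path, sr=None)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
          rms = librosa.feature.rms(y=y)[0]                        # per-frame energy
          n = min(mfcc.shape[1], rms.shape[0])
          quiet = rms[:n] <= np.quantile(rms[:n], quiet_quantile)  # near-silent frames
          feats = []
          for coeff in mfcc[:, :n][:, quiet]:
              counts, _ = np.histogram(coeff, bins=n_bins)
              p = counts[counts > 0] / counts.sum()
              feats.append(-(p * np.log2(p)).sum())                # Shannon entropy
          return np.array(feats)  # one entropy value per MFCC coefficient

    The resulting fixed-length vector is what would be fed to the classifiers named in the abstract.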
  289. Overview of the relevant CFD work at Thiokol Corporation

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Loh, Hai-Tien

    1992-01-01

    An in-house developed proprietary advanced computational fluid dynamics code called SHARP (Trademark) is a primary tool for many flow simulations and design analyses. The SHARP code is a time dependent, two dimensional (2-D) axisymmetric numerical solution technique for the compressible Navier-Stokes equations. The solution technique in SHARP uses a vectorizable implicit, second order accurate in time and space, finite volume scheme based on an upwind flux-difference splitting of a Roe-type approximated Riemann solver, Van Leer's flux vector splitting, and a fourth order artificial dissipation scheme with preconditioning to accelerate the flow solution. Turbulence is simulated by an algebraic model, and ultimately the kappa-epsilon model. Some other capabilities of the code are 2-D two-phase Lagrangian particle tracking and cell blockages. Extensive development and testing has been conducted on the 3-D version of the code with flow, combustion, and turbulence interactions. The emphasis here is on the specific applications of SHARP in Solid Rocket Motor design. Information is given in viewgraph form.

  290. The formation of intestinal organoids in a hanging drop culture.

    PubMed

    Panek, Malgorzata; Grabacka, Maja; Pierzchalska, Malgorzata

    2018-01-25

    Recently, organoids have become widely used in vitro models of many tissues and organs. These types of structures, derived from embryonic or adult mammalian intestines, are called "mini-guts". They organize spontaneously when intestinal crypts or stem cells are embedded in an extracellular matrix protein scaffold (Matrigel). This approach has some disadvantages, as Matrigel is undefined (the concentrations of growth factors and other biologically active components in it may vary from batch to batch), difficult to handle, and expensive. Here we show that organoids derived from chicken embryo intestine are formed in a hanging drop without embedding, providing an attractive alternative to currently used protocols. Using this technique we obtained compact structures composed of contiguous organoids, which were generally similar to chicken organoids cultured in Matrigel in terms of morphology and expression of intestinal epithelial markers. Due to the simplicity, high reproducibility and throughput capacity of the hanging drop technique, our model may be applied in various studies concerning gut biology.

  291. Semiconductor photoelectrochemistry

    NASA Technical Reports Server (NTRS)

    Buoncristiani, A. M.; Byvik, C. E.

    1983-01-01

    Semiconductor photoelectrochemical reactions are investigated. A model of the charge transport processes in the semiconductor, based on semiconductor device theory, is presented. It incorporates the nonlinear processes characterizing the diffusion and reaction of charge carriers in the semiconductor. The model is used to study conditions limiting useful energy conversion, specifically the saturation of current flow due to high light intensity. Numerical results describing charge distributions in the semiconductor and its effects on the electrolyte are obtained. Experimental results include: an estimate of the rate at which a semiconductor photoelectrode is capable of converting electromagnetic energy into chemical energy; the effect of cell temperature on the efficiency; a method for determining the point of zero zeta potential for macroscopic semiconductor samples; a technique using platinized titanium dioxide powders and ultraviolet radiation to produce chlorine, bromine, and iodine from solutions containing their respective ions; the photoelectrochemical properties of a class of layered compounds called transition metal thiophosphates; and a technique used to produce high conversion efficiency from laser radiation to chemical energy.

  292. QoS measurement of workflow-based web service compositions using Colored Petri net.

    PubMed

    Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra

    2014-01-01

    Workflow-based web service compositions (WB-WSCs) are one of the main composition categories in service oriented architecture (SOA).
    Eflow, the polymorphic process model (PPM), and the business process execution language (BPEL) are the main techniques in the category of WB-WSCs. Due to the maturity of web services, measuring the quality of composite web services developed by different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide good quality, with regard to the customers' requirements, in a composed web service. Thus, quality of service (QoS), which refers to nonfunctional parameters, is important to measure, since from it the quality degree of a certain web service composition can be established. This paper tries to find a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPN) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.
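    The Petri net mechanics underneath such models reduce to markings, enabling, and firing. A toy place/transition simulator follows; Colored Petri nets of the kind used here add typed tokens and routing expressions on top of this basic mechanism, and the two-transition service net below is invented purely for illustration.

      def enabled(marking, pre):
          return all(marking[p] >= n for p, n in pre.items())

      def fire(marking, pre, post):
          m = dict(marking)
          for p, n in pre.items():
              m[p] -= n
          for p, n in post.items():
              m[p] = m.get(p, 0) + n
          return m

      # request -> invoke -> reply, modeled as two transitions (pre, post)
      transitions = {
          "invoke": ({"request": 1, "service_free": 1}, {"in_progress": 1}),
          "reply":  ({"in_progress": 1}, {"done": 1, "service_free": 1}),
      }
      marking = {"request": 2, "service_free": 1, "in_progress": 0, "done": 0}
      fired = True
      while fired:
          fired = False
          for name, (pre, post) in transitions.items():
              if enabled(marking, pre):
                  marking = fire(marking, pre, post)
                  fired = True
      print(marking)  # all requests end up in 'done', the service is free again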
  293. Terrestrial Radiodetermination Potential Users and Their Requirements

    DOT National Transportation Integrated Search

    1976-07-01

    The report summarizes information gathered during a preliminary study of the application of electronic techniques to geographical position determination on land and on inland waterways. Systems incorporating such techniques have been called terrestri...

  294. Distributed Contour Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Weber, Gunther H.

    2014-03-31

    Topological techniques provide robust tools for data analysis. They are used, for example, for feature extraction, for data de-noising, and for comparison of data sets. This chapter concerns contour trees, a topological descriptor that records the connectivity of the isosurfaces of scalar functions. These trees are fundamental to analysis and visualization of physical phenomena modeled by real-valued measurements. We study the parallel analysis of contour trees. After describing a particular representation of a contour tree, called the local-global representation, we illustrate how different problems that rely on contour trees can be solved in parallel with minimal communication.

  295. Fabrication and Evaluation of Microfluidic Immunoassay Devices with Antibody-Immobilized Microbeads Retained in Porous Hydrogel Micropillars.

    PubMed

    Kasama, Toshihiro; Kaji, Noritada; Tokeshi, Manabu; Baba, Yoshinobu

    2017-01-01

    Due to inherent characteristics including confinement of molecular diffusion and a high surface-to-volume ratio, microfluidic device-based immunoassay has great advantages in cost, speed, sensitivity, and so on, compared with conventional techniques such as microtiter plate-based ELISA, the latex agglutination method, and lateral flow immunochromatography. In this paper, we explain the detection of C-reactive protein as a model antigen by using our microfluidic immunoassay device, the so-called immuno-pillar device. We describe in detail how we fabricated and used the immuno-pillar devices.

  296. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
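    The baseline that such methods refine is plain random-projection hashing: project the feature vector onto random directions and keep the sign bits. A minimal sketch, in which the face-feature vector and the seed-as-key handling are illustrative stand-ins:

      import numpy as np

      def projection_hash(x, n_bits=64, seed=42):
          rng = np.random.default_rng(seed)           # seed plays the role of a user key
          P = rng.standard_normal((n_bits, x.size))   # random projection matrix
          return (P @ x >= 0).astype(np.uint8)        # one sign bit per projection

      face_vec = np.random.default_rng(0).standard_normal(256)  # stand-in face features
      print(projection_hash(face_vec)[:16])

    The paper's contribution, selecting projection rows by a per-user Fisher criterion and quantizing with a bimodal Gaussian mixture model, replaces the fixed matrix and sign quantizer shown here.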
  297. Near-Field Infrared Pump-Probe Imaging of Surface Phonon Coupling in Boron Nitride Nanotubes.

    PubMed

    Gilburd, Leonid; Xu, Xiaoji G; Bando, Yoshio; Golberg, Dmitri; Walker, Gilbert C

    2016-01-21

    Surface phonon modes are lattice vibrational modes of a solid surface. Two common surface modes, called longitudinal and transverse optical modes, exhibit lattice vibration along or perpendicular to the direction of the wave. We report a two-color, infrared pump-infrared probe technique based on scattering-type near-field optical microscopy (s-SNOM) to spatially resolve coupling between surface phonon modes. Spatially varying couplings between the longitudinal optical and surface phonon polariton modes of boron nitride nanotubes are observed, and a simple model is proposed.

  298. On the deduction of chemical reaction pathways from measurements of time series of concentrations.

    PubMed

    Samoilov, Michael; Arkin, Adam; Ross, John

    2001-03-01

    We discuss the deduction of reaction pathways in complex chemical systems from measurements of time series of chemical concentrations of reacting species. First we review a technique called correlation metric construction (CMC) and show the construction of a reaction pathway from measurements on a part of glycolysis. Then we present two new, improved methods for the analysis of time series of concentrations, entropy metric construction (EMC) and the entropy reduction method (ERM), and illustrate EMC with calculations on a model reaction system.

  299. Path integral pricing of Wasabi option in the Black-Scholes model

    NASA Astrophysics Data System (ADS)

    Cassagnes, Aurelien; Chen, Yu; Ohashi, Hirotada

    2014-11-01

    In this paper, using path integral techniques, we derive a formula for a propagator arising in the study of occupation time derivatives. Using this result we derive a fair price for the case of the cumulative Parisian option. After confirming the validity of the derived result using Monte Carlo simulation, a new type of heavily path-dependent derivative product is investigated. We derive an approximation for our so-called Wasabi option fair price and check the accuracy of our result with a Monte Carlo simulation.
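    A cumulative Parisian payoff depends on the total time the underlying spends beyond a barrier, which makes a Monte Carlo check straightforward to sketch; the parameters below are illustrative, and the estimator is the generic one, not the paper's propagator-based formula.

      import numpy as np

      def cumulative_parisian_call(S0, K, B, tau, r, sigma, T,
                                   n_paths=10_000, n_steps=250):
          rng = np.random.default_rng(7)
          dt = T / n_steps
          z = rng.standard_normal((n_paths, n_steps))
          # log-price paths under Black-Scholes dynamics
          logS = np.log(S0) + np.cumsum(
              (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
          S = np.exp(logS)
          occupation = (S > B).sum(axis=1) * dt   # time spent above the barrier
          payoff = np.where(occupation > tau, np.maximum(S[:, -1] - K, 0.0), 0.0)
          return np.exp(-r * T) * payoff.mean()

      print(cumulative_parisian_call(100, 100, 110, 0.25, 0.03, 0.2, 1.0))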
  300. High-energy physics software parallelization using database techniques

    NASA Astrophysics Data System (ADS)

    Argante, E.; van der Stok, P. D. V.; Willers, I.

    1997-02-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is for a large part transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI.

  301. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries, which intersect the bottom and top boundaries, may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.
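    The Hermite-cubic blending at the heart of the two boundary technique is easy to sketch: each interior grid line is a cubic in the transverse coordinate v, matching the bottom and top boundary points and prescribed transverse derivatives. In the code below, the derivative scalings t_b and t_t play the role of the control functions and are illustrative choices, not the program's.

      import numpy as np

      def two_boundary_grid(bottom, top, nv, t_b=1.0, t_t=1.0):
          """bottom, top: (nu, 2) arrays of boundary points; returns (nv, nu, 2)."""
          v = np.linspace(0.0, 1.0, nv)[:, None, None]
          h00 = 2 * v**3 - 3 * v**2 + 1      # Hermite cubic basis functions
          h10 = v**3 - 2 * v**2 + v
          h01 = -2 * v**3 + 3 * v**2
          h11 = v**3 - v**2
          d = top - bottom                   # transverse direction at each u
          return h00 * bottom + h01 * top + h10 * (t_b * d) + h11 * (t_t * d)

      u = np.linspace(0.0, 1.0, 21)
      bottom = np.stack([u, 0.1 * np.sin(np.pi * u)], axis=1)  # curved lower wall
      top = np.stack([u, np.ones_like(u)], axis=1)             # flat upper wall
      grid = two_boundary_grid(bottom, top, nv=11)

    Shrinking t_b or t_t pulls grid lines toward the corresponding boundary, which is how near-wall clustering is controlled.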
  302. Maize embryogenesis.

    PubMed

    Fontanet, Pilar; Vicient, Carlos M

    2008-01-01

    Plant embryo development is a complex process that includes several coordinated events. Mature maize embryos consist of a well-differentiated embryonic axis surrounded by a single massive cotyledon called the scutellum. The mature embryo axis also includes lateral roots and several developed leaves. In contrast to Arabidopsis, in which the orientations of cell divisions are perfectly established, only the first planes of cell division are predictable in maize embryos. These distinctive characteristics, together with the availability of a large collection of embryo mutants, well-developed molecular biology and tissue culture tools, an established genetics, and its economic importance, make maize a good model plant for grass embryogenesis. Here, we describe basic concepts and techniques necessary for studying maize embryo development: how to grow maize in greenhouses and basic techniques for in vitro embryo culture, somatic embryogenesis and in situ hybridization.

  303. Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
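    A minimal Hogwild!-style run can be sketched with threads applying SGD updates to a shared weight vector without locks, accepting benign races; plain Python threads only illustrate the access pattern (real speedups need the compiled, multicore setting such papers assume), and the regression problem is invented for illustration.

      import threading
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((4000, 20))
      w_true = rng.standard_normal(20)
      y = X @ w_true + 0.01 * rng.standard_normal(4000)

      w = np.zeros(20)                          # shared, updated without locking

      def worker(rows, lr=0.01, epochs=5):
          for _ in range(epochs):
              for i in rows:
                  grad = (X[i] @ w - y[i]) * X[i]   # squared-error gradient
                  w[:] -= lr * grad                 # racy in-place update, no lock

      parts = np.array_split(rng.permutation(len(X)), 4)
      threads = [threading.Thread(target=worker, args=(p,)) for p in parts]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(np.linalg.norm(w - w_true))         # small: the races rarely hurt convergence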
  304. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitates the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANN) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure that algorithm's ability by applying it to text classification. The classification task herein is done by considering the sentiment content of a text, which is also called sentiment analysis. By using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms naïve Bayes and SVM and offers a better F-1 score, while the feature extraction technique that most improves the modelling results is the bigram.
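    The two baseline classifiers, with bag-of-bigram features, take only a few lines with scikit-learn; the toy English corpus and labels below are an invented stand-in for the Indonesian dataset, which is not reproduced here.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      texts = ["great service", "really bad product", "love it", "terrible support",
               "excellent quality", "awful experience", "very happy", "worst ever"]
      labels = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = positive sentiment, 0 = negative

      for clf in (MultinomialNB(), LinearSVC()):
          model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)  # unigrams + bigrams
          f1 = cross_val_score(model, texts, labels, cv=2, scoring="f1").mean()
          print(type(clf).__name__, round(f1, 3))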
  305. X-ray Scatter Imaging of Hepatocellular Carcinoma in a Mouse Model Using Nanoparticle Contrast Agents

    NASA Astrophysics Data System (ADS)

    Rand, Danielle; Derdak, Zoltan; Carlson, Rolf; Wands, Jack R.; Rose-Petruck, Christoph

    2015-10-01

    Hepatocellular carcinoma (HCC) is one of the most common malignant tumors worldwide and is almost uniformly fatal. Current methods of detection include ultrasound examination and imaging by CT scan or MRI; however, these techniques are problematic in terms of sensitivity and specificity, and the detection of early tumors (<1 cm diameter) has proven elusive. Better, more specific, and more sensitive detection methods are therefore urgently needed. Here we discuss the application of a newly developed x-ray imaging technique called Spatial Frequency Heterodyne Imaging (SFHI) for the early detection of HCC. SFHI uses x-rays scattered by an object to form an image and is more sensitive than conventional absorption-based x-radiography. We show that tissues labeled in vivo with gold nanoparticle contrast agents can be detected using SFHI. We also demonstrate that directed targeting and SFHI of HCC tumors in a mouse model is possible through the use of HCC-specific antibodies. The enhanced sensitivity of SFHI relative to currently available techniques enables the x-ray imaging of tumors that are just a few millimeters in diameter and substantially reduces the amount of nanoparticle contrast agent required for intravenous injection relative to absorption-based x-ray imaging.

  306. Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator

    NASA Astrophysics Data System (ADS)

    Al-Rabadi, Anas N.

    2009-10-01

    This research introduces a new method of intelligent control for the control of the Buck converter using a newly developed small signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then a numerical algorithm used in robust control, called the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B̃], [C̃], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.

  307. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solutions of the nonlinear singular Lane-Emden type differential equation arising in astrophysics models, by exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, a sub-part of the large field called soft computing, is exploited for modelling the equation in an unsupervised manner. The proposed approximate solutions of the higher order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the design schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.
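    For reference, the equation such solvers target is the singular Lane-Emden problem for polytropic index n >= 0:

      \frac{1}{\xi^{2}} \frac{d}{d\xi}\!\left( \xi^{2} \frac{d\theta}{d\xi} \right) + \theta^{n}(\xi) = 0,
      \qquad \theta(0) = 1, \quad \theta'(0) = 0,

    where the singularity at \xi = 0 is what makes standard initial-value integration, and the training of network-based trial solutions, delicate near the origin.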
  308. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  309. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure.
    Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  310. DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.

    PubMed

    Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng

    2017-12-19

    Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at the probabilistic network level, the agent-based modeling technique explains domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents, whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases.

  311. Intimate Debate Technique: Medicinal Use of Marijuana

    ERIC Educational Resources Information Center

    Herreid, Clyde Freeman; DeRei, Kristie

    2007-01-01

    Classroom debates used to be familiar exercises to students schooled in past generations. In this article, the authors describe the technique called "intimate debate". To cooperative learning specialists, the technique is known as "structured debate" or "constructive debate". It is a powerful method for dealing with case topics that involve…

  312. A Generalized Measurement Model to Quantify Health: The Multi-Attribute Preference Response Model

    PubMed Central

    Krabbe, Paul F. M.

    2013-01-01

    After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge.
    Here, two of the best measurement tools, the discrete choice and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed, followed by a reflection on the recent revival of interest in patients' experience with regard to their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model) and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR) model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques.
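    The two building blocks being combined are compact enough to state directly; the sketch below gives the dichotomous Rasch response probability and the logit random-utility choice probability, with illustrative numbers.

      import math

      def rasch(theta, b):
          """P(person with ability theta endorses an item of difficulty b)."""
          return 1.0 / (1.0 + math.exp(-(theta - b)))

      def choice(v_a, v_b):
          """P(state A preferred to state B) under a logit random-utility model."""
          return math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))

      print(rasch(0.5, -0.2), choice(1.0, 0.3))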
315. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

316. [Key informers. When and How?]

    PubMed

    Martín González, R.

    2009-03-01

    When information obtained through duly designed and developed studies is not available, certain problems that affect the population, or certain open questions, may be approached using the information and experience provided by the so-called key informer. The key informer is defined as a person who is in contact with the community or with the problem to be studied, who is considered to have good knowledge of the situation, and who is therefore considered an expert. The search for consensus is the basis for obtaining information through key informers. The techniques used have different characteristics based on whether the experts chosen meet together or not, whether they are guided or not, and whether they interact with each other or not. These techniques include the survey, the Delphi technique, the nominal group technique, brainwriting, brainstorming, the Phillips 66 technique, the 6-3-5 technique, the community forum, and the community impressions technique. Information provided by key informers through the search for consensus is relevant when such information is not available or cannot be obtained by other methods. It has permitted the analysis of the existing neurological care model, the elaboration of recommendations on visit times for out-patient neurological care, and the elaboration of guidelines and recommendations for the management of prevalent neurological problems.
317. Digital computer technique for setup and checkout of an analog computer

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1968-01-01

    A computer program technique, called Analog Computer Check-Out Routine Digitally (ACCORD), generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.

318. Habitat of calling blue and fin whales in the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Sirovic, A.; Chou, E.; Roch, M. A.

    2016-02-01

    Northeast Pacific blue whale B calls and fin whale 20 Hz calls were detected from passive acoustic data collected over seven years at 16 sites in the Southern California Bight (SCB). Calling blue whales were most common in the coastal areas during the summer and fall months. Fin whales began calling in fall and continued through winter in the southcentral SCB. These data were used to develop habitat models of calling blue and fin whales in areas of high and low abundance in the SCB, using remotely sensed variables such as sea surface temperature, sea surface height, chlorophyll a, and primary productivity as model covariates. A random forest framework was used for variable selection, and generalized additive models were developed to explain functional relationships, evaluate the relative contribution of each significant variable, and investigate the predictive abilities of models of calling whales. A seasonal component was an important feature of all models. Additionally, areas of high calling blue and fin whale abundance both had a positive relationship with sea surface temperature. In areas of lower abundance, chlorophyll a concentration and primary productivity were important variables for blue whale models, while sea surface height and primary productivity were significant covariates in fin whale models. Predictive models were generally better at capturing general trends than absolute values, and there was a large degree of variation in year-to-year predictability across sites.
319. Essentials of Suggestopedia: A Primer for Practitioners

    ERIC Educational Resources Information Center

    Caskey, Owen L.; Flake, Muriel H.

    Suggestology is the scientific study of the psychology of suggestion, and Suggestopedia is the application of relaxation and suggestion techniques to learning. The approach applied to learning processes (called Suggestopedic) developed by Dr. Georgi Lozanov (called the Lozanov Method) utilizes mental and physical relaxation, deep breathing,…

320. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective of helping sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS "pathways," or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
321. Structural Modeling Using "Scanning and Mapping" Technique

    NASA Technical Reports Server (NTRS)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages are selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it acts as an interface to generate structural models of particular engine parts or assemblies, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques.
    We are working on a relatively new technique called "scanning and mapping". The basic idea is to produce a full and accurate 3D structural model by tracing multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances, or axes, and photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of complex structural components will first be produced by PhotoModeler, based on existing similar components, then passed to AutoCAD for modification and correction of any discrepancies seen in the PhotoModeler version of the 3D model. These three software packages are fully compatible; the DXF file format can be used to transfer drawings among them. To begin this entire process, we are using a small replica of an actual engine blade as a test object. This paper introduces the accomplishments of our recent work.

322. Alternate methodologies to experimentally investigate shock initiation properties of explosives

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger

    2017-01-01

    Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized by carefully controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods of probing the shock-initiation behavior of new explosives using widely available pentolite gap-test donors and simple time-of-arrival diagnostics. These experiments can be performed at low cost at most explosives testing facilities, which allows the experimental data needed to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models and reduce their general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas-gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.
323. Calibration of Semi-Analytic Models of Galaxy Formation Using Particle Swarm Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables by applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool for testing the consequences of including other astrophysical processes in SAMs.
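For readers unfamiliar with PSO, the bare-bones sketch below shows the velocity and position updates the abstract relies on, minimizing a stand-in quadratic objective. In the paper the objective would be the misfit between SAG outputs and the observed constraints; the hyperparameters here are generic textbook values, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                     # stand-in for a model-vs-data misfit
    return np.sum((x - 1.5) ** 2, axis=-1)

def pso(n_particles=30, dim=4, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_f = x.copy(), objective(x)          # personal bests
    g = pbest[np.argmin(pbest_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = objective(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

best, fbest = pso()
print(best, fbest)   # converges near [1.5, 1.5, 1.5, 1.5]
```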
324. Assimilation of GRACE Terrestrial Water Storage Data into a Land Surface Model: Results for the Mississippi River Basin

    NASA Technical Reports Server (NTRS)

    Zaitchik, Benjamin F.; Rodell, Matthew; Reichle, Rolf H.

    2007-01-01

    NASA's GRACE mission has the potential to be extremely valuable for water resources applications and global water cycle research. What makes GRACE unique among Earth Science satellite systems is that it is able to monitor variations in water stored in all forms, from snow and surface water to soil moisture to groundwater in the deepest aquifers. However, the space and time resolutions of GRACE observations are coarse. GRACE typically resolves water storage changes over regions the size of Nebraska on a monthly basis, while city-scale, daily observations would be more useful for water management, agriculture, and weather prediction. High-resolution numerical hydrology models have been developed which predict the fates of water and energy after they strike the land surface as precipitation and sunlight; these are similar to weather and climate forecast models, which simulate atmospheric processes. We integrated the GRACE observations into a hydrology model using an advanced technique called data assimilation. The results were new estimates of groundwater, soil moisture, and snow variations, which combined the veracity of GRACE with the high resolution of the model. We tested the technique over the Mississippi River basin, but it will be even more valuable in parts of the world which lack reliable data on water availability.
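The essence of data assimilation is a gain-weighted blend of a model forecast and an observation. Below is a toy ensemble Kalman-style update in which a GRACE-like basin-average storage observation corrects a fine-resolution model ensemble; all numbers are illustrative, and the operational scheme in the paper is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble Kalman update: a coarse observation of *basin-average*
# water storage corrects a fine-resolution model state ensemble.
n_cells, n_ens = 50, 32
ensemble = rng.normal(100.0, 10.0, (n_ens, n_cells))   # modeled storage, mm
H = np.full(n_cells, 1.0 / n_cells)                    # obs operator: average
obs, obs_var = 112.0, 4.0                              # observation and its error

hx = ensemble @ H                                      # predicted observation
x_mean, hx_mean = ensemble.mean(0), hx.mean()
P_xh = ((ensemble - x_mean).T @ (hx - hx_mean)) / (n_ens - 1)
P_hh = np.var(hx, ddof=1)
K = P_xh / (P_hh + obs_var)                            # Kalman gain, per cell

perturbed = obs + rng.normal(0, np.sqrt(obs_var), n_ens)
analysis = ensemble + np.outer(perturbed - hx, K)      # updated ensemble
print(ensemble.mean(), "->", analysis.mean())          # pulled toward 112
```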
325. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. The models have a well-defined goal: select the fewest mitigations that retire the most risks, which, in turn, increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude: prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate, while KEYS runs much faster than that; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.

326. Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data

    NASA Astrophysics Data System (ADS)

    Pathak, Jaideep; Lu, Zhixin; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2017-12-01

    We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents of the original input-generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high-dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
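A minimal sketch of the procedure under assumed, generic echo-state-network hyperparameters (the paper's reservoirs and benchmark systems are far larger): train output weights by ridge regression on a chaotic signal, close the loop, and average the log growth of a tangent vector under the Jacobian of the autonomous map. The logistic map stands in for the input process because its largest exponent is known (ln 2); how closely the estimate lands depends on the reservoir tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Drive signal: logistic map x -> 4x(1-x); largest Lyapunov exponent = ln 2.
T, N = 5000, 300
u = np.empty(T); u[0] = 0.41
for t in range(T - 1):
    u[t + 1] = 4 * u[t] * (1 - u[t])
s = u - 0.5                                   # center the signal for tanh

A = rng.normal(0, 1, (N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, N)

R = np.zeros((T, N)); r = np.zeros(N)
for t in range(T - 1):
    r = np.tanh(A @ r + w_in * s[t])          # reservoir response to the data
    R[t + 1] = r

# Ridge-regress output weights so that w_out . r[t] ~ s[t] (discard washout)
X, y = R[200:], s[200:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

# Autonomous reservoir: r -> tanh(M r) with M = A + w_in w_out^T;
# evolve a tangent vector through the Jacobian diag(1 - r^2) M.
M = A + np.outer(w_in, w_out)
v = rng.normal(size=N); lyap = 0.0; steps = 2000
for _ in range(steps):
    r = np.tanh(M @ r)
    v = (1 - r ** 2) * (M @ v)                # Jacobian-vector product
    norm = np.linalg.norm(v)
    lyap += np.log(norm); v /= norm
print("largest Lyapunov exponent ~", lyap / steps, "(logistic map: ln 2 = 0.693)")
```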
327. Interference tables: a useful model for interference analysis in asynchronous multicarrier transmission

    NASA Astrophysics Data System (ADS)

    Medjahdi, Yahia; Terré, Michel; Ruyet, Didier Le; Roviras, Daniel

    2014-12-01

    In this paper, we investigate the impact of timing asynchronism on the performance of multicarrier techniques in a spectrum coexistence context. Two multicarrier schemes are considered: cyclic prefix-based orthogonal frequency division multiplexing (CP-OFDM) with a rectangular pulse shape, and filter bank-based multicarrier (FBMC) with PHYDYAS (physical layer for dynamic spectrum access and cognitive radio) and isotropic orthogonal transform algorithm (IOTA) waveforms. First, we present the general concept of the so-called power spectral density (PSD)-based interference tables, which are commonly used for multicarrier interference characterization in a spectrum sharing context. After highlighting the limits of this approach, we propose a new family of interference tables called "instantaneous interference tables". The proposed tables give the interference power caused by a given interfering subcarrier on a victim subcarrier, not only as a function of the spectral distance separating the two subcarriers but also with respect to the timing misalignment between the subcarrier holders. In contrast to the PSD-based interference tables, the accuracy of the proposed tables has been validated through different simulation results. Furthermore, due to the better frequency localization of both the PHYDYAS and IOTA waveforms, the FBMC technique is demonstrated to be more robust to timing asynchronism than OFDM. Such a result makes FBMC a potential candidate for the physical layer of future cognitive radio systems.

328. Efficient Ada multitasking on a RISC register window architecture

    NASA Technical Reports Server (NTRS)

    Kearns, J. P.; Quammen, D.

    1987-01-01

    This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.
329. Computerized planning of prostate cryosurgery using variable cryoprobe insertion depth

    PubMed

    Rossi, Michael R.; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2010-02-01

    The current study presents a computerized planning scheme for prostate cryosurgery using a variable insertion depth strategy, as part of an ongoing effort to develop computerized tools for cryosurgery. Based on typical clinical practices, previous automated planning schemes have required that all cryoprobes be aligned at a single insertion depth. The current study investigates the benefit of removing this constraint, in comparison with results based on uniform insertion depth planning as well as the so-called "pullback procedure". Planning is based on the so-called "bubble-packing method", and its quality is evaluated with bioheat transfer simulations. This study is based on five 3D prostate models, reconstructed from ultrasound imaging, and cryoprobe active lengths in the range of 15-35 mm. The variable insertion depth technique is found to consistently provide superior results when compared to the other placement methods. Furthermore, it is shown that both the optimal active length and the optimal number of cryoprobes vary among prostate models, based on the size and shape of the target region. Due to its low computational cost, the new scheme can be used to determine the optimal cryoprobe layout for a given prostate model in real time.
330. Non-song social call bouts of migrating humpback whales

    PubMed Central

    Rekdahl, Melinda L.; Dunlop, Rebecca A.; Goldizen, Anne W.; Garland, Ellen C.; Biassoni, Nicoletta; Miller, Patrick; Noad, Michael J.

    2015-01-01

    The use of stereotyped calls within structured bouts has been described for a number of species and may increase the information potential of call repertoires. Humpback whales produce a repertoire of social calls, although little is known about the complexity or function of these calls. In this study, digital acoustic tag recordings were used to investigate social call use within bouts, the use of bouts across different social contexts, and whether particular call type combinations were favored. Call order within bouts was investigated using call transition frequencies and information theory techniques. Call bouts were defined through analysis of inter-call intervals, as any calls within 3.9 s of each other. Bouts were produced significantly more when new whales joined a group compared to groups that did not change membership, and in groups containing multiple adults escorting a female and calf compared to adult-only groups. Although social calls tended to be produced in bouts, there were few repeated bout types. However, the order in which most call types were produced within bouts was non-random and dependent on the preceding call type. These bouts appear to be at least partially governed by rules for how individual components are combined.
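The 3.9-s bout criterion and the call-type transition counts translate directly into code. A small sketch with made-up call times and labels:

```python
from collections import Counter

# Group calls into bouts using the 3.9-s inter-call interval criterion,
# then tabulate first-order call-type transitions within each bout.
calls = [(0.0, "wop"), (1.2, "grumble"), (2.0, "wop"),      # bout 1
         (30.5, "thwop"), (33.1, "grumble"),                # bout 2
         (60.0, "wop"), (61.0, "grumble"), (62.5, "wop")]   # bout 3

def bouts(calls, max_gap=3.9):
    out, cur = [], [calls[0]]
    for prev, nxt in zip(calls, calls[1:]):
        if nxt[0] - prev[0] <= max_gap:
            cur.append(nxt)
        else:
            out.append(cur); cur = [nxt]
    out.append(cur)
    return out

transitions = Counter()
for bout in bouts(calls):
    types = [c[1] for c in bout]
    transitions.update(zip(types, types[1:]))   # within-bout transitions only
print(transitions)   # e.g. ('wop', 'grumble') -> 2
```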
332. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in the irrigation data and different calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
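The weighting idea can be illustrated on a linear toy problem: observations driven by the most uncertain pumping are down-weighted, and the weights are recomputed as the parameter estimate (and hence the sensitivity to pumping) updates. This is only a sketch of the weighting scheme with invented numbers, not the full IUWLS estimator from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: heads h depend linearly on pumping q, but q is reported
# with error. Residuals are weighted by sigma_obs^2 + (dh/dq)^2 sigma_q^2,
# so observations tied to the most uncertain pumping count less.
n = 200
q_true = rng.uniform(10, 50, n)
sigma_q = 0.15 * q_true                       # assumed 15% pumping uncertainty
q_rep = q_true + rng.normal(0, sigma_q)       # reported (uncertain) pumping
h_obs = 100 - 0.8 * q_true + rng.normal(0, 0.5, n)   # true slope: -0.8

def fit(k0=0.0, iters=10, sigma_obs=0.5):
    k = k0
    for _ in range(iters):
        dh_dq = k                              # sensitivity of h to pumping
        w = 1.0 / (sigma_obs**2 + (dh_dq * sigma_q)**2)
        X = np.column_stack([np.ones(n), q_rep])
        W = np.diag(w)                         # weighted least squares
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ h_obs)
        k = beta[1]                            # update slope, hence weights
    return beta

print(fit())   # [intercept, slope]; plain OLS is pulled toward zero here
```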
333. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale behind their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found that the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in the expert models were clustered and represented by specialized technical terms. Based on these findings, we make suggestions for improving model-based science teaching and learning using Model-It.

334. Inverse Function: Pre-Service Teachers' Techniques and Meanings

    ERIC Educational Resources Information Center

    Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.

    2018-01-01

    Researchers have argued that teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…

335. Perspectives: A Challenging Patriotism

    ERIC Educational Resources Information Center

    Boyte, Harry C.

    2012-01-01

    In a time of alarm about the poisoning of electoral politics, public passions inflamed by sophisticated techniques of mass polarization, and fears that the country is losing control of its collective future, higher education is called upon to take leadership in "reinventing citizenship." It needs to respond to that call on a scale unprecedented in…

336. Teaching Free Expression in Word and Example (Commentary)

    ERIC Educational Resources Information Center

    Merrill, John

    1991-01-01

    Suggests that the teaching of free expression may be the highest calling of a communications or journalism professor. Argues that freedom must be tempered by a sense of ethics. Calls upon teachers to encourage students to analyze the questions surrounding free expression. Describes techniques for scrutinizing journalistic myths.
337. Hands-free human-machine interaction with voice

    NASA Astrophysics Data System (ADS)

    Juang, B. H.

    2004-05-01

    Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation that saves substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, that enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation, or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call "hands-free" human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from the use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s, after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.

338. Infrared Contrast Analysis Technique for Flash Thermography Nondestructive Evaluation

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay

    2014-01-01

    The paper deals with infrared flash thermography inspection for detecting and analyzing delamination-like anomalies in nonmetallic materials. It provides information on an IR Contrast technique that involves extracting normalized contrast-versus-time evolutions from flash thermography infrared video data. The paper provides the analytical model used in the simulation of infrared image contrast. The contrast evolution simulation is achieved through calibration on measured contrast evolutions from many flat-bottom holes in the subject material. The paper also provides formulas to calculate values of thermal measurement features from the measured contrast evolution curve. Many thermal measurement features of the contrast evolution that relate to the anomaly characteristics are calculated. The measurement features and the contrast simulation are used to evaluate flash thermography inspection data in order to characterize delamination-like anomalies. In addition, the contrast evolution prediction is matched to the measured anomaly contrast evolution to provide an assessment of the anomaly depth and width in terms of the depth and diameter of the corresponding equivalent flat-bottom hole (EFBH) or equivalent uniform gap (EUG). The paper describes an anomaly edge detection technique called the half-max technique, which is used to estimate the width of an indication. The EFBH/EUG and half-max width estimations are used to assess anomaly size. The paper also provides some information on the "IR Contrast" software application, the half-max technique, and the IR Contrast feature imaging application, which are based on the models provided in this paper.
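The half-max width estimate is simple enough to state in a few lines. A sketch on a synthetic contrast profile, with an assumed pixel pitch and linear interpolation at the two half-maximum crossings:

```python
import numpy as np

# Half-max edge detection: the width of an indication is taken between
# the points where the contrast profile falls to half its peak value.
# The profile values and 0.5 mm pixel pitch below are synthetic.
profile = np.array([0.02, 0.05, 0.20, 0.55, 0.90, 1.00, 0.85, 0.50, 0.15, 0.04])
x = np.arange(profile.size) * 0.5          # position along the profile, mm

half = profile.max() / 2
above = np.where(profile >= half)[0]       # indices at or above half maximum

def crossing(i_out, i_in):
    """Linear interpolation of the half-max position between two samples."""
    f = (half - profile[i_out]) / (profile[i_in] - profile[i_out])
    return x[i_out] + f * (x[i_in] - x[i_out])

left = crossing(above[0] - 1, above[0])
right = crossing(above[-1] + 1, above[-1])
print("half-max width ~", abs(right - left), "mm")
```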
339. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through the implementation of a pre-processing rescaling step whereby observations are scaled (or nonlinearly transformed) to somehow "match" comparable predictions made by the assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly applied rescaling techniques (e.g., the so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis deals with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
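The two commonly applied (sub-optimal) rescalings named above are each a line of arithmetic. A sketch on synthetic model and observation series that are noisy copies of a common truth; the series and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic satellite/model pair: both are noisy, differently-scaled
# copies of one "truth" signal.
truth = rng.normal(0, 1, 1000)
model = 2.0 * truth + 5.0 + rng.normal(0, 0.3, 1000)
obs = 0.7 * truth + rng.normal(0, 0.4, 1000)

# Variance matching: force obs to the model's mean and standard deviation.
vm = (obs - obs.mean()) / obs.std() * model.std() + model.mean()

# Least-squares regression: rescale obs by its regression slope on the model.
c = np.cov(obs, model)
slope = c[0, 1] / c[0, 0]
lr = model.mean() + slope * (obs - obs.mean())

print(vm.std(), lr.std())   # vm matches the model's spread; lr damps it
```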
340. Comparing Noun Phrasing Techniques for Use with Medical Digital Library Tools

    ERIC Educational Resources Information Center

    Tolle, Kristin M.; Chen, Hsinchun

    2000-01-01

    Describes a study that investigated the use of a natural language processing technique called noun phrasing to determine whether it is a viable technique for medical information retrieval. Evaluates four noun phrase generation tools for their ability to isolate noun phrases from medical journal abstracts, focusing on precision and recall.…

341. Underwater Photo-Elicitation: A New Experiential Marine Education Technique

    ERIC Educational Resources Information Center

    Andrews, Steve; Stocker, Laura; Oechel, Walter

    2018-01-01

    Underwater photo-elicitation is a novel experiential marine education technique that combines direct experience in the marine environment with the use of digital underwater cameras. A program called Show Us Your Ocean! (SUYO!) was created, utilising a mixed methodology (qualitative and quantitative methods) to test the efficacy of this technique.…

342. Q-Technique and Graphics Research

    ERIC Educational Resources Information Center

    Kahle, Roger R.

    Because Q-technique is as appropriate for use with visual and design items as for use with words, it is not stymied by the topics one is likely to encounter in graphics research. In particular, Q-technique is suitable for studying the so-called "congeniality" of typography, for various copytesting usages, and for multivariate graphics research. The…
343. Writing with Basals: A Sentence Combining Approach to Comprehension

    ERIC Educational Resources Information Center

    Reutzel, D. Ray; Merrill, Jimmie D.

    Sentence combining techniques can be used with basal readers to help students develop writing skills. The first technique is addition, characterized by using the connecting word "and" to join two or more base sentences together. The second technique is called "embedding," and is characterized by putting parts of two or more base sentences together…

344. The pearls of using real-world evidence to discover social groups

    NASA Astrophysics Data System (ADS)

    Cardillo, Raymond A.; Salerno, John J.

    2005-03-01

    In previous work, we introduced a new paradigm called Uni-Party Data Community Generation (UDCG) and a new methodology to discover social groups (a.k.a. community models) called Link Discovery based on Correlation Analysis (LDCA). We further advanced this work by experimenting with a corpus of evidence obtained from a Ponzi scheme investigation. That work identified several UDCG algorithms, developed what we called "Importance Measures" to compare the accuracy of the algorithms based on ground truth, and presented a Concept of Operations (CONOPS) that criminal investigators could use to discover social groups. However, that work used a rather small random sample of manually edited documents, because the evidence contained far too many OCR and other extraction errors. Deferring the evidence extraction errors allowed us to continue experimenting with UDCG algorithms, but used only a small fraction of the available evidence. In an attempt to discover techniques that are more practical in the near term, our most recent work focuses on being able to use an entire corpus of real-world evidence to discover social groups. This paper discusses the complications of extracting evidence, suggests a method of performing name resolution, presents a new UDCG algorithm, and discusses our future direction in this area.

345. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

    The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras can be impaired by LR incidence; such applications include real-time video surgeries and facial and ocular recognition. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared, with and without the employment of our preprocessing method. The first is a technique already consolidated in the literature, called the Chang-Tseng threshold. We propose two further automatic detectors, called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors, namely accuracy, precision, exactitude, and root mean square error. The exactitude metric, developed in this work, relies on a manually defined reference model. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.
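A minimal sketch of the pipeline as described: boost the contrast of bright (potentially oversaturated) pixels with a power law, then flag reflections with a single global threshold on luminosity. The gamma and threshold values are invented; the published method tunes both steps considerably.

```python
import numpy as np

# Sketch of a global-threshold luminous-reflection detector with a
# contrast-enhancement preprocessing step (parameter values are made up).
def detect_reflections(rgb, gamma=3.0, threshold=0.85):
    img = rgb.astype(float) / 255.0
    # Standard luma weights for the luminosity channel
    luminosity = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    enhanced = luminosity ** gamma       # expands contrast at the bright end
    return enhanced >= threshold         # boolean mask of candidate LR pixels

frame = np.random.default_rng(5).integers(0, 256, (120, 160, 3), dtype=np.uint8)
mask = detect_reflections(frame)
print(mask.mean(), "fraction of pixels flagged")
```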
346. A short review of variants calling for single-cell-sequencing data with applications

    PubMed

    Wei, Zhuohui; Shu, Chang; Zhang, Changsheng; Huang, Jingying; Cai, Hongmin

    2017-11-01

    The field of single-cell sequencing is rapidly expanding, and many techniques have been developed in the past decade. With this technology, biologists can study not only the heterogeneity between two adjacent cells in the same tissue or organ, but also the evolutionary relationships and degenerative processes in a single cell. Calling variants is the main purpose in analyzing single-cell sequencing (SCS) data. Currently, some popular methods developed for bulk-cell-sequencing data analysis are applied directly to SCS data. However, SCS requires an extra step of genome amplification to accumulate enough material for sequencing, and this amplification yields large biases, which raises challenges for using the bulk-cell-sequencing methods. This paper aims to bridge that gap, providing guidance for the development of specialized analysis methods as well as for using currently available tools on SCS data. We first introduce two popular genome amplification methods and compare their capabilities. We then introduce a few popular models for calling single-nucleotide polymorphisms and copy-number variations. Finally, breakthrough applications of SCS are summarized to demonstrate its potential for researching cell evolution.
  346. A short review of variants calling for single-cell-sequencing data with applications.

    PubMed

    Wei, Zhuohui; Shu, Chang; Zhang, Changsheng; Huang, Jingying; Cai, Hongmin

    2017-11-01

    The field of single-cell sequencing is rapidly expanding, and many techniques have been developed in the past decade. With this technology, biologists can study not only the heterogeneity between two adjacent cells in the same tissue or organ, but also the evolutionary relationships and degenerative processes within a single cell. Calling variants is the main purpose of analyzing single-cell sequencing (SCS) data. Currently, some popular methods developed for bulk-cell-sequencing data analysis are applied directly to SCS data. However, SCS requires an extra genome-amplification step to accumulate enough material for sequencing. The amplification introduces large biases and thus raises challenges for the bulk-cell-sequencing methods. This paper aims to bridge that gap, providing guidance both for the development of specialized analysis methods and for the use of currently available tools on SCS data. We first introduce two popular genome-amplification methods and compare their capabilities. We then introduce several popular models for calling single-nucleotide polymorphisms and copy-number variations. Finally, breakthrough applications of SCS are summarized to demonstrate its potential for research on cell evolution.

  347. Early Breast Cancer Diagnosis Using Microwave Imaging via Space-Frequency Algorithm

    NASA Astrophysics Data System (ADS)

    Vemulapalli, Spandana

    Conventional breast cancer detection methods have limitations ranging from ionizing radiation and low specificity to high cost. These limitations make way for a suitable alternative screening technique: microwave imaging. The discernible differences between benign, malignant, and healthy breast tissues, and the absence of harmful ionizing radiation, make microwave imaging a feasible breast cancer detection technique. Earlier studies have shown that the electrical properties of healthy and malignant tissues vary as a function of frequency, which motivates a high bandwidth requirement. Ultrawideband, wideband, and narrowband arrays were designed, simulated, and optimized for high (44%), medium (33%), and low (7%) bandwidths, respectively, using the electromagnetic software FEKO. These arrays were used to illuminate a breast model (phantom), and the backscattered signals were obtained in the near field for each case. The Microwave Imaging via Space-Time (MIST) beamforming algorithm, formulated in the frequency domain, was then applied to these near-field monostatic frequency-response signals to reconstruct an image of the breast model. The main purpose of this investigation is to assess the impact of bandwidth and to implement a novel imaging technique for use in the early detection of breast cancer. Earlier studies applied the MIST imaging algorithm to time-domain signals via a frequency-domain beamformer; here, the performance of the imaging algorithm on frequency-response signals is evaluated in the frequency domain, and the energy profile of the breast in the spatial domain is created via the frequency-domain Parseval's theorem. The beamformer weights from the MIST algorithm (not including the effect of the skin) were calculated for the ultrawideband, wideband, and narrowband arrays, respectively. Quality metrics such as dynamic range and radiometric resolution are also evaluated for all three types of arrays.
  348. Diminishing-cues retrieval practice: A memory-enhancing technique that works when regular testing doesn't.

    PubMed

    Fiechter, Joshua L; Benjamin, Aaron S

    2017-08-28

    Retrieval practice has been shown to be a highly effective tool for enhancing memory, a fact that has led to major changes in educational practice and technology. However, when initial learning is poor, initial retrieval practice is unlikely to be successful, and the long-term benefits of retrieval practice are compromised or nonexistent. Here, we investigate the benefit of a scaffolded retrieval technique called diminishing-cues retrieval practice (Finley, Benjamin, Hays, Bjork, & Kornell, Journal of Memory and Language, 64, 289-298, 2011). Under learning conditions that favored a strong testing effect, diminishing cues and standard retrieval practice both enhanced memory performance relative to restudy. Critically, under learning conditions where standard retrieval practice was not helpful, diminishing cues enhanced memory performance substantially. These experiments demonstrate that diminishing-cues retrieval practice can widen the range of conditions under which testing benefits memory, and so can serve as a model for the broader application of testing-based techniques for enhancing learning.

  349. Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology

    NASA Astrophysics Data System (ADS)

    Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio

    2005-06-01

    Trends in multimedia consumer electronics and digital video and audio aim to reach users through low-cost mobile devices connected to data-broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting (DAB) network, which provides CD-quality audio transmission together with robustness and efficiency techniques to allow good reception quality in motion conditions. This paper focuses on the system-level evaluation of different architectural options for enabling low-bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design-space-exploration techniques are applied to the ASP MPEG-4 decoder in order to find the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is used for modelling, exploration, and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade-offs and quantitative data derived from this analysis are also presented.
  350. Learning directed acyclic graphs from large-scale genomics data.

    PubMed

    Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos

    2017-09-20

    In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions, from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that best matches the DK measurements. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numerical simulations that the GENIE program and the GI-profile-extended GENIE (GI-GENIE) program clearly outperform conventional techniques, and we present real-data results for our proposed sequential scalability technique.

  351. GRAVTool, Advances on the Package to Compute Geoid Model path by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data, or a combination of them. Among the techniques used to compute a precise geoid model, remove-compute-restore (RCR) has been widely applied. It considers short, medium, and long wavelengths derived from altitude data provided by digital terrain models (DTM), terrestrial gravity data, and global geopotential models (GGM), respectively. Applying this technique requires procedures that compute gravity anomalies and geoid models by integrating the different wavelengths, and that adjust these models to a local vertical datum. This research presents advances on the package called GRAVTool, which computes geoid models by the RCR technique following Helmert's condensation method, and its application to a study area. The studied area comprises the Federal District of Brazil (6,000 km²), with wavy relief and heights varying from 600 m to 1,340 m, located between coordinates 48.25°W, 15.45°S and 47.33°W, 16.06°S. The results of the numerical example show a geoid model computed by the GRAVTool package, after analysis of the density, DTM, and GGM values, that is well suited to the reference values used in the study area. The accuracy of the computed model (σ = ±0.058 m, RMS = 0.067 m, maximum = 0.124 m, minimum = -0.155 m), using a density value of 2.702 ± 0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second, and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ±0.073 m) of 26 randomly spaced points where the geoid was computed by geometric levelling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.076 m, RMS = 0.098 m, maximum = 0.320 m, minimum = -0.061 m).
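For readers unfamiliar with the remove-compute-restore decomposition that GRAVTool implements, its standard textbook structure is sketched below. This is the generic formulation, with Stokes's integral in the compute step, not necessarily the package's exact equations.

```latex
% Remove: subtract the long wavelengths (GGM) and the terrain effect
% from the observed gravity anomalies
\Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}} - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{terrain}}

% Compute: residual co-geoid from Stokes's integral
N_{\mathrm{res}} = \frac{R}{4\pi\gamma}\iint_{\sigma}\Delta g_{\mathrm{res}}\, S(\psi)\,\mathrm{d}\sigma

% Restore: add back the GGM geoid and the indirect effect of
% Helmert's condensation
N = N_{\mathrm{GGM}} + N_{\mathrm{res}} + N_{\mathrm{ind}}
```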
  352. An implementation and performance measurement of the progressive retry technique

    NASA Technical Reports Server (NTRS)

    Suri, Gaurav; Huang, Yennun; Wang, Yi-Min; Fuchs, W. Kent; Kintala, Chandra

    1995-01-01

    This paper describes a recovery technique called progressive retry for bypassing software faults in message-passing applications. The technique is implemented as reusable modules to provide application-level software fault tolerance. The paper describes the implementation of the technique and presents results from applying progressive retry to two telecommunications systems. The results show that the technique helps reduce the total recovery time for message-passing applications.

  353. Perspective on Kraken Mare Shores

    NASA Image and Video Library

    2015-02-12

    This Cassini Synthetic Aperture Radar (SAR) image is presented as a perspective view and shows a landscape near the eastern shoreline of Kraken Mare, a hydrocarbon sea in Titan's north polar region. The image was processed using a noise-handling technique, called despeckling, that results in clearer views that are easier for researchers to interpret. The technique is also useful for producing altimetry data and 3-D views called digital elevation maps. Scientists have used a technique called radargrammetry to determine the altitude of surface features in this view at a resolution of approximately half a mile (1 kilometer). The altimetry reveals that the area is smooth overall, with a maximum relief of 0.75 mile (1.2 kilometers). The topography also shows that all observed channels flow downhill. The presence of what scientists call "knickpoints" -- locations on a river where a sharp change in slope occurs -- might indicate stratification in the bedrock, erosion mechanisms at work, or a particular way the surface responds to runoff events, such as floods following large storms. One such knickpoint is visible just above the lower left corner, where an area of bright slopes is seen. The image was obtained during a flyby of Titan on April 10, 2007. A more traditional radar image of this area on Titan is seen in PIA19046. http://photojournal.jpl.nasa.gov/catalog/PIA19051
  354. Gene Profiling Technique to Accelerate Stem Cell Therapies for Eye Diseases

    MedlinePlus

    … like RPE. They also use a technique called quantitative RT-PCR to measure the expression of genes … higher in iPS cells than mature RPE. But quantitative RT-PCR only permits the simultaneous measurement of …

  355. The effects of ionic strength and organic matter on virus inactivation at low temperatures: general likelihood uncertainty estimation (GLUE) as an alternative to least-squares parameter optimization for the fitting of virus inactivation models

    NASA Astrophysics Data System (ADS)

    Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin

    2017-06-01

    This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC), using static batch inactivation experiments at 4°C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques, and (2) a Monte-Carlo-based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty, demonstrating that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting virus inactivation models. Results showed a slight increase in constant inactivation rates with increasing DOC concentration, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with high IS and low DOC was the only one in which MS2 inactivation appeared time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggests that inactivation time series longer than 2 months are needed in order to draw firm conclusions regarding the time-dependency of MS2 inactivation at 4°C under these experimental conditions.
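The GLUE procedure referred to above is straightforward to express in code. A minimal Monte-Carlo sketch follows, assuming a user-supplied inactivation model (e.g. constant-rate decay, C(t) = C0·exp(-kt)) and a Nash-Sutcliffe-style informal likelihood; the function names, behavioural threshold, and sample count are illustrative choices, not the study's settings.

```python
import numpy as np

def glue(model, t, observed, prior_sampler, n_samples=10000, threshold=0.7):
    """Minimal GLUE: sample parameter sets from the prior, score each with
    an informal likelihood (Nash-Sutcliffe efficiency here), keep the
    'behavioural' sets, and weight them by likelihood."""
    kept, weights = [], []
    var_obs = np.var(observed)
    for _ in range(n_samples):
        theta = prior_sampler()              # e.g. uniform draw of the rate k
        sim = model(t, theta)                # e.g. C0 * np.exp(-theta * t)
        efficiency = 1.0 - np.mean((sim - observed) ** 2) / var_obs
        if efficiency > threshold:           # behavioural parameter set
            kept.append(theta)
            weights.append(efficiency)
    weights = np.asarray(weights, dtype=float)
    return np.asarray(kept), weights / weights.sum()
```

Prediction bounds then follow from likelihood-weighted quantiles of the behavioural simulations.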
  356. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    PubMed

    Stránský, V; Thinová, L

    2017-11-01

    In 2010, continual radon measurement was established at the Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series for the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal integrated autoregressive moving-average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values, and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor.
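A model of this family is easy to reproduce with standard time-series tooling. A hedged sketch using the SARIMAX class from statsmodels; the file name, column names, sampling interval, and the seasonal term are illustrative assumptions, since the abstract does not specify them.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical input: hourly radon concentrations plus lagged atmospheric
# covariates; the file and column names here are placeholders.
df = pd.read_csv("radon_mladec.csv", parse_dates=["time"], index_col="time")
y = df["radon"]
X = df[["pressure_lagged", "temperature_lagged"]]

# ARIMA(5,1,3) with exogenous regressors, mirroring the reported
# regARIMA(5,1,3); the daily seasonal term is an assumed example.
model = SARIMAX(y, exog=X, order=(5, 1, 3), seasonal_order=(1, 0, 1, 24))
result = model.fit(disp=False)

# The residuals are what the authors compare against seismic records.
residuals = result.resid
print(result.summary())
```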
  357. Application of structured analysis to a telerobotic system

    NASA Technical Reports Server (NTRS)

    Dashman, Eric; Mclin, David; Harrison, F. W.; Soloway, Donald; Young, Steven

    1990-01-01

    The analysis and evaluation of a multiple-arm telerobotic research and demonstration system developed by the NASA Intelligent Systems Research Laboratory (ISRL) is described. Structured analysis techniques were used to develop a detailed requirements model of an existing telerobotic testbed. Performance models generated during this process were used to further evaluate the total system. A commercial CASE tool called Teamwork was used to carry out the structured analysis and to develop the functional requirements model. A structured analysis and design process using the ISRL telerobotic system as a model is described. Evaluation of this system focused on the identification of bottlenecks in the implementation. The results demonstrate that the use of structured methods and analysis tools can give useful performance information early in a design cycle. This information can be used to ensure that the proposed system meets its design requirements before it is built.

  358. Development of an unsteady aerodynamics model to improve correlation of computed blade stresses with test data

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1985-01-01

    An operational rotor aeroelastic analysis that correctly predicts helicopter vibration levels is used to test various unsteady aerodynamics models, with the objective of improving the correlation between test and theory. This analysis, called the Rotor Aeroelastic Vibration (RAVIB) computer program, is based on a frequency-domain forced-response analysis that uses transfer-matrix techniques to model helicopter/rotor dynamic systems of varying degrees of complexity. Results for the AH-1G helicopter rotor were compared with flight-test data during high-speed operation; they indicated reasonably good correlation for the beamwise and chordwise blade bending moments, but poor correlation for the torsional moments. As a result, a new aerodynamics model based on unstalled synthesized data derived from large-amplitude oscillating-airfoil experiments was developed and tested.

  359. Modelling proteins' hidden conformations to predict antibiotic resistance

    NASA Astrophysics Data System (ADS)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.
  360. Modeling structural change in spatial system dynamics: A Daisyworld example.

    PubMed

    Neuwirth, C; Peck, A; Simonović, S P

    2015-03-01

    System dynamics (SD) is an effective approach for helping reveal the temporal behavior of complex systems. Although there have been recent developments in expanding SD to include systems' spatial dependencies, most applications have been restricted to the simulation of diffusion processes; this is especially true for models of structural change (e.g., LULC modeling). To address this shortcoming, a Python program is proposed to tightly couple SD software to a geographic information system (GIS). The approach provides the capacity to handle bidirectional and synchronized interactions between SD and GIS operations. In order to illustrate the concept and the techniques proposed for simulating structural changes, a fictitious environment called Daisyworld has been recreated in a spatial system dynamics (SSD) environment. The comparison of spatial and non-spatial simulations emphasizes the importance of considering spatio-temporal feedbacks. Finally, practical applications of structural change models in agriculture and disaster management are proposed.

  361. Kuwaiti oil fires—Modeling revisited

    NASA Astrophysics Data System (ADS)

    Husain, Tahir

    Just after the invasion of Kuwait, scientists began predicting the environmental disaster that would follow from the Iraqi regime's threat to blow up wells in the Kuwaiti oil fields. The findings, with speculations ranging from a nuclear winter to super-acid rain and global warming, were presented at the World Climate Conference in Geneva in November 1990. Just before the war erupted in the middle of January 1991, a conference was called in London to discuss the potential risks to human life and ecological systems in the event of a blowout of the oil fields. Scientists using modeling techniques raised speculations about a global impact, which were discounted at a later stage. This paper presents an overview of selected models used to assess the local, regional, and global impacts. The paper also highlights model and data limitations and suggests future research directions for responding more effectively in emergency situations.
  362. Cognitive Abilities Explain Wording Effects in the Rosenberg Self-Esteem Scale.

    PubMed

    Gnambs, Timo; Schroeders, Ulrich

    2017-12-01

    There is consensus that the 10 items of the Rosenberg Self-Esteem Scale (RSES) reflect wording effects resulting from positively and negatively keyed items. The present study examined the effects of cognitive abilities on the factor structure of the RSES with a novel nonparametric latent variable technique called local structural equation modeling. In a nationally representative German large-scale assessment of 12,437 students, competing measurement models for the RSES were compared: a bifactor model with a common factor and a specific factor for all negatively worded items had the best fit. Local structural equation models showed that the unidimensionality of the scale increased with higher levels of reading competence and reasoning, while the proportion of variance attributed to the negatively keyed items declined. Wording effects on the factor structure of the RSES seem to represent a response-style artifact associated with cognitive abilities.

  363. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study was conducted to evaluate the performance of regularized PLSc relative to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
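The abstract names the remedy but not its form. In generic terms, a ridge-type regularization replaces the inverse of the (consistent) latent-variable correlation matrix with a shrunken version when solving for the path coefficients of one endogenous construct; the paper's exact estimator may differ from this sketch.

```latex
% Ridge-type estimate of path coefficients for one endogenous construct:
%   R_xx : consistent correlations among the exogenous latent variables
%   r_xy : their consistent correlations with the endogenous variable
\hat{\boldsymbol{\beta}}_{\mathrm{ridge}}
  = \left(\mathbf{R}_{xx} + \lambda\mathbf{I}\right)^{-1}\mathbf{r}_{xy},
  \qquad \lambda > 0
```

Larger λ trades a little bias for stability when R_xx is near-singular, which is precisely the multicollinear regime described above.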
  364. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for reconstructing unknown tracer emissions from measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials at Dugway Proving Ground, Utah.

  365. Polyelectrolyte multilayers: preparation and applications

    NASA Astrophysics Data System (ADS)

    Izumrudov, V. A.; Mussabayeva, B. Kh; Murzagulova, K. B.

    2018-02-01

    The review concerns the results of studies on the synthesis of polyelectrolyte coatings on charged surfaces. These coatings represent nanostructured systems with a clearly defined tendency to self-assembly and self-adjustment, which is of particular interest for materials science, biomedicine, and pharmacology. A breakthrough in this area of knowledge is due to the development and introduction of a new technique, so-called layer-by-layer (LbL) deposition of nanofilms. The technique is very simple: multilayers are formed by alternating treatment of a charged substrate of arbitrary shape with water-salt solutions of oppositely charged polyelectrolytes. Nevertheless, efficient use of the LbL method to fabricate nanofilms requires meeting certain conditions and limitations that were revealed in the course of research on model systems. Prospects for applications of polyelectrolyte layers in various fields are discussed. The bibliography includes 58 references.
  366. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood-illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood-illumination microscopes and to produce pseudo-confocal images with significantly improved image quality. In this work, we adopted the HiLo technique for a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively using spatial spectral analysis. PMID:25136486

  367. Improving high resolution retinal image quality using speckle illumination HiLo imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-08-01

    Retinal image quality from flood-illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood-illumination microscopes and to produce pseudo-confocal images with significantly improved image quality. In this work, we adopted the HiLo technique for a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively using spatial spectral analysis.
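In outline, HiLo fuses two exposures per frame: low spatial frequencies are taken from the structured (speckle) image, where local contrast marks in-focus content, and high frequencies from the uniform image. A loose Python sketch of that fusion, assuming pre-registered floating-point images; the cutoff, gain, and normalisation are illustrative, and the published HiLo algorithm differs in detail.

```python
import numpy as np
from scipy import ndimage

def hilo(uniform_img, speckle_img, sigma=4.0, eta=1.0):
    """Loose HiLo fusion: low frequencies from the speckle exposure
    (local contrast flags in-focus content), high frequencies from the
    uniform exposure; sigma sets the crossover, eta balances the bands."""
    diff = speckle_img - uniform_img
    contrast = ndimage.gaussian_filter(np.abs(diff), sigma)    # in-focus weight
    lo = ndimage.gaussian_filter(contrast * uniform_img, sigma)
    hi = uniform_img - ndimage.gaussian_filter(uniform_img, sigma)
    lo /= max(lo.max(), 1e-12)                                 # crude normalisation
    return lo + eta * hi / max(np.abs(hi).max(), 1e-12)
```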
  368. On the electromagnetic scattering from infinite rectangular grids with finite conductivity

    NASA Technical Reports Server (NTRS)

    Christodoulou, C. G.; Kauffman, J. F.

    1986-01-01

    A variety of methods can be used to construct solutions to the problem of mesh scattering; however, each of these methods has certain drawbacks. The present paper is concerned with a new technique that is valid for all spacings. The new method, called the fast Fourier transform-conjugate gradient method (FFT-CGM), is an iterative technique that employs the conjugate gradient method to improve upon each iterate, utilizing the fast Fourier transform. The FFT-CGM provides a new, accurate model that can be extended and applied to the more difficult problem of woven mesh surfaces. The formulation of the FFT-conjugate gradient method for aperture fields and current densities of a planar periodic structure is considered, along with singular operators, the formulation of the FFT-CG method for thin wires with finite conductivity, and reflection coefficients.
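The essence of FFT-CGM is a conjugate-gradient iteration whose operator product is evaluated spectrally, so each iteration costs O(n log n) instead of O(n²). A simplified sketch for a real, circulant, positive-definite operator; the actual electromagnetic problem involves complex, non-Hermitian operators and correspondingly adapted CG variants.

```python
import numpy as np

def fft_cg(kernel_hat, b, iters=200, tol=1e-8):
    """Conjugate gradients where A @ x is a circular convolution applied
    via the FFT. kernel_hat is the operator's spectrum, assumed real and
    positive here so that A is symmetric positive definite."""
    apply_A = lambda v: np.fft.ifft(kernel_hat * np.fft.fft(v)).real
    x = np.zeros_like(b, dtype=float)
    r = b - apply_A(x)                 # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = apply_A(p)                # O(n log n) operator product
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```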
  369. GTAG: architecture and design of miniature transmitter with position logging for radio telemetry

    NASA Astrophysics Data System (ADS)

    Řeřucha, Šimon; Bartonička, Tomáš; Jedlička, Petr

    2011-10-01

    Radio telemetry is a well-known technique used in zoological research to explore the behaviour of animal species. The use of GPS for frequent and precise position recording offers an interesting possibility for further enhancement of this method. We present our proposal for the architecture and design concepts of a telemetry transmitter with a GPS module, called GTAG, suited for the study of the Egyptian fruit bat (Rousettus aegyptiacus). The model group we study sets particular constraints, especially the weight limit (9 g) and the exclusion of any recharging of the power source. We discuss the aspects of the physical realization and the energy-consumption issues. We have developed a reference implementation that has already been deployed during telemetry sessions; we evaluate this experience and compare the estimated performance of our device against real data.

  370. Large eddy simulation of the tidal power plant deep green using the actuator line method

    NASA Astrophysics Data System (ADS)

    Fredriksson, S. T.; Broström, G.; Jansson, M.; Nilsson, H.; Bergqvist, B.

    2017-12-01

    Tidal energy has the potential to provide a substantial part of sustainable electric power generation. The tidal power plant developed by Minesto, called Deep Green, is a novel technology that uses a 'flying' kite with an attached turbine, moving at a speed several times higher than the mean flow. Multiple Deep Green power plants will eventually form arrays, which requires knowledge of both the flow interactions between individual devices and the array's influence on the surrounding environment. The present study uses large eddy simulation (LES) and an actuator line model (ALM) to analyze the oscillating turbulent boundary-layer flow in tidal currents with and without a Deep Green power plant. We present the modeling technique and preliminary results.

  371. Learning Extended Finite State Machines

    NASA Technical Reports Server (NTRS)

    Cassel, Sofia; Howar, Falk; Jonsson, Bengt; Steffen, Bernhard

    2014-01-01

    We present an active learning algorithm for inferring extended finite state machines (EFSMs), combining data flow and control behavior. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions on the properties that the symbolic constraints provided by a tree query must, in general, have in order to be usable in our learning model. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.

  372. Principle of the electrically induced Transient Current Technique

    NASA Astrophysics Data System (ADS)

    Bronuzzi, J.; Moll, M.; Bouvet, D.; Mapelli, A.; Sallese, J. M.

    2018-05-01

    In the field of detector development for high energy physics, the so-called transient current technique (TCT) is used to characterize the electric field profile and the charge trapping inside silicon radiation detectors, in which particles or photons create electron-hole pairs in the bulk of a semiconductor device such as a PiN diode. In the standard approach, the TCT signal originates from free carriers generated close to the surface of a silicon detector by short pulses of light or by alpha particles. This work proposes a new principle of charge injection by means of lateral PN junctions implemented in one of the detector electrodes, called the electrical TCT (el-TCT). This technique is fully compatible with CMOS technology and therefore opens new perspectives for assessing the performance of radiation detectors.
  373. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or frequency domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a crude, direct (model-based) approximation of the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration-time curves (CTCs). The second is a fast, accurate deconvolution method we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
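The abstract does not spell out the AFF/ASSF filters, but the frequency-domain deconvolution they accelerate has a compact generic form: divide the tissue-curve spectrum by the arterial-input-function spectrum under a stabilizing spectral filter. A Wiener-style sketch follows; the filter choice and scaling are our assumptions, not the paper's filters.

```python
import numpy as np

def fdd_deconvolve(ctc, aif, dt, alpha=0.2):
    """Wiener-style frequency-domain deconvolution: recover the
    flow-scaled residue function F*R(t) from a tissue concentration-time
    curve (ctc) and the arterial input function (aif)."""
    n = 2 * len(ctc)                              # zero-pad against wrap-around
    C = np.fft.rfft(ctc, n)
    A = np.fft.rfft(aif, n)
    eps = alpha * np.abs(A).max()                 # spectral filter strength
    FR = np.fft.irfft(C * np.conj(A) / (np.abs(A) ** 2 + eps ** 2), n)
    return FR[:len(ctc)] / dt                     # discrete-convolution scaling

# Conventional summary parameters from the flow-scaled residue function:
#   CBF ~ FR.max();  CBV ~ FR.sum() * dt;  MTT ~ CBV / CBF
```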
  374. Chaos as an intermittently forced linear system.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan

    2017-05-30

    Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. The analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversals and measles outbreaks. In each case, the forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase-space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience and finance calls for effective strategies that mine data to reveal underlying dynamics; here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
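The HAVOK pipeline described above is short enough to sketch directly: delay-embed the measurement into a Hankel matrix, take its SVD, and regress the time derivatives of the first r−1 delay coordinates on all r, treating the r-th as intermittent forcing. A schematic numpy sketch; the window length, rank, and derivative scheme are illustrative choices, not the authors' settings.

```python
import numpy as np

def havok(x, dt, q=100, r=15):
    """Schematic HAVOK decomposition of a scalar time series x sampled
    every dt: Hankel matrix -> SVD -> linear model with the r-th delay
    coordinate treated as forcing."""
    # Delay-embed x into a Hankel matrix (columns = shifted windows).
    H = np.column_stack([x[i:i + q] for i in range(len(x) - q)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt[:r].T                           # leading delay coordinates v_1..v_r
    dV = np.gradient(V, dt, axis=0)        # crude time derivative
    # Fit d/dt v_{1..r-1} = A v_{1..r-1} + B v_r by least squares.
    coeffs = np.linalg.lstsq(V, dV[:, :r - 1], rcond=None)[0].T
    A, B = coeffs[:, :r - 1], coeffs[:, r - 1]
    return A, B, V
```

Large excursions of the last coordinate V[:, r-1] are the intermittent-forcing events the abstract links to switching and bursting.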
  375. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    NASA Astrophysics Data System (ADS)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied; an important aspect thereby is the robustness of the obtained reduced model. In this study, reduced-order modelling (ROM) of the micro-scale boundary value problem is investigated for the geometrically nonlinear case with hyperelastic materials. This involves proper orthogonal decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Three hyper-reduction methods, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or gappy-POD-based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
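The POD step that all three compared hyper-reduction methods build on can be stated compactly: collect solution snapshots, extract a truncated left-singular basis, and Galerkin-project the system. A minimal sketch under that standard formulation; hyper-reduction itself (e.g. DEIM or GNAT) is omitted, and the energy criterion is an illustrative choice.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Build a POD basis from a matrix whose columns are solution
    snapshots of the micro-scale problem."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cumulative, energy)) + 1   # smallest basis with
    return U[:, :k]                                    # the requested energy

def galerkin_solve(K, f, Phi):
    """Galerkin projection of a linear(ized) system K u = f onto Phi."""
    u_r = np.linalg.solve(Phi.T @ K @ Phi, Phi.T @ f)  # small k x k solve
    return Phi @ u_r                                   # lift back to full space
```

The symmetry caveat in the abstract arises exactly here: if the projected tangent Phi.T @ K @ Phi loses symmetry under interpolation-based approximations, the Galerkin choice of test space is no longer optimal.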
  376. The ALADIN System and its canonical model configurations AROME CY41T1 and ALARO CY40T1

    NASA Astrophysics Data System (ADS)

    Termonia, Piet; Fischer, Claude; Bazile, Eric; Bouyssel, François; Brožková, Radmila; Bénard, Pierre; Bochenek, Bogdan; Degrauwe, Daan; Derková, Mariá; El Khatib, Ryad; Hamdi, Rafiq; Mašek, Ján; Pottier, Patricia; Pristov, Neva; Seity, Yann; Smolíková, Petra; Španiel, Oldřich; Tudor, Martina; Wang, Yong; Wittmann, Christoph; Joly, Alain

    2018-01-01

    The ALADIN System is a numerical weather prediction (NWP) system developed by the international ALADIN consortium for operational weather forecasting and research purposes. It is based on a code that is shared with the global model IFS of the ECMWF and the ARPEGE model of Météo-France. Today, this system can be used to provide a multitude of high-resolution limited-area model (LAM) configurations. A few configurations are thoroughly validated and prepared for operational weather forecasting in the 16 partner institutes of the consortium; these are called the ALADIN canonical model configurations (CMCs). There are currently three CMCs: the ALADIN baseline CMC, the AROME CMC, and the ALARO CMC. Other configurations are possible for research, such as process studies and climate simulations. The purpose of this paper is (i) to define the ALADIN System in relation to its global counterparts IFS and ARPEGE, (ii) to explain the notion of the CMCs, (iii) to document their most recent versions, and (iv) to illustrate the process of validating and porting these configurations to the operational forecast suites of the partner institutes of the ALADIN consortium. This paper is restricted to the forecast model only; data assimilation and postprocessing techniques are part of the ALADIN System but are not discussed here.

  377. Teaching Business Management to Engineers: The Impact of Interactive Lectures

    ERIC Educational Resources Information Center

    Rambocas, Meena; Sastry, Musti K. S.

    2017-01-01

    Some education specialists are challenging the use of traditional strategies in classrooms and are calling for the use of contemporary teaching and learning techniques. In response to these calls, many field experiments comparing different teaching and learning strategies have been conducted. However, to date, little is known about the outcomes of…

  378. 78 FR 69705 - 60-Day Notice of Proposed Information Collection: Mortgagee's Application for Partial Settlement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    … calling the toll-free Federal Relay Service at (800) 877-8339. FOR FURTHER INFORMATION CONTACT: Steve… through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available… techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD…

  379. Influence of atmospheric properties on detection of wood-warbler nocturnal flight calls

    NASA Astrophysics Data System (ADS)

    Horton, Kyle G.; Stepanian, Phillip M.; Wainwright, Charlotte E.; Tegeler, Amy K.

    2015-10-01

    Avian migration monitoring can take many forms, but monitoring the active nocturnal migration of land birds is limited to a few techniques. Avian nocturnal flight calls are currently the only method for describing migrant composition at the species level. However, as this method develops, more information is needed to understand the sources of variation in call detection, and few studies examine how detection probabilities differ under varying atmospheric conditions. We use nocturnal flight-call recordings from captive individuals to explore the dependence of flight-call detection on atmospheric temperature and humidity. Height, or distance from origin, had the largest influence on call detection, while temperature and humidity also influenced detectability at higher altitudes. Because flight-call detection varies with both atmospheric conditions and flight height, improved monitoring across time and space will require correction for these factors to generate standardized metrics of songbird migration.
  380. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions, including the Mars rover/sample return mission. Before any other mission objectives can be met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors, because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem-solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called the Localization Control and Logic Expert (LOCALE), which serves as a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to the high-level control aspects of the localization problem.
  382. An Electrochemical Impedance Spectroscopy System for Monitoring Pineapple Waste Saccharification

    PubMed Central

    Conesa, Claudia; Ibáñez Civera, Javier; Seguí, Lucía; Fito, Pedro; Laguarda-Miró, Nicolás

    2016-01-01

    Electrochemical impedance spectroscopy (EIS) has been used to monitor the enzymatic hydrolysis of pineapple waste. The system employed consists of a device called the Advanced Voltammetry, Impedance Spectroscopy & Potentiometry Analyzer (AVISPA), equipped with a specific software application and a stainless-steel double-needle electrode. EIS measurements were conducted at different saccharification time intervals: 0, 0.75, 1.5, 6, 12 and 24 h. Partial least squares (PLS) regression was used to model the relationship between the EIS measurements and sugar determination by HPAEC-PAD, while artificial neural networks (a multilayer feed-forward architecture with a quick-propagation training algorithm and logistic-type transfer functions) gave the best results as predictive models for glucose, fructose, sucrose and total sugars. Coefficients of determination (R2) and root mean square errors of prediction (RMSEP) were R2 > 0.944 and RMSEP < 1.782 for PLS, and R2 > 0.973 and RMSEP < 0.486 for the artificial neural networks (ANNs), respectively. A combination of an EIS-based technique and ANN models is therefore suggested as a promising alternative to traditional laboratory techniques for monitoring the pineapple waste saccharification step. PMID:26861317
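    As a minimal illustration of the PLS step described above, the following sketch fits a partial-least-squares model mapping impedance spectra to a sugar concentration and reports R2 and RMSEP. The synthetic spectra, the two latent components, and the concentration scale are assumptions for illustration, not the study's data or settings.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import mean_squared_error, r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Hypothetical data: 60 samples x 40 impedance magnitudes across frequencies,
      # with sugar concentration encoded in two latent spectral directions.
      latent = rng.normal(size=(60, 2))
      X = latent @ rng.normal(size=(2, 40)) + 0.05 * rng.normal(size=(60, 40))
      y = latent @ np.array([3.0, -1.5]) + 10.0      # total sugars (g/L), synthetic

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
      y_hat = pls.predict(X_te).ravel()

      print("R2    =", r2_score(y_te, y_hat))
      print("RMSEP =", mean_squared_error(y_te, y_hat) ** 0.5)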
  383. Characterising Hot-Jupiters' atmospheres with observations and modelling

    NASA Astrophysics Data System (ADS)

    Tinetti, G.

    2007-08-01

    Exoplanet transit photometry and spectroscopy are currently the best techniques for probing the atmospheres of extrasolar worlds. The best targets for these methods are the planets that orbit very close to their parent star, both because their probability of transiting is higher and because their atmospheres are warmer and more expanded, hence easier to probe. These characteristics are met by the so-called Hot-Jupiters, massive low-density gaseous planets orbiting very close in. Phase curves allow the change in brightness of the combined light of the planet-star system to be observed, also for non-transiting exoplanets. We review here the most crucial observations performed with the Hubble and Spitzer Space Telescopes at multiple wavelengths, and the most successful models proposed in the literature to plan and interpret those observations. In particular, we focus on the most recent observations and modelling claiming the detection of water vapour in the atmospheres of these planets. Further into the future, the James Webb Space Telescope will allow the atmospheres of smaller planets to be probed with the same techniques. We briefly report the results expected for hot and warm Neptunes and transiting terrestrial planets.

  384. Queries over Unstructured Data: Probabilistic Methods to the Rescue

    NASA Astrophysics Data System (ADS)

    Sarawagi, Sunita

    Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge for modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution with query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various top-k count queries on imprecise duplicates.

  385. SLMRACE: a noise-free RACE implementation with reduced computational time

    NASA Astrophysics Data System (ADS)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random-spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease both the computational time and the noise generation. The resulting implementation is called smart-light-memory-RACE (SLMRACE).
  386. A data analysis expert system for large established distributed databases

    NASA Technical Reports Server (NTRS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  387. Deep classification hashing for person re-identification

    NASA Astrophysics Data System (ADS)

    Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang

    2018-04-01

    With the spread of public surveillance, person re-identification becomes more and more important. Large-scale databases call for efficient computation and storage, and hashing is one of the most important techniques for meeting these demands. In this paper, we propose a new deep classification hashing network, created by introducing a new binary appropriation layer into traditional ImageNet pre-trained CNN models. The network outputs binary-like features that can easily be quantized into binary hash codes for Hamming-distance similarity comparison. Experiments show that our deep hashing method can outperform state-of-the-art methods on the public CUHK03 and Market1501 datasets.
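    The hashing mechanics implied here, thresholding near-binary network outputs into packed hash codes and comparing them by Hamming distance, can be sketched as follows; the 48-bit code length and the random stand-in "features" are assumptions replacing the CNN outputs.

      import numpy as np

      def to_hash(features):
          """Quantize real-valued, binary-like network outputs into packed hash codes."""
          bits = (features > 0).astype(np.uint8)       # sign threshold
          return np.packbits(bits, axis=1)             # one row = one image's code

      def hamming(code_a, code_b):
          """Hamming distance between two packed codes via XOR + popcount."""
          return int(np.unpackbits(np.bitwise_xor(code_a, code_b)).sum())

      rng = np.random.default_rng(1)
      feats = rng.normal(size=(5, 48))                 # stand-in for CNN outputs, 48 bits
      codes = to_hash(feats)
      print(hamming(codes[0], codes[1]))               # distance in [0, 48]; small => likely match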
  388. Hydrodynamically Coupled Brownian Dynamics: a coarse-grain particle-based Brownian dynamics technique with hydrodynamic interactions for modeling self-developing flow of polymer solutions

    NASA Astrophysics Data System (ADS)

    Ahuja, V. R.; van der Gucht, J.; Briels, W. J.

    2018-01-01

    We present a novel coarse-grain particle-based simulation technique for modeling self-developing flow of dilute and semi-dilute polymer solutions. The central idea in this paper is the two-way coupling between a mesoscopic polymer model and a phenomenological fluid model. As our polymer model, we choose Responsive Particle Dynamics (RaPiD), a Brownian dynamics method, which formulates so-called "conservative" and "transient" pair potentials through which the polymers interact, besides experiencing random forces in accordance with the fluctuation-dissipation theorem. In addition to these interactions, our polymer blobs are also influenced by the background solvent velocity field, which we calculate by solving the Navier-Stokes equation discretized on a moving grid of fluid blobs using the Smoothed Particle Hydrodynamics (SPH) technique. While the polymers experience this frictional force opposing their motion relative to the background flow field, our fluid blobs are in turn influenced by the motion of the polymers through an interaction term. This makes our technique a two-way coupling algorithm. We have constructed this interaction term in such a way that momentum is conserved locally, thereby preserving long-range hydrodynamics. Furthermore, we have derived pairwise fluctuation terms for the velocities of the fluid blobs using the Fokker-Planck equation; these have alternatively been derived using the General Equation for the Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) approach in the Smoothed Dissipative Particle Dynamics (SDPD) literature. These velocity fluctuations for the fluid may be incorporated into the velocity updates of our fluid blobs to obtain a thermodynamically consistent distribution of velocities. In cases where these fluctuations are insignificant, the additional terms may be dropped, as they are in a standard SPH simulation. We have applied our technique to study the rheology of two different concentrations of our model linear polymer solutions. The results show that the polymers and the fluid are coupled very well with each other, showing no lag between their velocities. Furthermore, our results show non-Newtonian shear thinning and the characteristic flattening of the Poiseuille flow profile typically observed for polymer solutions.
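    In the overdamped limit, the frictional coupling of a single polymer blob to the background flow, with random forces satisfying the fluctuation-dissipation theorem, reduces to an update like the one below. This is a deliberately one-way, single-blob caricature of the scheme (the paper's algorithm is two-way and momentum-conserving); the shear flow, friction coefficient, and temperature are assumed reduced-unit values.

      import numpy as np

      rng = np.random.default_rng(2)

      xi, kT, dt = 1.0, 1.0, 1e-3      # friction, thermal energy, time step (reduced units)
      gamma_dot = 0.5                  # shear rate of an assumed background flow

      def u_bg(pos):
          """Prescribed background solvent velocity: simple shear u_x = gamma_dot * y."""
          return np.array([gamma_dot * pos[1], 0.0])

      pos = np.zeros(2)
      for _ in range(10_000):
          noise = rng.normal(size=2) * np.sqrt(2 * kT * dt / xi)   # fluctuation-dissipation
          pos = pos + u_bg(pos) * dt + noise                       # drag toward local flow
      print(pos)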
  389. Analysis of flood modeling through innovative geomatic methods

    NASA Astrophysics Data System (ADS)

    Zazo, Santiago; Molina, José-Luis; Rodríguez-Gonzálvez, Pablo

    2015-05-01

    A suitable assessment and management of exposure to natural flood risk necessarily requires exhaustive knowledge of the terrain. This study, primarily aimed at flood risk evaluation, first assesses the suitability of an innovative technique called Reduced Cost Aerial Precision Photogrammetry (RC-APP), which combines a motorized ultra-light aircraft (ULM) with hybridized low-cost sensors, for the acquisition of geospatial information. The resulting RC-APP technique is found to be a more accurate, more economical, and less time-consuming geomatic product, and is applied in river engineering for geometric modeling and flood risk assessment.
    Through the application of RC-APP, a high-spatial-resolution image (an orthophoto at 2.5 cm) and a Digital Elevation Model (DEM) with a 0.10 m mesh size and a high point density (about 100 points/m2), with an altimetric accuracy of -0.02 ± 0.03 m, have been obtained. These products provided detailed knowledge of the terrain, afterwards used for hydraulic simulation, which allowed a better definition of the inundated area, with important implications for flood risk assessment and management. The achieved DEM resolution of 0.10 m is especially useful in hydraulic simulations with 2D software. According to the results, the developed methodology and technology allow a more accurate riverbed representation than traditional techniques such as Light Detection and Ranging (LiDAR), whose root-mean-square error is around ±0.50 m; the comparison revealed that RC-APP has an error one order of magnitude lower than the LiDAR method. Consequently, the technique arises as an efficient and appropriate tool, especially in areas with high exposure to flood risk. In hydraulic terms, the degree of detail achieved in the 3D model has allowed a significant increase in the knowledge of hydraulic variables in natural waterways.

  390. Exploratory analysis of real personal emergency response call conversations: considerations for personal emergency response spoken dialogue systems

    PubMed

    Young, Victoria; Rochon, Elizabeth; Mihailidis, Alex

    2016-11-14

    The purpose of this study was to derive data from real, recorded, personal emergency response call conversations to help improve the artificial intelligence and decision-making capability of a spoken dialogue system in a smart personal emergency response system. The main study objectives were to: develop a model of personal emergency response; determine categories for the model's features; identify and calculate measures from call conversations (verbal ability, conversational structure, timing); and examine conversational patterns and relationships between measures and model features applicable to improving the system's ability to automatically identify call model categories and predict a target response. The study was exploratory and used mixed methods. Personal emergency response calls were pre-classified according to call model categories identified qualitatively from response call transcripts. The relationships between six verbal ability measures, three conversational structure measures, two timing measures, and three independent factors (caller type, risk level, and speaker type) were examined statistically. Emergency medical response services were the preferred response for the majority of medium- and high-risk calls for both caller types. Older adult callers mainly requested non-emergency medical service responders during medium-risk situations. By measuring the number of spoken words per minute and the turn length in words of the first spoken utterance of a call, older adult and care provider callers could be identified with moderate accuracy.
    Average call taker response time was calculated using the number-of-speaker-turns and time-in-seconds measures. Care providers and older adults used different conversational strategies when responding to call takers, and the words 'ambulance' and 'paramedic' may hold different latent connotations for different callers. The data derived from the real personal emergency response recordings may help a spoken dialogue system classify incoming calls by caller type with moderate probability shortly after the initial caller utterance. Knowing the caller type, the target response for the call may be predicted with some degree of probability and the output dialogue tailored to this caller type. The average call taker response time measured from real calls may be used to limit the conversation length in a spoken dialogue system before defaulting to a live call taker.

  391. Directional frequency and recording (DIFAR) sensors in seafloor recorders to locate calling bowhead whales during their fall migration

    PubMed

    Greene, Charles R; McLennan, Miles Wm; Norman, Robert G; McDonald, Trent L; Jakubczak, Ray S; Richardson, W John

    2004-08-01

    Bowhead whales, Balaena mysticetus, migrate west during fall approximately 10-75 km off the north coast of Alaska, passing the petroleum developments around Prudhoe Bay. Oil production operations on an artificial island 5 km offshore create sounds heard by some whales. As part of an effort to assess whether migrating whales deflect farther offshore at times of high industrial noise, an acoustical approach was selected for localizing calling whales. The technique incorporated DIFAR (directional frequency and recording) sonobuoy techniques. An array of 11 DASARs (directional autonomous seafloor acoustic recorders) was built and installed with unit-to-unit separation of 5 km. When two or more DASARs detected the same call, the whale location was determined from the bearing intersections. This article describes the acoustic methods used to determine the locations of the calling bowhead whales and shows the types and precision of the data acquired. Calibration transmissions at GPS-measured times and locations provided measures of each DASAR's clock drift and directional orientation. The standard error of the bearing measurements at distances of 3-4 km was approximately 1.35 degrees after corrections for gain imbalance in the two directional sensors. During 23 days in 2002, 10,587 bowhead calls were detected and 8,383 were localized.
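    Localizing a call from two or more DASAR bearings amounts to intersecting bearing lines in a least-squares sense. A minimal sketch follows; the sensor positions and bearings are made-up numbers, with bearings taken as degrees clockwise from north.

      import numpy as np

      def localize(positions, bearings_deg):
          """Least-squares intersection of bearing lines.

          positions    : (k, 2) sensor easting/northing [m]
          bearings_deg : (k,) bearing to the call, degrees clockwise from north
          """
          th = np.radians(bearings_deg)
          # Each bearing line passes through p_i with direction (sin th, cos th);
          # its unit normal is (cos th, -sin th), giving one linear constraint per sensor.
          N = np.column_stack([np.cos(th), -np.sin(th)])
          b = np.einsum("ij,ij->i", N, positions)
          loc, *_ = np.linalg.lstsq(N, b, rcond=None)
          return loc

      sensors = np.array([[0.0, 0.0], [5000.0, 0.0], [2500.0, 5000.0]])  # 5 km spacing
      print(localize(sensors, bearings_deg=[45.0, 315.0, 180.0]))        # -> ~(2500, 2500)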
  392. A machine learning model to determine the accuracy of variant calls in capture-based next generation sequencing

    PubMed

    van den Akker, Jeroen; Mishne, Gilad; Zimmer, Anjali D; Zhou, Alicia Y

    2018-04-17

    Next generation sequencing (NGS) has become a common technology for clinical genetic tests. The quality of NGS calls varies widely and is influenced by features like reference sequence characteristics, read depth, and mapping accuracy. With recent advances in NGS technology and software tools, the majority of variants called using NGS alone are in fact accurate and reliable; however, a small subset of difficult-to-call variants still requires orthogonal confirmation. For this reason, many clinical laboratories confirm NGS results using orthogonal technologies such as Sanger sequencing. Here, we report the development of a deterministic machine-learning-based model to differentiate between two types of variant calls: those that do not require confirmation using an orthogonal technology (high confidence), and those that require additional quality testing (low confidence). This approach allows reliable NGS-based calling in a clinical setting by identifying the few important variant calls that require orthogonal confirmation. We developed and tested the model using a set of 7,179 variants identified by a targeted NGS panel and re-tested by Sanger sequencing. The model incorporated several signals of sequence characteristics and call quality to determine whether a variant was identified at high or low confidence. The model was tuned to eliminate false positives, defined as variants that were called by NGS but not confirmed by Sanger sequencing. The model achieved very high accuracy: 99.4% (95% confidence interval: ±0.03%). It categorized 92.2% (6,622/7,179) of the variants as high confidence, and 100% of these were confirmed to be present by Sanger sequencing. Among the variants categorized as low confidence, defined as NGS calls of low quality that are likely to be artifacts, 92.1% (513/557) were found to be not present by Sanger sequencing. This work shows that NGS data contain sufficient characteristics for a machine-learning-based model to differentiate low- from high-confidence variants. It also reveals the importance of incorporating site-specific features as well as variant call features in such a model.
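    A minimal sketch of this kind of two-class confidence model is given below: train a classifier on call-quality features, then push the decision threshold high enough that no unconfirmed variant in held-out data is labeled high confidence. The three features, the synthetic labels, and the gradient-boosting choice are illustrative assumptions, not the paper's model.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)

      # Hypothetical per-variant features: read depth, mean mapping quality, strand bias.
      n = 4000
      X = np.column_stack([rng.integers(5, 400, n),     # depth
                           rng.uniform(20, 60, n),      # mapping quality
                           rng.uniform(0, 1, n)])       # strand bias
      # Synthetic ground truth: deep, well-mapped, balanced calls tend to confirm.
      p_real = 1 / (1 + np.exp(-(0.02 * X[:, 0] + 0.1 * X[:, 1] - 4 * X[:, 2] - 2)))
      y = rng.uniform(size=n) < p_real                  # True = confirmed by Sanger

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = GradientBoostingClassifier().fit(X_tr, y_tr)

      # Threshold just above the highest score of any unconfirmed held-out call,
      # eliminating false positives at the cost of sending more calls to confirmation.
      proba = clf.predict_proba(X_te)[:, 1]
      thr = proba[~y_te].max() + 1e-9 if np.any(~y_te) else 0.5
      high_conf = proba >= thr
      print(f"threshold={thr:.3f}, fraction high-confidence={high_conf.mean():.2%}")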
  393. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the sheer power of numerical computation, so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilized to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilize a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
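    As a toy illustration of the symbolic machinery invoked here, the snippet below computes a Gröbner basis with SymPy to eliminate a variable from a small polynomial system with a parameter k; the system is made up and far simpler than the differential-elimination problems treated in the paper.

      from sympy import groebner, symbols

      x, y, k = symbols("x y k")

      # Toy steady-state system with parameter k; a lex order with x > y eliminates x,
      # leaving a single relation between y and k (differential elimination plays the
      # analogous role for systems of ODEs).
      G = groebner([x**2 + y - k, y**2 - x], x, y, order="lex")
      print(G.exprs)   # -> [x - y**2, y**4 + y - k]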
  394. VRLane: a desktop virtual safety management program for underground coal mine

    NASA Astrophysics Data System (ADS)

    Li, Mei; Chen, Jingzhu; Xiong, Wei; Zhang, Pengpeng; Wu, Daozheng

    2008-10-01

    VR technologies, which generate immersive, interactive, three-dimensional (3D) environments, are seldom applied to coal mine safety management. In this paper, a new method that combines VR technologies with an underground mine safety management system is explored, and a desktop virtual safety management program for underground coal mines, called VRLane, is developed. The paper covers current research advances in VR, the system design, key techniques, and system application. Two important techniques are introduced. First, an algorithm was designed and implemented with which 3D laneway and equipment models can be built automatically from the latest 2D mine drawings, whereas common VR programs establish their 3D environments with 3DS Max or other 3D modeling packages, in which laneway models are built manually and laboriously. Second, VRLane realizes system integration with underground industrial automation. VRLane not only presents a realistic 3D laneway environment but also describes the status of coal mining, with functions for displaying the run states and related parameters of equipment, pre-alarming abnormal mining events, and animating mine cars, mine workers, and long-wall shearers. The system, being cheap, dynamic, and easy to maintain, provides a useful tool for safety production management in coal mines.

  395. Sparse Feature Extraction for Pose-Tolerant Face Recognition

    PubMed

    Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios

    2014-10-01

    Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by a number of factors such as illumination, pose, expression, and resolution that can impact matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) a 3D modeling step to geometrically correct the viewpoint of the face, for which we extend a recent technique for efficient synthesis of 3D face models called the 3D Generic Elastic Model; and (b) a sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used towards recognition. We show significant improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles.

  396. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to the groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension, so that the information matrix in the full model space can be searched without solving the full model.
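    The POD step can be sketched generically: collect snapshots from the full model, take an SVD, and project the system operator onto the leading modes. Everything below (the random stand-in operator, snapshot count, and rank) is an illustrative assumption in place of the groundwater model.

      import numpy as np

      rng = np.random.default_rng(4)

      n, m, r = 500, 40, 8                       # full dimension, snapshots, reduced rank
      A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))   # stand-in system operator
      snapshots = rng.normal(size=(n, m))        # columns: states from full-model runs

      # POD basis: leading left singular vectors of the snapshot matrix
      Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
      Phi = Phi[:, :r]

      A_r = Phi.T @ A @ Phi                      # reduced operator (r x r)
      x_r = Phi.T @ rng.normal(size=n)           # reduced state
      # One explicit-Euler step in the reduced space, lifted back to full space:
      x_full_approx = Phi @ (x_r + 0.01 * (A_r @ x_r))
      print(A_r.shape, x_full_approx.shape)

    Each candidate design in the GA is then scored against the r-dimensional reduced model instead of the full n-dimensional one, which is what makes the combinatorial search affordable.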
  397. NEST: a comprehensive model for scintillation yield in liquid xenon

    DOE PAGES

    Szydagis, M.; Barry, N.; Kazkaz, K.; ...

    2011-10-03

    Here, a comprehensive model for explaining the scintillation yield of liquid xenon is introduced. We unify the various definitions of work function that abound in the literature and incorporate all available data on electron recoil scintillation yield. This results in a better understanding of electron recoil and facilitates an improved description of nuclear recoil. An incident gamma energy range of O(1 keV) to O(1 MeV) and electric fields between 0 and O(10 kV/cm) are incorporated into this heuristic model. We show results from a Geant4 implementation, but because the model has a few free parameters, implementation in any simulation package should be simple. We use a quasi-empirical approach, with the objective of improving detector calibrations and performance verification. The model will aid in the design and optimization of future detectors, and it is easy to extend to other noble elements. In this paper we lay the foundation for an exhaustive simulation code which we call NEST (Noble Element Simulation Technique).
  398. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In this context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after input selection are compared with those obtained by using the same technique without input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  399. Modeling discourse management compared to other classroom management styles in university physics

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain Michael

    2002-01-01

    A classroom management technique called modeling discourse management was developed to enhance the modeling theory of physics. Modeling discourse management is a student-centered management style that focuses on the epistemology of science. It is social-constructivist in nature and was designed to encourage students to present classroom material to each other; the instructor's primary role is that of questioner rather than provider of knowledge. Literature is presented that helps validate the components of modeling discourse. Modeling discourse management was compared to other classroom management styles using multiple measures, in both regular and honors university physics classes. This style of management was found to enhance student understanding of forces, problem-solving skills, and student views of science relative to traditional classroom management styles, for both honors and regular students. Compared to other reformed physics classrooms, modeling discourse classes performed as well or better on student understanding of forces. Outside evaluators viewed modeling discourse classes as reformed, and it was determined that modeling discourse could be effectively disseminated.
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Balaji, N.; Siva, E. P.; Chandrasekaran, A. D.; Tamilazhagan, V.</p> <p>2018-04-01</p> <p>This paper presents graphical integrated model based academic research on telephone call centres. This paper introduces an important feature of impatient customers and abandonments in the queue system. However the modern call centre is a complex socio-technical system. Queuing theory has now become a suitable application in the telecom industry to provide better online services. Through this Matlab-simulink multi queuing structured models provide better solutions in complex situations at call centres. Service performance measures analyzed at optimal level through Simulink queuing model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=pompeii&id=EJ667054','ERIC'); return false;" href="https://eric.ed.gov/?q=pompeii&id=EJ667054"><span>Uncovering Pompeii: Examining Evidence.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Yell, Michael M.</p> <p>2001-01-01</p> <p>Presents a lesson plan on Pompeii (Italy) for middle school students that utilizes a teaching technique called interactive presentation. Describes the technique's five phases: (1) discrepant event inquiry; (2) discussion/presentation; (3) cooperative learning activity; (4) writing for understanding activity; and (5) whole-class discussion and…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28809200','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28809200"><span>Alarm Fatigue vs User Expectations Regarding Context-Aware Alarm Handling in Hospital Environments Using CallMeSmart.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Solvoll, Terje; Arntsen, Harald; Hartvigsen, Gunnar</p> <p>2017-01-01</p> <p>Surveys and research show that mobile communication systems in hospital settings are old and cause frequent interruptions. In the quest to remedy this, an Android based communication system called CallMeSmart tries to encapsulate most of the frequent communication into one hand held device focusing on reducing interruptions and at the same time make the workday easier for healthcare workers. The objective of CallMeSmart is to use context-awareness techniques to automatically monitor the availability of physicians' and nurses', and use this information to prevent or route phone calls, text messages, pages and alarms that would otherwise compromise patient care. 
  401. Uncovering Pompeii: Examining Evidence

    ERIC Educational Resources Information Center

    Yell, Michael M.

    2001-01-01

    Presents a lesson plan on Pompeii (Italy) for middle school students that utilizes a teaching technique called interactive presentation. Describes the technique's five phases: (1) discrepant event inquiry; (2) discussion/presentation; (3) cooperative learning activity; (4) writing for understanding activity; and (5) whole-class discussion and…

  402. Alarm Fatigue vs User Expectations Regarding Context-Aware Alarm Handling in Hospital Environments Using CallMeSmart

    PubMed

    Solvoll, Terje; Arntsen, Harald; Hartvigsen, Gunnar

    2017-01-01

    Surveys and research show that mobile communication systems in hospital settings are old and cause frequent interruptions. In the quest to remedy this, an Android-based communication system called CallMeSmart tries to encapsulate most of the frequent communication into one handheld device, focusing on reducing interruptions and at the same time making the workday easier for healthcare workers. The objective of CallMeSmart is to use context-awareness techniques to automatically monitor the availability of physicians and nurses, and to use this information to prevent or route phone calls, text messages, pages, and alarms that would otherwise compromise patient care. In this paper, we present the results of interviewing nurses about alarm fatigue and their expectations regarding context-aware alarm handling using CallMeSmart.

  403. Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain

    NASA Astrophysics Data System (ADS)

    D'Ambrogio, Walter; Fregolent, Annalisa

    2014-04-01

    The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (the residual subsystem). This topic is also known as the decoupling problem, subsystem subtraction, or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. By contrast, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility, and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions be satisfied exactly, either at coupling DoFs only or also at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs need not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made of a plate and a rigid mass.
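    The dual decoupling formula behind such techniques can be exercised on a toy system: assemble the measured coupled FRF with the negative FRF of the residual subsystem to recover the unknown one. The single-interface-DOF, damped mass-spring example below is an illustrative assumption; it is not the plate-and-mass experiment.

      import numpy as np

      # Toy subsystems: grounded mass-spring units sharing a single interface DOF.
      # Light structural damping keeps all FRFs finite on the real frequency axis.
      m1, k1 = 1.0, 1.0e4 * (1 + 0.02j)   # unknown subsystem (to be recovered)
      m2, k2 = 0.5, 2.0e4 * (1 + 0.02j)   # residual subsystem (assumed known)
      w = np.linspace(10.0, 300.0, 1000)  # rad/s

      Y1   = 1.0 / (k1 - m1 * w**2)                 # true FRF of the unknown part
      Y2   = 1.0 / (k2 - m2 * w**2)                 # residual FRF
      Ytot = 1.0 / ((k1 + k2) - (m1 + m2) * w**2)   # "measured" coupled FRF

      # Dual assembly with the negative residual subsystem, Y' = diag(Ytot, -Y2),
      # compatibility B = [1, -1]; interface element of Y' - Y'B^T (BY'B^T)^-1 BY':
      Ydec = Ytot - Ytot**2 / (Ytot - Y2)
      print(np.allclose(Ydec, Y1))                  # True: unknown FRF recovered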
  404. Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine

    2017-10-01

    In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the Tess and Plato2.0 missions. However, the photometry revolution must be accompanied by progress in stellar modelling, in order to obtain more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvement of stellar modelling. In this contribution, we apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core condition indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We show how this technique can be applied to the Kepler Legacy sample and how new indicators can help further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.

  405. X-ray scatter imaging of hepatocellular carcinoma in a mouse model using nanoparticle contrast agents

    DOE PAGES

    Rand, Danielle; Derdak, Zoltan; Carlson, Rolf; ...

    2015-10-29

    Hepatocellular carcinoma (HCC) is one of the most common malignant tumors worldwide and is almost uniformly fatal. Current methods of detection include ultrasound examination and imaging by CT scan or MRI; however, these techniques are problematic in terms of sensitivity and specificity, and the detection of early tumors (<1 cm diameter) has proven elusive. Better, more specific, and more sensitive detection methods are therefore urgently needed. Here we discuss the application of a newly developed x-ray imaging technique called Spatial Frequency Heterodyne Imaging (SFHI) for the early detection of HCC. SFHI uses x-rays scattered by an object to form an image and is more sensitive than conventional absorption-based x-radiography. We show that tissues labeled in vivo with gold nanoparticle contrast agents can be detected using SFHI, and demonstrate that directed targeting and SFHI of HCC tumors in a mouse model is possible through the use of HCC-specific antibodies. The enhanced sensitivity of SFHI relative to currently available techniques enables the x-ray imaging of tumors just a few millimeters in diameter and substantially reduces the amount of nanoparticle contrast agent required for intravenous injection relative to absorption-based x-ray imaging.
  406. Significant wave heights from Sentinel-1 SAR: Validation and applications

    NASA Astrophysics Data System (ADS)

    Stopa, J. E.; Mouche, A.

    2017-03-01

    Two empirical algorithms are developed for wave mode images measured by the synthetic aperture radar aboard Sentinel-1A. The first method, called CWAVE_S1A, is an extension of previous efforts developed for ERS-2; the second method, called Fnn, uses the azimuth cutoff among other parameters to estimate significant wave heights (Hs) and average wave periods without using a modulation transfer function. Neural networks are trained using colocated data generated from WAVEWATCH III and independently verified with data from altimeters and in situ buoys; the networks capture the nonlinear relationships between the input SAR image parameters and the output geophysical wave parameters. Both methods perform well, with Hs root-mean-square errors within 0.5 m for CWAVE_S1A and 0.6 m for Fnn. The developed neural networks extend the SAR's ability to retrieve useful wave information under a large range of environmental conditions, including extratropical and tropical cyclones, in which Hs estimation is traditionally challenging.

    Plain Language Summary: Two empirical algorithms are developed to estimate integral wave parameters from high-resolution synthetic aperture radar (SAR) ocean images measured by the recently launched Sentinel-1 satellite. These methods avoid the complicated image-to-wave mapping typically used to estimate sea state parameters, and they estimate wave parameters that cannot be measured using existing techniques for Sentinel-1. We use a machine learning technique to create a model that relates the ocean image properties to geophysical wave parameters. The models are developed using data from a numerical model because it provides a sufficiently large sample of global ocean conditions, and we verify that they perform well against independently measured wave observations from other satellite sensors and buoys. The resulting models estimate integrated wave parameters, like the commonly used significant wave height, accurately across a large range of sea states (up to 13 m), which allows SAR data to be applied under a large range of environmental conditions, including extratropical and tropical cyclones.
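    A schematic of the neural-network regression described here, mapping SAR image parameters such as the azimuth cutoff to Hs, is sketched below. The chosen input features, their synthetic relation to Hs, and the network size are assumptions standing in for the colocated WAVEWATCH III training data.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(6)

      n = 3000
      az_cutoff = rng.uniform(50, 400, n)    # azimuth cutoff [m] (stand-in feature)
      sigma0    = rng.uniform(-5, 15, n)     # normalized radar cross-section [dB]
      im_var    = rng.uniform(1.0, 2.0, n)   # normalized image variance
      # Synthetic truth: Hs grows with azimuth cutoff (a known qualitative tendency).
      hs = 0.012 * az_cutoff + 0.05 * im_var + 0.2 * rng.normal(size=n)

      X = np.column_stack([az_cutoff, sigma0, im_var])
      X_tr, X_te, y_tr, y_te = train_test_split(X, hs, random_state=0)

      net = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                                       random_state=0)).fit(X_tr, y_tr)
      rmse = np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))
      print(f"Hs RMSE ~= {rmse:.2f} m")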
  407. Development of snake-directed antipredator behavior by wild white-faced capuchin monkeys: II. Influence of the social environment

    PubMed

    Meno, Whitney; Coss, Richard G; Perry, Susan

    2013-03-01

    Young animals are known to direct alarm calls at a wider range of animals than adults. If social cues are safer and/or more reliable than asocial cues for learning about predators, then the development of this behavior should be affected by the social environment. Our study examined the influence of the social environment on antipredator behavior in infant, juvenile, and adult wild white-faced capuchin monkeys (Cebus capucinus) at Lomas Barbudal Biological Reserve in Costa Rica during presentations of model snakes of different species and novel models. We examined (a) the alarm-calling behavior of the focal animal when alone versus in the vicinity of conspecific alarm callers and (b) the latency of conspecifics to alarm call once the focal animal alarm called. Focal animals alarm called more when alone than after hearing a conspecific alarm call. No reliable differences were found in the latencies of conspecifics to alarm call based on age or model type. Conspecifics were more likely to alarm call when focal individuals alarm called at snake models than when they alarm called at novel models. Results indicate (a) that alarm calling may serve to attract others to the predator's location and (b) that learning about specific predators may begin with a generalized response to a wide variety of species, including some nonthreatening ones, that is winnowed down via Pavlovian conditioned inhibition into a response directed toward specific dangerous species. This study reveals that conspecifics play a role in the development of antipredator behavior in white-faced capuchins. © 2012 Wiley Periodicals, Inc.

  408. Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms

    NASA Astrophysics Data System (ADS)

    Samanta, A.; Todd, L. A.

    A new technique is being developed that creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measurement capability of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed with a tomographic algorithm to reconstruct the concentrations. This research focused on evaluating and selecting appropriate reconstruction algorithms for field use, using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested on three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies; and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: the Algebraic Reconstruction Technique without Weights (ART1), the Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM), and the Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1; however, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among them. A comprehensive evaluation of algorithms for the environmental application of tomography requires, before field implementation, a battery of test concentration data that models reality and tests the limits of the algorithms.
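    Of the algorithms compared, the unweighted algebraic reconstruction technique (ART1) is the simplest to sketch: sweep the path-integral measurements and project the current concentration estimate onto each ray's constraint. The tiny 2 x 2 pixel geometry below is an illustrative assumption; real beam networks are far denser.

      import numpy as np

      def art(A, p, n_iters=50):
          """Unweighted ART (Kaczmarz sweeps): A[i] holds path lengths of ray i in
          each pixel, p[i] the measured concentration-length product along that ray."""
          x = np.zeros(A.shape[1])
          for _ in range(n_iters):
              for a_i, p_i in zip(A, p):
                  x += a_i * (p_i - a_i @ x) / (a_i @ a_i)  # project onto ray constraint
                  x = np.clip(x, 0.0, None)                 # concentrations stay nonnegative
          return x

      # 2x2 pixel grid [TL, TR, BL, BR]; rays: two rows, two columns, one diagonal.
      s2 = np.sqrt(2.0)
      A = np.array([[1, 1, 0, 0],
                    [0, 0, 1, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [s2, 0, 0, s2]])   # diagonal ray pins down a unique solution
      true_map = np.array([2.0, 0.0, 1.0, 3.0])
      print(art(A, A @ true_map))      # converges to the true 4-pixel map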
  410. Random trinomial tree models and vanilla options

    NASA Astrophysics Data System (ADS)

    Ganikhodjaev, Nasir; Bayram, Kamola

    2013-09-01

    In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m); we call the triple (u, d, m) the environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
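A rough Monte Carlo reading of the model above: at each step the up/down factors (U_n, D_n) are drawn i.i.d. with 0 < D_n < 1 < U_n and the middle factor M_n = 1, and a vanilla call is valued by averaging discounted payoffs. The branch probabilities are an assumption made for illustration (a trinomial market is incomplete, so they are not pinned down by no-arbitrage alone), and the simulation averages over both the random environment and the branch moves.

```python
# Toy Monte Carlo for a "random trinomial" vanilla call. All numbers
# (factors, probabilities, rate over the unit horizon) are invented.
import numpy as np

rng = np.random.default_rng(2)
s0, strike, r, n_steps, n_paths = 100.0, 100.0, 0.01, 50, 100_000

u = rng.uniform(1.01, 1.05, size=(n_paths, n_steps))   # U_n > 1
d = rng.uniform(0.95, 0.99, size=(n_paths, n_steps))   # 0 < D_n < 1
branch = rng.choice(3, size=(n_paths, n_steps), p=[0.3, 0.4, 0.3])

factors = np.where(branch == 0, u, np.where(branch == 1, 1.0, d))
s_t = s0 * factors.prod(axis=1)                        # terminal prices
payoff = np.maximum(s_t - strike, 0.0)                 # vanilla call
print(f"simulated call value: {np.exp(-r) * payoff.mean():.2f}")
```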
  411. Thumb-loops up for catalysis: a structure/function investigation of a functional loop movement in a GH11 xylanase

    PubMed Central

    Paës, Gabriel; Cortés, Juan; Siméon, Thierry; O'Donohue, Michael J.; Tran, Vinh

    2012-01-01

    Dynamics is a key feature of enzyme catalysis. Unfortunately, current experimental and computational techniques do not yet provide a comprehensive understanding and description of functional macromolecular motions. In this work, we have extended a novel computational technique, which combines molecular modeling methods and robotics algorithms, to investigate functional motions of protein loops. This new approach has been applied to study the functional importance of the so-called thumb-loop in the glycoside hydrolase family 11 xylanase from Thermobacillus xylanilyticus (Tx-xyl). The results obtained provide new insight into the role of the loop in the glycosylation/deglycosylation catalytic cycle, and underline the key importance of the nature of the residue located at the tip of the thumb-loop. The effect of mutations predicted in silico has been validated by in vitro site-directed mutagenesis experiments. Overall, we propose a comprehensive model of Tx-xyl catalysis in terms of substrate and product dynamics by identifying the action of the thumb-loop motion during catalysis. PMID:24688637

  412. Application of a Constant Gain Extended Kalman Filter for In-Flight Estimation of Aircraft Engine Performance Parameters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.

    2005-01-01

    An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.
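The constant-gain structure described above is easy to show in a few lines: a fixed gain K corrects an on-board model's state with the innovation between measured and predicted outputs. The scalar "engine" below is invented purely to exhibit that loop, not NASA's engine model.

```python
# Minimal constant-gain extended Kalman filter sketch (toy scalar plant).
import numpy as np

def f(x, u):            # assumed nonlinear plant/on-board model dynamics
    return 0.9 * x + 0.1 * u + 0.02 * np.sin(x)

def h(x):               # assumed measurement map
    return 1.5 * x

K = 0.4                 # constant Kalman gain (picked by hand here)
rng = np.random.default_rng(3)
x_true, x_hat, u = 1.0, 0.0, 0.5
for _ in range(100):
    x_true = f(x_true, u) + rng.normal(0, 0.01)   # "real engine", noisy
    y = h(x_true) + rng.normal(0, 0.05)           # sensor reading
    x_pred = f(x_hat, u)                          # on-board model predict
    x_hat = x_pred + K * (y - h(x_pred))          # constant-gain update

print(f"true={x_true:.3f} estimate={x_hat:.3f}")
```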
  413. Spontaneous switching of frequency-locking by periodic stimulus in oscillators of plasmodium of the true slime mold.

    PubMed

    Takamatsu, A; Yamamoto, T; Fujii, T

    2004-01-01

    A microfabrication technique was used to construct a model system from a living cell of the plasmodium of the true slime mold, Physarum polycephalum, a living coupled-oscillator system. Its parameters can be systematically controlled as in computer simulations, so that results are directly comparable to those of general mathematical models. As a first step, we investigated the responses of oscillatory cells, the oscillators of the plasmodium, to periodic temperature stimuli, to elucidate the characteristics of the cells as nonlinear systems whose internal dynamics are unknown because of their complexity. We observed that the forced oscillators of the plasmodium show 1:1, 2:1, and 3:1 frequency locking inside so-called Arnold tongue regions, as found in other nonlinear systems such as chemical and biological oscillators. In addition, we found spontaneous switching from certain frequency-locking states to other states, even under fixed parameters. This technique can be applied to more complex systems with multiple elements, such as coupled oscillator systems, and would be useful for investigating complicated phenomena in biological systems, such as information processing.

  414. 78 FR 70957 - 60-Day Notice of Proposed Information Collection: HUD-Owned Real Estate Good Neighbor Next Door...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... calling the toll-free Federal Relay Service at (800) 877-8339. FOR FURTHER INFORMATION CONTACT: Ivery W... number through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available... automated collection techniques or other forms of information technology, e.g., permitting electronic...

  415. 78 FR 67384 - 60-Day Notice of Proposed Information Collection: FHA-Insured Mortgage Loan Servicing Involving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... hearing or speech impairments may access this number through TTY by calling the toll-free Federal Relay... calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available documents submitted to... techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD...

  416. 78 FR 75364 - 60-Day Notice of Proposed Information Collection: Application for FHA Insured Mortgages

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. FOR FURTHER INFORMATION... through TTY by calling the toll-free Federal Relay Service at (800) 877-8339. Copies of available... techniques or other forms of information technology, e.g., permitting electronic submission of responses. HUD...
  417. Encourage Students to Read through the Use of Data Visualization

    ERIC Educational Resources Information Center

    Bandeen, Heather M.; Sawin, Jason E.

    2012-01-01

    Instructors are always looking for new ways to engage students in reading assignments. The authors present a few techniques that rely on a web-based data visualization tool called Wordle (wordle.net). Wordle creates word frequency representations called word clouds. The larger a word appears within a cloud, the more frequently it occurs within a…

  418. A new method for enhancer prediction based on deep belief network.

    PubMed

    Bu, Hongda; Gan, Yanglan; Wang, Yang; Zhou, Shuigeng; Guan, Jihong

    2017-10-16

    Studies have shown that enhancers are significant regulatory elements that play crucial roles in gene expression regulation. Since enhancer activity is independent of the orientation of, and distance to, the target genes, accurately predicting distal enhancers remains challenging. In recent years, with the development of high-throughput ChIP-seq technologies, several computational techniques have emerged that predict enhancers using epigenetic or genomic features. Nevertheless, the inconsistency of computational models across different cell lines and their unsatisfactory prediction performance call for further research in this area. Here, we propose a new Deep Belief Network (DBN) based computational method for enhancer prediction, called EnhancerDBN. This method combines diverse features, comprising DNA sequence compositional features, DNA methylation and histone modifications. Our computational results indicate that (1) EnhancerDBN outperforms 13 existing methods in prediction, and (2) GC content and DNA methylation can serve as relevant features for enhancer prediction. Deep learning is effective in boosting the performance of enhancer prediction.
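To make the feature-to-label setup concrete: scikit-learn has no full deep belief network, but an unsupervised BernoulliRBM feature layer feeding a logistic-regression classifier is a common rough stand-in for the DBN idea, so that swap is used below. The dimer-frequency features and GC-biased synthetic sequences are assumptions; the real method also folds in methylation and histone-modification tracks.

```python
# DBN-style sketch (RBM features + logistic regression) on synthetic
# sequences where "enhancers" are simply GC-rich. Illustration only.
import numpy as np
from itertools import product
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(4)
kmers = ["".join(p) for p in product("ACGT", repeat=2)]  # 16 dimers

def featurize(seq):
    # dimer frequencies in [0, 1]; real work would append methylation
    # and histone-modification signals as extra columns
    counts = np.array([seq.count(k) for k in kmers], dtype=float)
    return counts / max(len(seq) - 1, 1)

def random_seq(gc):
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]
    return "".join(rng.choice(list("ACGT"), size=200, p=p))

seqs = [random_seq(0.6) for _ in range(300)] + \
       [random_seq(0.4) for _ in range(300)]
y = np.array([1] * 300 + [0] * 300)
X = np.array([featurize(s) for s in seqs])

clf = Pipeline([("rbm", BernoulliRBM(n_components=8, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))]).fit(X, y)
print("training accuracy:", clf.score(X, y))
```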
  419. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer-assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  420. Ultralight Axion Dark Matter and Its Impact on Dark Halo Structure in N-body Simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajun; Sming Tsai, Yue-Lin; Kuo, Jui-Lin; Cheung, Kingman; Chu, Ming-Chung

    2018-01-01

    Ultralight axion is a dark matter candidate with mass O(10^-22) eV and a de Broglie wavelength of order one kiloparsec. Such an axion, also called fuzzy dark matter (FDM), thermalizes via gravitational force and forms a Bose–Einstein condensate. Recent studies suggested that the quantum pressure from FDM can significantly affect structure formation on small scales, thus alleviating the so-called "small-scale crisis." In this paper, we develop a new technique to discretize the quantum pressure and illustrate the interactions among FDM particles in an N-body simulation that accurately simulates the formation of the dark matter halo and its inner structure in the region outside the softening length. In a self-gravitationally bound virialized halo, we find a constant-density solitonic core, which is consistent with theoretical prediction. The existence of the solitonic core reveals the nonlinear effect of quantum pressure and impacts structure formation in the FDM model.
  421. Multi Sensor Fusion Using Fitness Adaptive Differential Evolution

    NASA Astrophysics Data System (ADS)

    Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam

    The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results in the case of optimal allocation of sensors. The performance of the proposed approach is compared with an evolutionary algorithm, the coordination generalized particle model (C-GPM).
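A bare-bones differential evolution loop with a fitness-adaptive scale factor gestures at the FiADE idea: candidates far from the population best mutate more aggressively. The adaptation rule and objective below are invented simplifications, not the published FiADE schedule.

```python
# Differential evolution with a (made-up) fitness-adaptive scale factor.
import numpy as np

def sphere(x):                       # toy objective to minimize
    return float(np.sum(x ** 2))

rng = np.random.default_rng(5)
dim, n_pop, cr = 10, 30, 0.9
pop = rng.uniform(-5, 5, size=(n_pop, dim))
fit = np.array([sphere(p) for p in pop])

for gen in range(300):
    for i in range(n_pop):
        a, b, c = pop[rng.choice([j for j in range(n_pop) if j != i],
                                 size=3, replace=False)]
        # assumed rule: scale factor shrinks as candidate i nears the best
        f_i = 0.2 + 0.6 * (fit[i] - fit.min()) / (fit.max() - fit.min() + 1e-12)
        mutant = a + f_i * (b - c)
        trial = np.where(rng.random(dim) < cr, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial <= fit[i]:        # greedy selection
            pop[i], fit[i] = trial, f_trial

print("best value:", fit.min())
```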
  422. [Acceptance and Commitment Therapy: Theoretical background and practice].

    PubMed

    Eisenbeck, Nikolett; Schlosser, Károly Kornél; Szondy, Máté; Szabó-Bartha, Anett

    Acceptance and Commitment Therapy (ACT) is one of the modern, so-called third-wave behavioural therapies, and among them it is the most successful in terms of both the number of practicing therapists and the volume of supporting research. ACT's theoretical and philosophical background is described explicitly, and its therapeutic interventions were developed according to this philosophy. Its psychopathological model is based on the idea that it is mainly a person's efforts to regulate their own thoughts and feelings that lead to psychological problems. That is, the source of human suffering and various psychological problems is so-called psychological inflexibility: attempts to control private events instead of living a life based on personal values and long-term goals. Therefore, clinical work in ACT focuses on the acceptance and defusion of unwanted inner experiences and on the development of a meaningful life. The present article aims to provide a comprehensive description of ACT in Hungarian: its theoretical background, clinical techniques, and efficacy. At the end of the article, the state of ACT in Hungary is also briefly discussed.

  423. An Envelope Based Feedback Control System for Earthquake Early Warning: Reality Check Algorithm

    NASA Astrophysics Data System (ADS)

    Heaton, T. H.; Karakus, G.; Beck, J. L.

    2016-12-01

    Earthquake early warning systems are, in general, designed as open-loop control systems, in the sense that the output (the warning messages) depends only on the input (recorded ground motions) up to the moment the message is issued in real time. We propose an algorithm, called the Reality Check Algorithm (RCA), that assesses the accuracy of issued warning messages and feeds the outcome of the assessment back into the system; the system then modifies its messages if necessary. That is, we propose to convert earthquake early warning systems into feedback control systems by integrating them with RCA. RCA works by continuously monitoring the observed ground-motion envelopes and comparing them to the envelopes predicted by the Virtual Seismologist (Cua 2005). The accuracy of the system's magnitude and location (both spatial and temporal) estimates is assessed separately by probabilistic classification models, which are trained with a Sparse Bayesian Learning technique called the Automatic Relevance Determination prior.
  424. Fluctuations in protein synthesis from a single RNA template: stochastic kinetics of ribosomes.

    PubMed

    Garai, Ashok; Chowdhury, Debashish; Ramakrishnan, T V

    2009-01-01

    Proteins are polymerized by cyclic machines called ribosomes, which use their messenger RNA (mRNA) track also as the corresponding template, and the process is called translation. We explore, in depth and detail, the stochastic nature of translation. We compute various distributions associated with the translation process; one of them, the dwell time distribution, has been measured in recent single-ribosome experiments. The form of the distribution that fits best with our simulation data is consistent with that extracted from the experimental data. For our computations, we use a model that captures both the mechanochemistry of each individual ribosome and their steric interactions. We also demonstrate the effects of the sequence inhomogeneities of real genes on the fluctuations and noise in translation. Finally, inspired by recent advances in the experimental techniques of manipulating single ribosomes, we make theoretical predictions on the force-velocity relation for individual ribosomes. In principle, all our predictions can be tested by carrying out in vitro experiments.
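Models of ribosome traffic of the kind described above are often TASEP-like: particles hop along a track with steric exclusion, and the dwell time between a tracked particle's steps is histogrammed. The rejection-style kinetic Monte Carlo below, with a periodic track and invented rates, is a toy in that spirit rather than the paper's full mechanochemical model.

```python
# Toy exclusion-process simulation; dwell times of one tracked ribosome.
import numpy as np

rng = np.random.default_rng(6)
L, n_ribo, hop_rate, t_end = 100, 10, 5.0, 2000.0
pos = np.arange(0, n_ribo * 2, 2)          # initial positions, spaced out
t, last_step, dwells = 0.0, 0.0, []

while t < t_end:
    t += rng.exponential(1.0 / (n_ribo * hop_rate))  # next attempt time
    i = rng.integers(n_ribo)                          # pick a ribosome
    nxt = (pos[i] + 1) % L                            # periodic track
    if nxt not in pos:                                # steric exclusion
        pos[i] = nxt
        if i == 0:                                    # track ribosome 0
            dwells.append(t - last_step)
            last_step = t

dwells = np.array(dwells)
print(f"mean dwell {dwells.mean():.3f}, CV {dwells.std()/dwells.mean():.2f}")
```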
  425. Comparison of VRX CT scanner geometries

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Melnyk, Roman; Duckworth, Christopher N.; Russ, Stephan; Jordan, Lawrence M.; Laughter, Joseph S.

    2001-06-01

    A technique called Variable-Resolution X-ray (VRX) detection greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) as the field size decreases. The technique is based on a principle called "projective compression" that allows both the resolution element and the sampling distance of a CT detector to scale with the subject or field size. For very large (40-50 cm) field sizes, resolution exceeding 2 cy/mm is possible, and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. This paper compares the benefits obtainable with two different VRX detector geometries: the single-arm geometry and the dual-arm geometry. The analysis is based on Monte Carlo simulations and direct calculations. The results of this study indicate that the dual-arm system appears to have more advantages than the single-arm technique.

  426. Spectrum transformation for divergent iterations

    NASA Technical Reports Server (NTRS)

    Gupta, Murli M.

    1991-01-01

    Certain spectrum transformation techniques are described that can be used to transform a diverging iteration into a converging one. Two techniques are considered, called spectrum scaling and spectrum enveloping, and it is discussed how to obtain the optimum values of the transformation parameters. Numerical examples are given to show how this technique can be used to transform diverging iterations into converging ones; the technique can also be used to accelerate the convergence of otherwise convergent iterations.
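The spectrum-scaling idea is easy to demonstrate numerically: a fixed-point iteration x <- G(x) whose iteration matrix has spectral radius above 1 diverges, while the scaled iteration x <- (1-w)x + wG(x) converges for a suitable parameter w, since its eigenvalues are (1-w) + w*lambda. The 2x2 example below is a toy chosen so one eigenvalue of A is -1.1.

```python
# Spectrum scaling: relax a divergent fixed-point iteration x = A x + b.
import numpy as np

A = np.array([[0.5, 1.2],
              [0.0, -1.1]])                  # eigenvalues 0.5 and -1.1
b = np.array([1.0, 1.0])

def iterate(w, steps=200):
    x = np.zeros(2)
    for _ in range(steps):
        x = (1 - w) * x + w * (A @ x + b)    # scaled iteration
    return x

print("w=1.0 (plain) :", iterate(1.0))       # blows up: |-1.1| > 1
print("w=0.5 (scaled):", iterate(0.5))       # eigenvalues 0.75, -0.05
print("exact         :", np.linalg.solve(np.eye(2) - A, b))
```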
  427. Optimization of Online Searching by Pre-Recording the Search Statements: A Technique for the HP-2645A Terminal.

    ERIC Educational Resources Information Center

    Oberhauser, O. C.; Stebegg, K.

    1982-01-01

    Describes the terminal's capabilities, ways to store and call up lines of statements, cassette tapes needed during searches, and the master tape's use for login storage. Advantages of the technique and two sources are listed. (RBF)

  428. Development of a mix design process for cold-in-place rehabilitation using foamed asphalt.

    DOT National Transportation Integrated Search

    2003-12-01

    This study evaluates one of the recycling techniques used to rehabilitate pavement, called Cold In-Place Recycling (CIR). CIR is one of the fastest growing road rehabilitation techniques because it is quick and cost-effective. The document reports on...

  429. Classification by diagnosing all absorption features (CDAF) for the most abundant minerals in airborne hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen

    2011-12-01

    Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in the electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert-system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique makes it possible to assign a class to the most abundant mineral in each pixel with high accuracy. The technique is based on the derivation of information from the reflectance spectra of the image: the spectral absorption features of each mineral are extracted from its laboratory-measured reflectance spectrum and compared with those extracted from the pixels in the image. The CDAF technique has been applied to an AVIRIS image, where the results show an overall accuracy better than 96%.
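Absorption-feature matching of the kind CDAF builds on usually starts with continuum removal. The sketch below fakes the whole pipeline: synthetic Gaussian absorption bands stand in for library and pixel spectra, a straight line between the spectrum endpoints stands in for the usual convex-hull continuum, and a simple correlation of continuum-removed spectra stands in for the published scoring rules.

```python
# Toy continuum-removal and feature-matching sketch (not Tetracorder/CDAF).
import numpy as np

wl = np.linspace(0.4, 2.5, 300)              # wavelength, micrometers

def spectrum(center, depth):
    cont = 0.6 + 0.1 * wl                    # sloping continuum
    return cont * (1 - depth * np.exp(-0.5 * ((wl - center) / 0.05) ** 2))

def continuum_removed(s):
    # crude stand-in for a convex hull: divide by the endpoint chord
    line = np.interp(wl, [wl[0], wl[-1]], [s[0], s[-1]])
    return s / line

library = {"mineral_A": spectrum(2.2, 0.3),  # hypothetical minerals
           "mineral_B": spectrum(1.4, 0.3)}
pixel = spectrum(2.2, 0.25) + np.random.default_rng(11).normal(0, 0.003, wl.size)

cr_pix = continuum_removed(pixel)
scores = {name: np.corrcoef(continuum_removed(ref), cr_pix)[0, 1]
          for name, ref in library.items()}
print(max(scores, key=scores.get), scores)   # best-matching mineral
```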
  430. Full optical model of micro-endoscope with optical coherence microscopy, multiphoton microscopy and visible capabilities

    NASA Astrophysics Data System (ADS)

    Vega, David; Kiekens, Kelli C.; Syson, Nikolas C.; Romano, Gabriella; Baker, Tressa; Barton, Jennifer K.

    2018-02-01

    While Optical Coherence Microscopy (OCM), Multiphoton Microscopy (MPM), and narrowband imaging are powerful imaging techniques that can be used to detect cancer, each imaging technique has limitations when used by itself. Combining them into an endoscope so that they work in synergy can help achieve high sensitivity and specificity for diagnosis at the point of care. Such complex endoscopes have an elevated risk of failure, and proper modeling ensures functionality and minimizes risk. We present full 2D and 3D models of a multimodality optical micro-endoscope, called a salpingoscope, to provide real-time detection of carcinomas. The models evaluate the illumination and light-collection capabilities of the various modalities. The design features two optical paths with different numerical apertures (NA) through a single lens system with a scanning optical fiber. The dual path is achieved using dichroic coatings embedded in a triplet. A high-NA optical path is designed to perform OCM and MPM, while a low-NA optical path is designed for the visible spectrum, to navigate the endoscope to areas of interest, and for narrowband imaging. Various tests, such as the reflectance profile of homogeneous epithelial tissue, were performed to adjust the models properly. Light-collection models for the different modalities were created and tested for efficiency. While it is challenging to evaluate the efficiency of multimodality endoscopes, the models ensure that the system is designed for the expected light-collection levels and provides detectable signal for the intended imaging.

  431. Calling and life satisfaction: it's not about having it, it's about living it.

    PubMed

    Duffy, Ryan D; Allan, Blake A; Autin, Kelsey L; Bott, Elizabeth M

    2013-01-01

    The present study examined the relation of career calling to life satisfaction among a diverse sample of 553 working adults, with a specific focus on the distinction between perceiving a calling (sensing a calling to a career) and living a calling (actualizing one's calling in one's current career). As hypothesized, the relation of perceiving a calling to life satisfaction was fully mediated by living a calling. On the basis of this finding, a structural equation model was tested to examine possible mediators between living a calling and life satisfaction. As hypothesized, the relation of living a calling to life satisfaction was partially mediated by job satisfaction and life meaning, and the link between living a calling and job satisfaction was mediated by work meaning and career commitment. Modifications of the model also revealed that the link of living a calling to life meaning was mediated by work meaning. Implications for research and practice are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
  432. 'Enzyme Test Bench': A biochemical application of multi-rate modeling

    NASA Astrophysics Data System (ADS)

    Rachinskiy, K.; Schultze, H.; Boy, M.; Büchs, J.

    2008-11-01

    In the expanding field of "white biotechnology", enzymes are frequently applied to catalyze biochemical reactions that convert raw materials into valuable products. Evolutionarily designed to catalyze the metabolism in any life form, they selectively accelerate complex reactions under physiological conditions. Modern techniques, such as directed evolution, have been developed to satisfy the increasing demand for enzymes. Applying these techniques together with rational protein design, we aim at improving enzymes' activity, selectivity and stability. To tap the full potential of these techniques, it is essential to combine them with adequate screening methods. Nowadays a great number of high-throughput colorimetric and fluorescent enzyme assays are applied to measure initial enzyme activity. However, the prediction of enzyme long-term stability from short experiments is still a challenge. A new high-throughput technique for enzyme characterization, with specific attention to long-term stability, called the 'Enzyme Test Bench', is presented. The concept of the Enzyme Test Bench consists of short-term enzyme tests conducted under partly extreme conditions to predict the enzyme's long-term stability under moderate conditions. The technique is based on the mathematical modeling of temperature-dependent enzyme activation and deactivation. By adapting the temperature profiles in sequential experiments through optimal nonlinear experimental design, long-term deactivation effects can be purposefully accelerated and detected within hours. During the experiment the enzyme activity is measured online to estimate the model parameters from the obtained data. Thus, the enzyme activity and long-term stability can be calculated as functions of temperature. The results of the characterization, based on microliter-format experiments lasting hours, are in good agreement with the results of long-term experiments in 1 L format. Thus, the new technique allows for both enzyme screening with regard to long-term stability and the choice of the optimal process temperature. This article gives a successful example of the application of multi-rate modeling, experimental design and parameter estimation in biochemical engineering. At the same time, it shows the limitations of the methods at the state of the art and addresses the current problems to the applied mathematics community.
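The core modeling step above, temperature-dependent deactivation fitted from short assays and extrapolated to long times, can be caricatured with first-order decay and an Arrhenius-type rate. All parameter values and the data below are invented; the published method uses optimal nonlinear experimental design rather than a fixed grid of assay conditions.

```python
# Fit a first-order, Arrhenius-type deactivation model to synthetic
# short-term activity data, then extrapolate to long times.
import numpy as np
from scipy.optimize import curve_fit

def residual_activity(t_T, k_ref, e_a):
    # activity after time t at temperature T (K); rate follows
    # k(T) = k_ref * exp(-E_a/R * (1/T - 1/T_ref))
    t, T = t_T
    R, T_ref = 8.314, 310.0
    k = k_ref * np.exp(-e_a / R * (1.0 / T - 1.0 / T_ref))
    return np.exp(-k * t)

rng = np.random.default_rng(7)
t = np.tile(np.linspace(0, 3, 10), 3)            # assay times (h)
T = np.repeat([310.0, 320.0, 330.0], 10)         # assay temperatures (K)
y = residual_activity((t, T), 0.15, 90_000.0) + rng.normal(0, 0.02, t.size)

(k_ref, e_a), _ = curve_fit(residual_activity, (t, T), y, p0=[0.1, 50_000.0])
# extrapolate: predicted residual activity after 100 h at 300 K
print(residual_activity((np.array([100.0]), np.array([300.0])), k_ref, e_a))
```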
  433. A general class of multinomial mixture models for anuran calling survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2005-01-01

    We propose a general framework for modeling anuran abundance using data collected from commonly used calling surveys. The data generated by calling surveys are indices of calling intensity (vocalization of males) that do not have a precise link to actual population size and are sensitive to factors that influence anuran behavior. We formulate a model for calling-index data in terms of the maximum potential calling index that could be observed at a site (the 'latent abundance class'), given its underlying breeding population, and we focus attention on estimating the distribution of this latent abundance class. A critical consideration in estimating the latent structure is imperfect detection, which causes the observed abundance index to be less than or equal to the latent abundance class. We specify a multinomial sampling model for the observed abundance index, conditional on the latent abundance class. Estimation of the latent abundance class distribution is based on the marginal likelihood of the index data, integrating over the latent class distribution. We apply the proposed modeling framework to data collected as part of the North American Amphibian Monitoring Program (NAAMP).

  434. Multiple imputation of missing covariates for the Cox proportional hazards cure model

    PubMed Central

    Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G

    2016-01-01

    We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects "cured," and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification (FCS). We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. PMID:27439726

  435. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
  436. Discriminative least squares regression for multiclass classification and feature selection.

    PubMed

    Xiang, Shiming; Nie, Feiping; Meng, Gaofeng; Pan, Chunhong; Zhang, Changshui

    2012-11-01

    This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions, such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of the L2,1 norm of a matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
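One simplified reading of the ε-dragging scheme: one-hot targets are allowed to drift away from 0/1 in class-dependent directions by a learned nonnegative amount M, alternating with a ridge-style least-squares refit of the weights. The data, regularizer, and iteration count below are assumptions for illustration.

```python
# Alternating sketch of epsilon-dragging least squares regression.
import numpy as np

rng = np.random.default_rng(8)
n, d, c, lam = 150, 5, 3, 0.1
X = rng.normal(size=(n, d))
labels = rng.integers(c, size=n)
X[np.arange(n), labels % d] += 3.0          # make classes separable-ish

Y = np.eye(c)[labels]                        # one-hot targets
B = np.where(Y == 1, 1.0, -1.0)              # dragging directions
M = np.zeros((n, c))                         # nonnegative drag amounts

Xb = np.hstack([X, np.ones((n, 1))])         # absorb the bias term
for _ in range(20):
    T = Y + B * M                            # dragged targets
    W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(d + 1), Xb.T @ T)
    M = np.maximum(B * (Xb @ W - Y), 0.0)    # elementwise optimal drag

pred = np.argmax(Xb @ W, axis=1)
print("training accuracy:", (pred == labels).mean())
```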
  437. A Petri net synthesis theory for modeling flexible manufacturing systems.

    PubMed

    Jeng, M D

    1997-01-01

    A theory that synthesizes Petri nets for modeling flexible manufacturing systems is presented. The theory adopts a bottom-up, or modular-composition, approach to construct net models. Each module is modeled as a resource control net (RCN), which represents a subsystem that controls a resource type in a flexible manufacturing system. Interactions among the modules are described by common transitions and transition subnets. The net obtained by merging the modules under two minimal restrictions is shown to be conservative and thus bounded. An algorithm is developed to detect two sufficient conditions for structural liveness of the net. The algorithm examines only the net's structure and the initial marking, and appears to be more efficient than state enumeration techniques such as the reachability tree method. In this paper, the sufficient conditions for liveness are shown to be related to structural objects called siphons. To demonstrate the applicability of the theory, a flexible manufacturing system of moderate size is modeled and analyzed using the proposed theory.

  438. The role of data fusion in predictive maintenance using digital twin

    NASA Astrophysics Data System (ADS)

    Liu, Zheng; Meyendorf, Norbert; Mrad, Nezih

    2018-04-01

    The modern aerospace industry is migrating from reactive to proactive and predictive maintenance to increase platform operational availability and efficiency, extend its useful life cycle and reduce its life cycle cost. Multiphysics modeling together with data-driven analytics generates a new paradigm called the "Digital Twin." The digital twin is a living model of the physical asset or system: it continually adapts to operational changes based on the collected online data and information, and it can forecast the future of its physical counterpart. This paper reviews the overall framework for developing a digital twin, coupled with industrial Internet of Things technology, to advance aerospace platform autonomy. Data fusion techniques play a particularly significant role in the digital twin framework: the flow of information from raw data to high-level decision making is propelled by sensor-to-sensor, sensor-to-model, and model-to-model fusion. The paper further discusses and identifies the role of data fusion in the digital twin framework for aircraft predictive maintenance.

  439. Option pricing for stochastic volatility model with infinite activity Lévy jumps

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2016-08-01

    The purpose of this paper is to apply a stochastic volatility model driven by infinite-activity Lévy processes to option pricing, capturing the infinite-activity jump behavior and time-varying volatility that are consistent with the phenomena observed in underlying asset dynamics. We pay special attention to three typical Lévy processes that replace the compound Poisson jumps in the Bates model, aiming to capture the leptokurtic feature of asset returns and the volatility clustering effect in return variance. By utilizing the analytical characteristic function and the fast Fourier transform technique, a closed-form option pricing formula can be derived. An intelligent global optimization search algorithm called Differential Evolution is introduced into these high-dimensional models for parameter calibration, so as to improve the calibration quality of the fitted option models. Finally, we perform empirical research using both time-series data and options data from financial markets to illustrate the effectiveness and superiority of the proposed method.
  440. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis on the techniques used to achieve computational efficiency. These techniques, as well as cluster deployment of the simulator, have enabled tuning and robust testing of image processing algorithms, and the production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.

  441. Structural Embeddings: Mechanization with Method

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Rushby, John

    1999-01-01

    The most powerful tools for the analysis of formal specifications are general-purpose theorem provers and model checkers, but these tools provide scant methodological support. Conversely, those approaches that do provide a well-developed method generally have less powerful automation. It is natural, therefore, to try to combine the better-developed methods with the more powerful general-purpose tools. An obstacle is that the methods and the tools often employ very different logics. We argue that methods are separable from their logics and are largely concerned with the structure and organization of specifications. We propose a technique called structural embedding that allows the structural elements of a method to be supported by a general-purpose tool, while substituting the logic of the tool for that of the method. We have found this technique quite effective, and we provide some examples of its application. We also suggest how general-purpose systems could be restructured to better support this activity.
  442. Visualising Conversation Structure across Time: Insights into Effective Doctor-Patient Consultations

    PubMed Central

    Angus, Daniel; Watson, Bernadette; Smith, Andrew; Gallois, Cindy; Wiles, Janet

    2012-01-01

    Effective communication between healthcare professionals and patients is critical to patients' health outcomes. The doctor/patient dialogue has been extensively researched from different perspectives, with findings emphasising a range of behaviours that lead to effective communication. Much of this research involves self-reports, however, so that behavioural engagement cannot be disentangled from patients' ratings of effectiveness. In this study we used a highly efficient and time-economic automated computer visualisation technique called Discursis to analyse conversational behaviour in consultations. Discursis automatically builds an internal language model from a transcript, mines the transcript for its conceptual content, and generates an interactive visual account of the discourse. The resultant visual account of the whole consultation can be analysed for patterns of engagement between interactants. The findings from this study show that Discursis is effective at highlighting a range of consultation techniques, including communication accommodation, engagement, and repetition. PMID:22693629

  443. Web Navigation Sequences Automation in Modern Websites

    NASA Astrophysics Data System (ADS)

    Montoto, Paula; Pan, Alberto; Raposo, Juan; Bellas, Fernando; López, Javier

    Most of today's web sources are designed to be used by humans, but they do not provide suitable interfaces for software programs. That is why a growing interest has arisen in so-called web automation applications, which are widely used for different purposes such as B2B integration, automated testing of web applications, or technology and business watch. Previous proposals assume models for generating and reproducing navigation sequences that are not able to correctly deal with new websites using technologies such as AJAX: on one hand, existing systems only allow recording simple navigation actions, and on the other hand, they are unable to detect the end of the effects caused by a user action. In this paper, we propose a set of new techniques to record and execute web navigation sequences able to deal with all the complexity present in AJAX-based websites. We also present an exhaustive evaluation of the proposed techniques, which shows very promising results.
  444. The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holtzman, Jon A.; Marriner, John

    2010-08-26

    We present ugriz light curves for 146 spectroscopically confirmed or spectroscopically probable Type Ia supernovae from the 2005 season of the SDSS-II Supernova survey. The light curves have been constructed using a photometric technique that we call scene modeling, which is described in detail here; the major feature is that supernova brightnesses are extracted from a stack of images without spatial resampling or convolution of the image data. This procedure produces accurate photometry along with accurate estimates of the statistical uncertainty, and can be used to derive photometry taken with multiple telescopes. We discuss various tests of this technique that demonstrate its capabilities. We also describe the methodology used for the calibration of the photometry, and present calibrated magnitudes and fluxes for all of the spectroscopic SNe Ia from the 2005 season.

  445. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally resolved photoluminescence imaging setup based on the so-called single-pixel camera, a compressive sensing technique that enables imaging using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector, we attained a realization of a spectrally resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, pattern fineness, and the number of data points. Finally, we compare the presented technique to hyperspectral imaging based on sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral ranges.
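The single-pixel measurement model above is just a set of inner products between the scene and a stack of random patterns. The sketch below fakes it end to end with exponentially distributed pattern intensities (a common model for fully developed speckle) and a ridge-regularized least-squares reconstruction; with fewer measurements than pixels a sparsity prior would be needed instead, and the spectral dimension is omitted entirely.

```python
# Single-pixel imaging toy: y_i = <pattern_i, scene>, then ridge recovery.
import numpy as np

rng = np.random.default_rng(9)
side = 16
img = np.zeros((side, side))
img[4:12, 6:10] = 1.0                                 # toy luminescent patch
x_true = img.ravel()

n_meas = 400                                          # > 256 pixels here
P = rng.exponential(1.0, size=(n_meas, side * side))  # speckle-like patterns
y = P @ x_true + rng.normal(0, 0.05, n_meas)          # detector readings

lam = 1.0                                             # ridge regularizer
A = P.T @ P + lam * np.eye(side * side)
x_rec = np.linalg.solve(A, P.T @ y).reshape(side, side)
print("peak of recovered patch:", x_rec.max().round(2))
```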
  445. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a technique of compressive sensing, which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of speckle patterns, pattern fineness, and number of data points. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
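A minimal numerical sketch of the single-pixel measurement model: each detector reading is the overlap of the scene with one random pattern, and the scene is recovered from fewer measurements than pixels. Plain ridge recovery is used here for brevity; a real compressive-sensing reconstruction would use a sparsity prior instead. All sizes are illustrative.

```python
# Single-pixel-camera sketch: y[i] = <pattern_i, scene>, recover the scene by
# ridge-regularized least squares from m < n measurements.
import numpy as np

rng = np.random.default_rng(1)
n = 16 * 16                                   # 16x16 scene, flattened
m = 200                                       # number of patterns, m < n

scene = np.zeros(n); scene[40:60] = 1.0       # a simple bright bar as the object
patterns = rng.random((m, n))                 # stand-ins for laser speckle patterns
y = patterns @ scene                          # single-pixel detector readings

# x_hat = argmin ||P x - y||^2 + lam ||x||^2  (closed-form ridge solution)
lam = 1e-2
x_hat = np.linalg.solve(patterns.T @ patterns + lam * np.eye(n), patterns.T @ y)
print("relative error:", np.linalg.norm(x_hat - scene) / np.linalg.norm(scene))
```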
  446. Linear Water Waves

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy, and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'

  447. Glove-based approach to online signature verification

    PubMed

    Kamel, Nidal S.; Sayeed, Shohel; Ellis, Grant A.

    2008-06-01

    Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel online signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove is presented as an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and its performance is shown to be able to recognize forgery signatures with a false acceptance rate of less than 1.2%.
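The abstract pins down the two computational steps, an r-dimensional principal subspace from the SVD and angles between subspaces, closely enough to sketch. The data shapes (22 glove channels) and noise levels below are illustrative, not the paper's.

```python
# Sketch of the SVD-based verification idea: keep the r leading singular vectors
# of a glove-data matrix (its principal subspace) and compare two signatures by
# the principal angles between their subspaces.
import numpy as np

def principal_subspace(A, r):
    # Leading left singular vectors sense the maximal energy of A.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def subspace_angles(U1, U2):
    # Cosines of the principal angles are the singular values of U1^T U2.
    cosines = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
    return np.arccos(cosines)

rng = np.random.default_rng(2)
genuine = rng.standard_normal((22, 50))             # 22 channels x 50 time samples
repeat  = genuine + 0.05 * rng.standard_normal(genuine.shape)   # same signer
forgery = rng.standard_normal(genuine.shape)                    # different signer

r = 3
ref = principal_subspace(genuine, r)
print("genuine repeat, max angle:", subspace_angles(ref, principal_subspace(repeat, r)).max())
print("forgery,        max angle:", subspace_angles(ref, principal_subspace(forgery, r)).max())
```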
  448. NeuroPhysics: Studying how neurons create the perception of space-time using Physics' tools and techniques

    NASA Astrophysics Data System (ADS)

    Dhingra, Shonali; Sandler, Roman; Rios, Rodrigo; Vuong, Cliff; Mehta, Mayank

    All animals naturally perceive the abstract concept of space-time. A brain region called the Hippocampus is known to be important in creating these perceptions, but the underlying mechanisms are unknown. In our lab we employ several experimental and computational techniques from Physics to tackle this fundamental puzzle. Experimentally, we use ideas from Nanoscience and Materials Science to develop techniques to measure the activity of hippocampal neurons in freely-behaving animals. Computationally, we develop models to study neuronal activity patterns, which are point processes that are highly stochastic and multidimensional. We then apply these techniques to collect and analyze neuronal signals from rodents while they are exploring space in the real world or in virtual reality with various stimuli. Our findings show that under these conditions neuronal activity depends on various parameters, such as sensory cues, including visual and auditory, and behavioral cues, including linear and angular position and velocity. Further, neuronal networks create internally-generated rhythms, which influence the perception of space and time. In totality, these results further our understanding of how the brain develops a cognitive map of our surrounding space and keeps track of time.

  449. ProteinAC: a frequency domain technique for analyzing protein dynamics

    NASA Astrophysics Data System (ADS)

    Bozkurt Varolgunes, Yasemin; Demir, Alper

    2018-03-01

    It is widely believed that the interactions of proteins with ligands and other proteins are determined by their dynamic characteristics as opposed to only static, time-invariant processes. We propose a novel computational technique, called ProteinAC (PAC), that can be used to analyze small-scale functional protein motions as well as interactions with ligands directly in the frequency domain. PAC was inspired by a frequency domain analysis technique that is widely used in electronic circuit design, and can be applied to both coarse-grained and all-atom models. It can be considered a generalization of previously proposed static perturbation-response methods, where the frequency of the perturbation becomes the key. We discuss the precise relationship of PAC to static perturbation-response schemes. We show that the frequency of the perturbation may be an important factor in protein dynamics. Perturbations at different frequencies may result in completely different response behavior while magnitude and direction are kept constant. Furthermore, we introduce several novel frequency-dependent metrics that can be computed via PAC in order to characterize response behavior. We present results for the ferric binding protein that demonstrate the potential utility of the proposed techniques.
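ProteinAC's equations are not given in the abstract; the sketch below only illustrates the underlying circuit-style idea of frequency-domain perturbation response for a linearized network x' = A x + b cos(w t), whose steady-state response is governed by H(w) = (i w I - A)^(-1) b. The 3-node coupling matrix is invented purely for illustration.

```python
# Generic frequency-domain perturbation-response sketch (in the spirit of, but
# not identical to, ProteinAC): sweep the perturbation frequency w and evaluate
# the steady-state response magnitudes |H(w)| = |(i w I - A)^(-1) b|.
import numpy as np

A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -3.0,  1.0],
              [ 0.0,  1.0, -1.0]])         # toy coupling matrix, not a real protein
b = np.array([1.0, 0.0, 0.0])              # sinusoidal perturbation at node 0

for w in (0.1, 1.0, 10.0):
    H = np.linalg.solve(1j * w * np.eye(3) - A, b)
    print(f"w = {w:5.1f}   response per node:", np.round(np.abs(H), 3))
# The response pattern across nodes changes with w even though the perturbation's
# magnitude and direction are fixed, which is the point made in the abstract.
```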
  450. A technique for routinely updating the ITU-R database using radio occultation electron density profiles

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno

    2013-09-01

    Well-credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute the parameters using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving NmF2 and hmF2 values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps, estimated by the Least Squares technique, is about 7 % of the value of the estimated parameter for the F2-peak electron density, and from 2.0 to 5.6 km (2 %) for the height.
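A generic sketch of the re-weighted least squares idea used here for down-weighting unreliable measurements (the paper's exact weighting scheme and profile model are not reproduced): refit repeatedly, shrinking the weight of points with large residuals.

```python
# Iteratively re-weighted least squares: outlying points, like bad profile data,
# receive progressively smaller weights and stop influencing the fit.
import numpy as np

def irls(X, y, n_iter=10, eps=1e-6):
    w = np.ones(len(y))
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        residuals = y - X @ beta
        w = 1.0 / np.maximum(np.abs(residuals), eps)   # ~L1 weights; outliers fade
    return beta, w

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(40)
y[::9] += 4.0                        # corrupt a few points

beta, w = irls(X, y)
print("fitted coefficients:", np.round(beta, 3))
print("weights on corrupted points:", np.round(w[::9], 4))
```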
  451. Environmental Assessment and Monitoring with ICAMS (Image Characterization and Modeling System) Using Multiscale Remote-Sensing Data

    NASA Technical Reports Server (NTRS)

    Lam, N.; Qiu, H.-I.; Quattrochi, Dale A.; Zhao, Wei

    1997-01-01

    With the rapid increase in spatial data, especially in the NASA-EOS (Earth Observing System) era, it is necessary to develop efficient and innovative tools to handle and analyze these data so that environmental conditions can be assessed and monitored. A main difficulty facing geographers and environmental scientists in environmental assessment and measurement is that spatial analytical tools are not easily accessible. We have recently developed a remote sensing/GIS software module called the Image Characterization and Modeling System (ICAMS) to provide specialized spatial analytical tools for the measurement and characterization of satellite and other forms of spatial data. ICAMS runs on both the Intergraph-MGE and Arc/Info UNIX and Windows-NT platforms. The main techniques in ICAMS include fractal measurement methods, variogram analysis, spatial autocorrelation statistics, textural measures, aggregation techniques, the normalized difference vegetation index (NDVI), and delineation of land/water and vegetated/non-vegetated boundaries. In this paper, we demonstrate the main applications of ICAMS on the Intergraph-MGE platform using Landsat Thematic Mapper images from the city of Lake Charles, Louisiana. While the utility of ICAMS' spatial measurement methods (e.g., fractal indices) in assessing environmental conditions remains to be researched, making the software available to a wider scientific community can permit the techniques in ICAMS to be evaluated and used for a diversity of applications. The findings from these various studies should lead to improved algorithms and more reliable models for environmental assessment and monitoring.

  452. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    NASA Astrophysics Data System (ADS)

    Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran

    2015-12-01

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) formulation and the other on the Tokuyama-Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
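In standard notation (a sketch; free-evolution terms are absorbed into the kernel and generator for brevity), the two forms and the propagator-based expression for the TCL generator suggested by the abstract read:

```latex
% TC vs TCL forms of the reduced dynamics. U(t) is the reduced system
% propagator, rho(t) = U(t) rho(0), so the TCL generator needs inverses
% in the reduced space only.
\begin{align}
  \dot{\rho}(t) &= \int_0^{t} \kappa(t-\tau)\,\rho(\tau)\,\mathrm{d}\tau
      &&\text{(TC: memory kernel $\kappa$)}\\
  \dot{\rho}(t) &= \mathcal{G}(t)\,\rho(t)
      &&\text{(TCL: time-local generator $\mathcal{G}$)}\\
  \mathcal{G}(t) &= \dot{U}(t)\,U(t)^{-1}
      &&\text{(generator from the reduced propagator)}
\end{align}
```

The last line follows directly from differentiating rho(t) = U(t) rho(0), which is why the generator can be built from system observables alone.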
  453. Strong Langmuir Turbulence and Four-Wave Mixing

    NASA Astrophysics Data System (ADS)

    Glanz, James

    1991-02-01

    The staircase expansion is a new mathematical technique for deriving reduced, nonlinear-PDE descriptions from the plasma-moment equations. Such descriptions incorporate only the most significant linear and nonlinear terms of more complex systems. The technique is used to derive a set of Dawson-Zakharov or "master" equations, which unify and generalize previous work and show the limitations of models commonly used to describe nonlinear plasma waves. Fundamentally new wave-evolution equations are derived that admit of exact nonlinear solutions (solitary waves). Analytic calculations illustrate the competition between well-known effects of self-focusing, which require coupling to ion motion, and pure-electron nonlinearities, which are shown to be especially important in curved geometries. Also presented is an N-moment hydrodynamic model derived from the Vlasov equation. In this connection, the staircase expansion is shown to remain useful for all values of N >= 3. The relevance of the present work to nonlocally truncated hierarchies, which more accurately model dissipation, is briefly discussed. Finally, the general formalism is applied to the problem of electromagnetic emission from counterpropagating Langmuir pumps. It is found that previous treatments have neglected order-unity effects that increase the emission significantly. Detailed numerical results are presented to support these conclusions. The staircase expansion, so called because of its appearance when written out, should be effective whenever the largest contribution to the nonlinear wave remains "close" to some given frequency. Thus the technique should have application to studies of wake-field acceleration schemes and anomalous damping of plasma waves.

  454. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar-sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard-mitigating measures that might be taken are considered.

  455. Trick or Technique?

    ERIC Educational Resources Information Center

    Sheard, Michael

    2009-01-01

    More often than one might at first imagine, a simple trick involving integration by parts can be used to compute indefinite integrals in unexpected and amusing ways. A systematic look at the trick illuminates the question of whether the trick is useful enough to be called an actual technique of integration.
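The article's own examples are not reproduced in the abstract; one classic instance of this genre, exploiting the freedom to add any constant to the antiderivative v in integration by parts, runs as follows:

```latex
% Integration by parts with u = ln(1+x), dv = x dx, choosing the antiderivative
% v = (x^2 - 1)/2 rather than x^2/2, so that v is divisible by (1+x) and the
% remaining integral collapses.
\begin{aligned}
\int x\ln(1+x)\,dx
  &= \frac{x^{2}-1}{2}\,\ln(1+x) - \int \frac{x^{2}-1}{2(1+x)}\,dx\\
  &= \frac{x^{2}-1}{2}\,\ln(1+x) - \int \frac{x-1}{2}\,dx\\
  &= \frac{x^{2}-1}{2}\,\ln(1+x) - \frac{x^{2}}{4} + \frac{x}{2} + C .
\end{aligned}
```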
  456. Energy-based dosimetry of low-energy, photon-emitting brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Malin, Martha J.

    Model-based dose calculation algorithms (MBDCAs) for low-energy, photon-emitting brachytherapy sources have advanced to the point where the algorithms may be used in clinical practice. Before these algorithms can be used, a methodology must be established to verify the accuracy of the source models used by the algorithms. Additionally, the source strength metric for these algorithms must be established. This work explored the feasibility of verifying the source models used by MBDCAs by measuring the differential photon fluence emitted from the encapsulation of the source. The measured fluence could be compared to that modeled by the algorithm to validate the source model. This work examined how the differential photon fluence varied with position and angle of emission from the source, and the resolution that these measurements would require for dose computations to be accurate to within 1.5%. Both the spatial and angular resolution requirements were determined. The techniques used to determine the resolution required for measurements of the differential photon fluence were applied to determine why dose-rate constants determined using a spectroscopic technique disagreed with those computed using Monte Carlo techniques. The discrepancy between the two techniques had been published previously, but its cause was not known. This work determined the impact that some of the assumptions used by the spectroscopic technique had on the accuracy of the calculation. The assumption of isotropic emission was found to cause the largest discrepancy in the spectroscopic dose-rate constant. Finally, this work improved the instrumentation used to measure the rate at which energy leaves the encapsulation of a brachytherapy source. This quantity is called emitted power (EP) and is presented as a possible source strength metric for MBDCAs. A calorimeter that measured EP was designed and built, and the theoretical framework that the calorimeter relies upon to measure EP was established. Four clinically relevant 125I brachytherapy sources were measured with the instrument, and the measured EP was compared to an air-kerma strength-derived EP to test the accuracy of the instrument. The instrument was accurate to within 10%, with three out of the four source measurements accurate to within 4%.
  457. New optical tomographic & topographic techniques for biomedical applications

    NASA Astrophysics Data System (ADS)

    Buytaert, Jan

    The mammalian middle ear contains the eardrum and the three auditory ossicles, and forms an impedance match between sound in air and pressure waves in the fluid of the inner ear. Without this intermediate system, with its unsurpassed efficiency and dynamic range, we would be practically deaf. Physics-based modeling of this extremely complex mechanical system is necessary to help our basic understanding of the functioning of hearing. Highly realistic models will make it possible to predict the outcome of surgical interventions and to optimize the design of ossicle prostheses and active middle ear implants. To obtain such models with realistic output, basic input data is still missing. In this dissertation I developed and used two new optical techniques to obtain two essential sets of data: accurate three-dimensional morphology of the middle ear structures, and elasticity parameters of the eardrum. The first technique is a new method for optical tomography of macroscopic biomedical objects, which makes it possible to measure the three-dimensional geometry of the middle ear ossicles and of the soft tissues connecting and suspending them. I made a new, high-resolution version of this orthogonal-plane fluorescence optical sectioning method to obtain micrometer resolution in macroscopic specimens. The result is a complete 3-D model of the middle (and inner) ear of the gerbil in unprecedented quality. Beyond high-resolution morphological models of the middle ear structures, I applied the technique in other fields of research as well. The second device works according to a new optical profilometry technique which allows measurement of the shape and deformations of the eardrum and other membranes or objects. The approach is called projection moire profilometry, and creates moire interference fringes which contain the height information. I developed a setup which uses liquid crystal panels for grid projection and optical demodulation. Hence no moving parts are present and the setup is entirely digitally controlled. This measurement method was developed to determine the elasticity parameters of the eardrum in situ, but other surface shapes can also be measured.
  458. Coupled diffusion and mechanics in battery electrodes

    NASA Astrophysics Data System (ADS)

    Eshghinejad, Ahmadreza

    We are living in a world with continuous production and consumption of energy. Energy production in the past decades has started to move away from petrochemical sources toward sustainable sources such as solar, wind and geothermal, and energy consumption is adapting to the sustainable sources as well. For instance, in recent years electric vehicles, which consume sustainable electric energy stored in their batteries, have grown fast. In this direction, materials are becoming increasingly important for storing electric energy. Although technologies such as Li-ion batteries and solid-oxide fuel cells are commercially available for energy applications, improvements are crucial for the next generation of many other technologies producing or consuming sustainable energy. A critical aspect of the electrochemical activities involved in energy storage technologies such as Li-ion batteries and solid-oxide fuel cells is the diffusion of ions into the electrode materials. This process ultimately governs various functional properties of the batteries, such as capacity and charging/discharging rates. The first goal of this dissertation is to develop mathematical tools to analyze the ionic diffusion and investigate its coupling with mechanics in electrodes. For this purpose, a thermodynamics-based modeling framework is developed and numerically solved using two numerical methods to analyze ionic diffusion in heterogeneous and structured electrodes. The next goal of this dissertation is to develop and analyze characterization techniques to probe the electrochemical processes at the nano-scale. To this end, the mathematical models are first employed to model a previously developed Atomic Force Microscopy-based technique for probing local electrochemical activities, called Electrochemical Strain Microscopy (ESM). This method probes the activities by inducing an AC electric field to perturb ionic activities and measuring the surface vibrations. Different aspects of this technique are analyzed and its limitations are discussed. These limitations move the dissertation toward the development of a new technique for probing the electrochemical activities, called Scanning Thermo-ionic Microscopy (STIM), which overcomes them. In this method, the local activities are probed by inducing AC temperature oscillations to perturb ionic activities and measuring the surface vibrations. The principal mathematical analysis of the coupled governing equations and the method of probing electrochemical activities are discussed in detail. The method is implemented into the AFM hardware/software, and the STIM response is confirmed by experiments on LiFePO4 and Sm-doped ceria as well-known battery and fuel cell electrodes. The STIM method provides a clean way of analyzing energy storage materials and designing novel nano-structured materials for improved performance. Finally, conclusions of the presented work are drawn in the last chapter, and future work to continue the development of the modeling and experiments is listed.

  459. Additive Manufacturing of Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Van Humbeeck, Jan

    2018-04-01

    Selective Laser Melting (SLM) is an additive manufacturing production process, also called 3D printing, in which functional, complex parts are produced by selectively melting patterns in consecutive layers of powder with a laser beam. The pattern the laser beam follows is controlled by software that calculates the pattern by slicing a 3D CAD model of the part to be constructed. Apart from SLM, other additive manufacturing techniques such as EBM (Electron Beam Melting), FDM (Fused Deposition Modelling), WAAM (Wire Arc Additive Manufacturing), LENS (Laser Engineered Net Shaping, such as laser cladding) and binder jetting also allow constructing complete parts layer upon layer. But since most experience in additive manufacturing of shape memory alloys has been collected with SLM, this paper will overview the potential, limits and problems of producing NiTi parts by SLM.

  460. Electric field computation and measurements in the electroporation of inhomogeneous samples

    NASA Astrophysics Data System (ADS)

    Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta

    2017-12-01

    In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is helped by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, which is called electroporation, exploits the conductivity of the tissues: however, the tumor tissue could be characterized by inhomogeneous areas, eventually causing a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations, considering a non-linear conductivity-field relationship, are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model in view of identifying the equivalent resistance between pairs of electrodes.
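The field problem behind such finite-element models is, in standard notation, the steady current-continuity equation with a field-dependent conductivity (a sketch; the paper's specific conductivity-field law is not reproduced here):

```latex
% V is the electric potential imposed between electrode pairs; the shape of
% sigma(E) is the non-linear modeling choice the paper investigates.
\nabla\cdot\big(\sigma(\lVert\mathbf{E}\rVert)\,\nabla V\big) = 0,
\qquad
\mathbf{E} = -\nabla V,
\qquad
\mathbf{J} = \sigma(\lVert\mathbf{E}\rVert)\,\mathbf{E}.
```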
  461. Two-scale homogenization to determine effective parameters of thin metallic-structured films

    PubMed Central

    Marigo, Jean-Jacques

    2016-01-01

    We present a homogenization method based on a matched asymptotic expansion technique to derive effective transmission conditions of thin structured films. The method leads unambiguously to effective parameters of the interface which define jump conditions or boundary conditions at an equivalent zero-thickness interface. The homogenized interface model is presented in the context of electromagnetic waves for metallic inclusions associated with Neumann or Dirichlet boundary conditions for transverse electric or transverse magnetic wave polarization. By comparison with full-wave simulations, the model is shown to be valid for thin interfaces up to thicknesses close to the wavelength. We also compare our effective conditions with the two-sided impedance conditions obtained in transmission line theory and to the so-called generalized sheet transition conditions. PMID:27616916
  462. Heuristic Diagrams as a Tool to Teach History of Science

    NASA Astrophysics Data System (ADS)

    Chamizo, José A.

    2012-05-01

    The graphic organizer called here the heuristic diagram, an improvement on Gowin's Vee heuristic, is proposed as a tool to teach the history of science. Heuristic diagrams have the purpose of helping students (or teachers, or researchers) to understand their own research, considering that asking questions and problem-solving are central to scientific activity. The left side, originally related in Gowin's Vee to philosophies, theories, models, laws or regularities, now agrees with Toulmin's concepts (language, models as representation techniques, and application procedures). Mexican science teachers without experience in science education research used the heuristic diagram to learn about the history of chemistry, considering also on the left side two different historical times: past and present. Through a semantic differential scale the teachers' attitude to the heuristic diagram was evaluated and its usefulness was demonstrated.

  463. Challenges of CAC in Heterogeneous Wireless Cognitive Networks

    NASA Astrophysics Data System (ADS)

    Wang, Jiazheng; Fu, Xiuhua

    Call admission control (CAC) is known as an effective functionality for ensuring the QoS of wireless networks. The vision of next generation wireless networks has led to the development of new call admission control (CAC) algorithms specifically designed for heterogeneous wireless cognitive networks. However, there are a number of challenges created by the dynamic spectrum access and scheduling techniques associated with cognitive systems. In this paper, for the first time, we recommend that CAC policies should distinguish between primary users and secondary users, and we propose a classification of the different CAC policies in cognitive network contexts. Although there has been some research under the umbrella of joint CAC and cross-layer optimization for wireless networks, the advent of cognitive networks adds additional problems. We present conceptual models for joint CAC and cross-layer optimization, respectively. Also, the benefit of cognition can only be realized fully if application requirements and traffic flow contexts are determined or inferred in order to know what modes of operation and spectrum bands to use at each point in time. A process model of cognition-involved, per-flow-based CAC is presented. Because there may be a number of parameters on different levels affecting a CAC decision, and the conditions for accepting or rejecting a call must be computed quickly and frequently, simplicity and practicability are particularly important for designing a feasible CAC algorithm. In short, a more thorough understanding of CAC in heterogeneous wireless cognitive networks may help one to design better CAC algorithms.
  464. Determination of the optimal number of components in independent components analysis

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N.

    2018-03-01

    Independent components analysis (ICA) may be considered one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide on the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals, is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
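A much-simplified sketch in the spirit of Random_ICA as described above, not the published procedure: split the samples into two random halves, extract k components from each, and ask how reproducibly the component signatures reappear. The scoring rule, data, and sizes are illustrative.

```python
# Reproducibility-based selection of the number of ICs: reliable components
# should reappear in both random halves of the data.
import numpy as np
from sklearn.decomposition import FastICA

def split_match_score(X, k, rng):
    idx = rng.permutation(len(X))
    half = len(X) // 2
    comp_a = FastICA(n_components=k, random_state=0).fit(X[idx[:half]]).components_
    comp_b = FastICA(n_components=k, random_state=0).fit(X[idx[half:]]).components_
    corr = np.abs(np.corrcoef(comp_a, comp_b)[:k, k:])   # k x k cross-correlations
    return corr.max(axis=1).min()    # worst best-match over the k components

rng = np.random.default_rng(4)
t = np.linspace(0, 40, 600)
S = np.column_stack([np.sin(t), np.sign(np.sin(0.7 * t))])   # two true sources
X = S @ rng.random((2, 6)) + 0.01 * rng.standard_normal((600, 6))

for k in (1, 2, 3, 4):
    print(k, "components -> reproducibility", round(split_match_score(X, k, rng), 2))
# Scores typically stay high up to the true number of sources and drop beyond it.
```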
  465. Constrained diffusion or immobile fraction on cell surfaces: a new interpretation

    PubMed Central

    Feder, T. J.; Brust-Mascher, I.; Slattery, J. P.; Baird, B.; Webb, W. W.

    1996-01-01

    Protein lateral mobility in cell membranes is generally measured using fluorescence photobleaching recovery (FPR). Since the development of this technique, the data have been interpreted by assuming free Brownian diffusion of cell surface receptors in two dimensions, an interpretation that requires that a subset of the diffusing species remain immobile. The origin of this so-called immobile fraction remains a mystery. In FPR, the motions of thousands of particles are inherently averaged, inevitably masking the details of individual motions. Recently, tracking of individual cell surface receptors has identified several distinct types of motion (Gross and Webb, 1988; Ghosh and Webb, 1988, 1990, 1994; Kusumi et al., 1993; Qian et al., 1991; Slattery, 1995), thereby calling into question the classical interpretation of FPR data as free Brownian motion of a limited mobile fraction. We have measured the motion of fluorescently labeled immunoglobulin E complexed to high-affinity receptors (Fc epsilon RI) on rat basophilic leukemia cells using both single particle tracking and FPR. As in previous studies, our tracking results show that individual receptors may diffuse freely, or may exhibit restricted, time-dependent (anomalous) diffusion. Accordingly, we have analyzed FPR data by a new model to take this varied motion into account, and we show that the immobile fraction may be due to particles moving with the anomalous subdiffusion associated with restricted lateral mobility. Anomalous subdiffusion denotes random molecular motion in which the mean square displacements grow as a power law in time with a fractional positive exponent less than one. These findings call for a new model of cell membrane structure. PMID:8744314
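The anomalous-subdiffusion signature described above, mean square displacement growing as a power law t^alpha with alpha < 1, is easy to probe numerically. In the sketch below a confined random walk serves as a crude stand-in for restricted mobility, and alpha is fitted on log-log axes; all parameters are illustrative.

```python
# Fit the apparent MSD exponent alpha: ~1.0 for free Brownian tracks, < 1 over
# the fitted lag range when motion is restricted (here, by a confining spring).
import numpy as np

def msd(tracks, lags):
    return np.array([np.mean((tracks[:, L:] - tracks[:, :-L]) ** 2) for L in lags])

rng = np.random.default_rng(5)
n_tracks, n_steps, k = 400, 2000, 0.02
noise = rng.standard_normal((n_tracks, n_steps))

free = np.cumsum(noise, axis=1)                 # unrestricted Brownian tracks
confined = np.zeros_like(noise)                 # spring pulls the walker back
for t in range(1, n_steps):
    confined[:, t] = (1 - k) * confined[:, t - 1] + noise[:, t]

lags = np.arange(1, 200)
for name, tracks in (("free", free), ("confined", confined)):
    alpha = np.polyfit(np.log(lags), np.log(msd(tracks, lags)), 1)[0]
    print(f"{name:9s} fitted alpha ~ {alpha:.2f}")
```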
  466. Biostatistics Series Module 10: Brief Overview of Multivariate Methods

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2017-01-01

    Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation, with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA) in which an additional independent variable of interest, the covariate, is brought into the analysis. It examines whether a difference persists after "controlling" for the effect of a covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.

  467. Graph wavelet alignment kernels for drug virtual screening

    PubMed

    Smalter, Aaron; Huan, Jun; Lushington, Gerald

    2009-06-01

    In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing graph local topology. We design a novel graph kernel function to utilize the topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding, that of the existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational costs for graph kernel computation, with more than ten-fold speedup.

  468. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
  469. The sensitivity of gas-phase models of dense interstellar clouds to changes in dissociative recombination branching ratios

    NASA Technical Reports Server (NTRS)

    Millar, T. J.; Defrees, D. J.; Mclean, A. D.; Herbst, E.

    1988-01-01

    The approach of Bates to the determination of neutral product branching ratios in ion-electron dissociative recombination reactions has been utilized in conjunction with quantum chemical techniques to redetermine branching ratios for a wide variety of important reactions of this class in dense interstellar clouds. The branching ratios have then been used in a pseudo-time-dependent model calculation of the gas-phase chemistry of a dark cloud resembling TMC-1, and the results compared with an analogous model containing previously used branching ratios. In general, the changes in branching ratios lead to stronger effects on calculated molecular abundances at steady state than at earlier times, and often lead to reductions in the calculated abundances of complex molecules. However, at the so-called 'early time' when complex molecule synthesis is most efficient, the abundances of complex molecules are hardly affected by the newly used branching ratios.

  470. The Consumer Health Information System Adoption Model

    PubMed

    Monkman, Helen; Kushniruk, Andre W.

    2015-01-01

    Derived from overlapping concepts in consumer health, a consumer health information system refers to any of the broad range of applications, tools, and educational resources developed to empower consumers with knowledge, techniques, and strategies to manage their own health. As consumer health information systems become increasingly popular, it is important to explore the factors that impact their adoption and success. Accumulating evidence indicates a relationship between usability and consumers' eHealth literacy skills on the one hand, and the demands consumer health information systems place on those skills on the other. Here, we present a new model called the Consumer Health Information System Adoption Model, which depicts both consumer eHealth literacy skills and system demands on eHealth literacy as moderators with the potential to affect the strength of the relationship between usefulness and usability (predictors of usage) and adoption, value, and successful use (actual usage outcomes). Strategies for aligning these two moderating factors are described.
  471. Modelling proteins' hidden conformations to predict antibiotic resistance

    PubMed Central

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-01-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find that their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258

  472. Evolutionary fuzzy modeling human diagnostic decisions

    PubMed

    Peña-Reyes, Carlos Andrés

    2004-05-01

    Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling, meaning the construction of fuzzy systems, is an arduous task, demanding the identification of many parameters. To solve it, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems of both high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a web-based tool, called COBRA, for aiding radiologists in mammography interpretation.
  473. Nonhydrostatic icosahedral atmospheric model (NICAM) for global cloud resolving simulations

    NASA Astrophysics Data System (ADS)

    Satoh, M.; Matsuno, T.; Tomita, H.; Miura, H.; Nasuno, T.; Iga, S.

    2008-03-01

    A new type of ultra-high resolution atmospheric global circulation model is developed. The new model is designed to perform "cloud resolving simulations" by directly calculating deep convection and meso-scale circulations, which play key roles not only in tropical circulations but in the global circulation of the atmosphere. Since cores of deep convection are a few km in horizontal size, they have not been directly resolved by existing atmospheric general circulation models (AGCMs). In order to drastically enhance horizontal resolution, a new framework for a global atmospheric model is required; we adopted nonhydrostatic governing equations and icosahedral grids for the new model, and call it the Nonhydrostatic ICosahedral Atmospheric Model (NICAM). In this article, we review the governing equations and numerical techniques employed, and present results from the unique 3.5-km mesh global experiments, with O(10^9) computational nodes, using realistic topography and land/ocean surface thermal forcing. The results show realistic behaviors of multi-scale convective systems in the tropics, which have not been captured by AGCMs. We also discuss the future perspective of the roles of the new model in the next generation of atmospheric sciences.

  474. Modeling protein structure at near atomic resolutions with Gorgon

    PubMed

    Baker, Matthew L.; Abeysinghe, Sasakthi S.; Schuh, Stephen; Coleman, Ross A.; Abrams, Austin; Marsh, Michael P.; Hryc, Corey F.; Ruths, Troy; Chiu, Wah; Ju, Tao

    2011-05-01

    Electron cryo-microscopy (cryo-EM) has played an increasingly important role in elucidating the structure and function of macromolecular assemblies in near native solution conditions. Typically, however, only non-atomic resolution reconstructions have been obtained for these large complexes, necessitating computational tools for integrating and extracting structural details. With recent advances in cryo-EM, maps at near-atomic resolutions have been achieved for several macromolecular assemblies, from which models have been manually constructed. In this work, we describe a new interactive modeling toolkit called Gorgon, targeted at intermediate to near-atomic resolution density maps (10-3.5 Å), particularly from cryo-EM. Gorgon's de novo modeling procedure couples sequence-based secondary structure prediction with feature detection and geometric modeling techniques to generate initial protein backbone models. Beyond model building, Gorgon is an extensible interactive visualization platform with a variety of computational tools for annotating a wide variety of 3D volumes. Examples from cryo-EM maps of Rotavirus and Rice Dwarf Virus are used to demonstrate its applicability to modeling protein structure. Copyright © 2011 Elsevier Inc. All rights reserved.
Beyond model building, Gorgon is an extensible interactive visualization platform with a variety of computational tools for annotating a wide variety of 3D volumes. Examples from cryo-EM maps of Rotavirus and Rice Dwarf Virus are used to demonstrate its applicability to modeling protein structure. Copyright © 2011 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2826847','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2826847"><span>Conceptual Models of Depression in Primary Care Patients: A Comparative Study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Karasz, Alison; Garcia, Nerina; Ferri, Lucia</p> <p>2009-01-01</p> <p>Conventional psychiatric treatment models are based on a biopsychiatric model of depression. A plausible explanation for low rates of depression treatment utilization among ethnic minorities and the poor is that members of these communities do not share the cultural assumptions underlying the biopsychiatric model. The study examined conceptual models of depression among depressed patients from various ethnic groups, focusing on the degree to which patients’ conceptual models ‘matched’ a biopsychiatric model of depression. The sample included 74 primary care patients from three ethnic groups screening positive for depression. We administered qualitative interviews assessing patients’ conceptual representations of depression. The analysis proceeded in two phases. The first phase involved a strategy called ‘quantitizing’ the qualitative data. A rating scheme was developed and applied to the data by a rater blind to study hypotheses. The data was subjected to statistical analyses. The second phase of the analysis involved the analysis of thematic data using standard qualitative techniques. Study hypotheses were largely supported. The qualitative analysis provided a detailed picture of primary care patients’ conceptual models of depression and suggested interesting directions for future research. PMID:20182550</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1047961.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1047961.pdf"><span>Cold Calling and Web Postings: Do They Improve Students' Preparation and Learning in Statistics?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Levy, Dan</p> <p>2014-01-01</p> <p>Getting students to prepare well for class is a common challenge faced by instructors all over the world. This study investigates the effects that two frequently used techniques to increase student preparation--web postings and cold calling--have on student outcomes. 
The study is based on two experiments and a qualitative study conducted in a…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=accounting+AND+career&pg=6&id=EJ965204','ERIC'); return false;" href="https://eric.ed.gov/?q=accounting+AND+career&pg=6&id=EJ965204"><span>Perceiving a Calling, Living a Calling, and Job Satisfaction: Testing a Moderated, Multiple Mediator Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Duffy, Ryan D.; Bott, Elizabeth M.; Allan, Blake A.; Torrey, Carrie L.; Dik, Bryan J.</p> <p>2012-01-01</p> <p>The current study examined the relation between perceiving a calling, living a calling, and job satisfaction among a diverse group of employed adults who completed an online survey (N = 201). Perceiving a calling and living a calling were positively correlated with career commitment, work meaning, and job satisfaction. Living a calling moderated…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986SPIE..655..390S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986SPIE..655..390S"><span>Synchronous Stroboscopic Electronic Speckle Pattern Interferometry</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Soares, Oliverio D. D.</p> <p>1986-10-01</p> <p>Electronic Speckle Pattern Interferometry (ESPI), often called Electronic Holography, is a practical, powerful technique in non-destructive testing. Practical capabilities of the technique have been improved by fringe betterment and the control of analysis in the time domain, in particular the scanning of the vibration cycle, with the introduction of synchronized amplitude- and phase-modulated pulse illumination, microcomputer control, fibre optics design, and moire evaluation techniques.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25817037','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25817037"><span>An agent-based model of dialect evolution in killer whales.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Filatova, Olga A; Miller, Patrick J O</p> <p>2015-05-21</p> <p>The killer whale is one of the few animal species with vocal dialects that arise from socially learned group-specific call repertoires. We describe a new agent-based model of killer whale populations and test a set of vocal-learning rules to assess which mechanisms may lead to the formation of the dialect groupings observed in the wild. We tested a null model with genetic transmission and no learning, and ten models with learning rules that differ by template source (mother or matriline), variation type (random errors or innovations) and type of call change (no divergence from kin vs. divergence from kin). The null model without vocal learning did not produce the pattern of group-specific call repertoires we observe in nature. Learning from either the mother alone or the entire matriline, with calls changing by random errors, produced a graded distribution of the call phenotype, without the discrete call types observed in nature.
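<p>The simplest learning rule just described, copying the mother's call with random errors, can be sketched as a toy simulation that tracks a one-dimensional "call value" down a lineage. The scalar call representation and the error magnitude below are illustrative assumptions, not the authors' model.</p>
<pre><code>
import random

def learn_call(mother_call, error_sd=0.05):
    """Calf's call: the mother's call plus a small random copying error."""
    return mother_call + random.gauss(0.0, error_sd)

# Pure error accumulation drifts gradually, which is why this rule alone
# yields a graded distribution of call phenotypes rather than discrete types.
call = 1.0
for generation in range(200):
    call = learn_call(call)
print(round(call, 3))
</code></pre>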
Introducing occasional innovation, or random error proportional to matriline variance, yielded more or less discrete and stable call types. A tendency to diverge from the calls of related matrilines produced fast divergence of loose call clusters. A pattern resembling the dialect diversity observed in the wild arose only when rules were applied in combination, and similar outputs could arise from different learning rules and their combinations. Our results emphasize the lack of information on quantitative features of wild killer whale dialects and reveal a set of testable questions that can draw insights into the cultural evolution of killer whale dialects. Copyright © 2015 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007EJASP2008..202Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007EJASP2008..202Y"><span>Multimodality Inferring of Human Cognitive States Based on Integration of Neuro-Fuzzy Network and Information Fusion Techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, G.; Lin, Y.; Bhattacharya, P.</p> <p>2007-12-01</p> <p>To achieve effective and safe operation of a machine system in which the human and the machine interact, the machine needs to understand the human state, especially the cognitive state, when the operator's task demands intensive cognitive activity. Because human cognitive states, behaviors, and expressions or cues are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inferring of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model and the outputs of the TSK are then fused by the OWA, which gives outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA.
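<p>A minimal sketch of the TSK-OWA combination described above: a zero-order TSK fuzzy model scores each cue, and an ordered weighted aggregation operator fuses the per-cue scores. The membership functions, rule constants, cue values, and OWA weights are all invented for illustration.</p>
<pre><code>
import numpy as np

def tsk_score(x, centers, sigmas, consequents):
    """Zero-order TSK: firing-strength-weighted average of rule constants."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)  # Gaussian memberships
    return float(np.dot(w, consequents) / w.sum())

def owa(values, weights):
    """Ordered weighted aggregation: weight the values sorted descending."""
    return float(np.dot(np.sort(values)[::-1], weights))

cues = [0.8, 0.3, 0.6]  # e.g., eyelid closure, steering variance, head nods
centers, sigmas = np.array([0.0, 0.5, 1.0]), np.array([0.3, 0.3, 0.3])
consequents = np.array([0.0, 0.5, 1.0])  # rule outputs: low/medium/high
scores = [tsk_score(c, centers, sigmas, consequents) for c in cues]
print(owa(scores, weights=np.array([0.5, 0.3, 0.2])))  # fused fatigue level
</code></pre>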
Validation of the TSK-OWA, performed in the Northeastern University vehicle driving simulator, has shown that the proposed method is promising as a general tool for inferring human cognitive states and as a specialized tool for driver fatigue detection.</p> </li> </ol> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23260716','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23260716"><span>On statistical inference in time series analysis of the evolution of road safety.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora</p> <p>2013-11-01</p> <p>Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include the annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among the disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to an under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future.
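<p>The serial-dependency point in the road-safety record above can be illustrated with a short simulation: when the disturbances of a linear trend model follow an AR(1) process, the naive i.i.d. standard error of the fitted slope understates its true sampling variability. All parameter values below are illustrative.</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(0)
n, phi, reps = 100, 0.8, 2000
t = np.arange(n)
X = np.column_stack([np.ones(n), t])

slopes, naive_ses = [], []
for _ in range(reps):
    e = np.zeros(n)
    for i in range(1, n):  # AR(1) disturbances with autocorrelation phi
        e[i] = phi * e[i - 1] + rng.normal()
    y = 1.0 + 0.05 * t + e
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)  # naive i.i.d. covariance estimate
    slopes.append(beta[1])
    naive_ses.append(np.sqrt(cov[1, 1]))

print("empirical SD of slope:", np.std(slopes))      # true variability
print("mean naive OLS SE:    ", np.mean(naive_ses))  # markedly smaller
</code></pre>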
Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22059426','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22059426"><span>Perceiving a calling, living a calling, and job satisfaction: testing a moderated, multiple mediator model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Duffy, Ryan D; Bott, Elizabeth M; Allan, Blake A; Torrey, Carrie L; Dik, Bryan J</p> <p>2012-01-01</p> <p>The current study examined the relation between perceiving a calling, living a calling, and job satisfaction among a diverse group of employed adults who completed an online survey (N = 201). Perceiving a calling and living a calling were positively correlated with career commitment, work meaning, and job satisfaction. Living a calling moderated the relations of perceiving a calling with career commitment and work meaning, such that these relations were more robust for those with a stronger sense they were living their calling. Additionally, a moderated, multiple mediator model was run to examine the mediating role of career commitment and work meaning in the relation of perceiving a calling and job satisfaction, while accounting for the moderating role of living a calling. Results indicated that work meaning and career commitment fully mediated the relation between perceiving a calling and job satisfaction. However, the indirect effects of work meaning and career commitment were only significant for individuals with high levels of living a calling, indicating the importance of living a calling in the link between perceiving a calling and job satisfaction. Implications for research and practice are discussed. (c) 2012 APA, all rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H54C..08L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H54C..08L"><span>Data assimilation for groundwater flow modelling using Unbiased Ensemble Square Root Filter: Case study in Guantao, North China Plain</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.</p> <p>2017-12-01</p> <p>Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties. This provides critical information for decision makers in water resources management. 
This study aims to evaluate a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D) using an Unbiased Ensemble Square Root Filter (UnEnSRF), which originates from the Ensemble Kalman Filter (EnKF), to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. To keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One, called the damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, called the inflation factor, relaxes the posterior ensemble perturbations back toward the prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate further analysis. This study demonstrated that assimilating both model parameters and states gives a smaller model prediction error but larger uncertainty, while assimilating only model states provides a smaller predictive uncertainty but a larger model prediction error. Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a solid basis for applications in real-time modelling or real-time control strategies in groundwater resources management.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23250442','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23250442"><span>Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chaudhuri, Shomesh E; Merfeld, Daniel M</p> <p>2013-03-01</p> <p>Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex.
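<p>A minimal sketch of the core fit referenced above: maximum likelihood estimation of a cumulative-Gaussian psychometric function using a Nelder-Mead search. The stimulus levels and response counts are fabricated, and no bias-reduction step of the kind the authors propose is included.</p>
<pre><code>
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

stim = np.array([-4, -2, -1, 0, 1, 2, 4], dtype=float)  # stimulus levels
n_trials = np.full(stim.size, 40)                       # trials per level
n_pos = np.array([2, 8, 14, 21, 27, 33, 39])            # "positive" responses

def neg_log_lik(params):
    mu, log_sigma = params  # log-sigma keeps the spread positive
    p = norm.cdf(stim, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # numerical safety
    return -np.sum(n_pos * np.log(p) + (n_trials - n_pos) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print("bias (mu):", fit.x[0], "spread (sigma):", np.exp(fit.x[1]))
</code></pre>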
We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3464607','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3464607"><span>ParticleCall: A particle filter for base calling in next-generation sequencing systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2012-01-01</p> <p>Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017A%26A...606A..78C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017A%26A...606A..78C"><span>SPHYNX: an accurate density-based SPH method for astrophysical applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cabezón, R. M.; García-Senz, D.; Figueira, J.</p> <p>2017-10-01</p> <p>Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. 
Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique and some other new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions, the results were similar to (or in some particular cases better than) those obtained with other SPH schemes such as GADGET-2 and PSPH, or with the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED261660.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED261660.pdf"><span>Implications of Windowing Techniques for CAI.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Heines, Jesse M.; Grinstein, Georges G.</p> <p></p> <p>This paper discusses the use of a technique called windowing in computer-assisted instruction to allow independent control of functional areas in complex CAI displays and simultaneous display of output from a running computer program and coordinated instructional material.
Two obstacles to widespread use of CAI in computer science courses are…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=need+AND+assessment&pg=7&id=EJ1092514','ERIC'); return false;" href="https://eric.ed.gov/?q=need+AND+assessment&pg=7&id=EJ1092514"><span>Ketso: A New Tool for Extension Professionals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Bates, James S.</p> <p>2016-01-01</p> <p>Extension professionals employ many techniques and tools to obtain feedback, input, information, and data from stakeholders, research participants, and program learners. An information-gathering tool called Ketso is described in this article. This tool and its associated techniques can be used in all phases of program development, implementation,…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28185571','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28185571"><span>HIPPI: highly accurate protein family classification with ensembles of HMMs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy</p> <p>2016-11-11</p> <p>Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. 
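<p>A toy sketch of the ensemble idea behind HIPPI: score a query against several profiles per family and keep each family's best score. Real HIPPI uses HMMER profile HMMs built from alignment subsets; the tiny log-odds position-weight matrices here are stand-ins for illustration only.</p>
<pre><code>
def pwm_score(seq, pwm):
    """Sum per-position log-odds scores; pwm is a list of {residue: score}."""
    return sum(col.get(res, -2.0) for res, col in zip(seq, pwm))

# Two hypothetical families, each represented by an ensemble of two profiles.
families = {
    "famA": [[{"A": 1.0, "C": -1.0}, {"G": 1.0}], [{"A": 0.5}, {"G": 0.5}]],
    "famB": [[{"C": 1.0}, {"T": 1.0, "G": -1.0}], [{"C": 0.8}, {"T": 0.8}]],
}

query = "AG"
best = {fam: max(pwm_score(query, p) for p in ensemble)
        for fam, ensemble in families.items()}
print(max(best, key=best.get), best)  # famA scores highest for "AG"
</code></pre>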
HIPPI is available on GitHub at https://github.com/smirarab/sepp .</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014SPIE.9064E..14O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014SPIE.9064E..14O"><span>Improved damage imaging in aerospace structures using a piezoceramic hybrid pin-force wave generation model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice</p> <p>2014-03-01</p> <p>In this study, a correlation-based imaging technique called "Excitelet" is used to monitor an aerospace-grade aluminum plate, representative of an aircraft component. The principle is based on ultrasonic guided wave generation and sensing using three piezoceramic (PZT) transducers, and measurement of reflections induced by potential defects. The method uses a propagation model to correlate measured signals with a bank of signals, and imaging is performed using a round-robin procedure (Full-Matrix Capture). The formulation compares two models for the complex transducer dynamics: one where the shear stress at the tip of the PZT is considered to vary as a function of the frequency generated, and one where the PZT is discretized in order to consider the shear distribution under the PZT. This method allows taking into account the transducer dynamics and finite dimensions, the multi-modal and dispersive characteristics of the material, and the complex interactions between guided waves and damage. Experimental validation has been conducted on an aerospace-grade aluminum joint instrumented with three circular PZTs of 10 mm diameter. A magnet, acting as a reflector, is used in order to simulate a local reflection in the structure. It is demonstrated that the defect can be accurately detected and localized. The two models proposed are compared to the classical pin-force model, using narrow- and broad-band excitations. The results demonstrate the potential of the proposed imaging techniques for damage monitoring of aerospace structures considering improved models for guided wave generation and propagation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1177713','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1177713"><span>PARENT Quick Blind Round-Robin Test Report</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Braatz, Brett G.; Heasler, Patrick G.; Meyer, Ryan M.</p> <p></p> <p>The U.S. Nuclear Regulatory Commission has established the Program to Assess the Reliability of Emerging Nondestructive Techniques (PARENT), whose goal is to investigate the effectiveness of current and novel nondestructive examination procedures and techniques to find flaws in nickel-alloy welds and base materials. This is to be done by conducting a series of open and blind international round-robin tests on a set of piping components that include large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds. The blind testing is being conducted in two segments: one is called Quick-Blind and the other is called Blind. The Quick-Blind testing and destructive analysis of the test blocks have been completed.
This report describes the four Quick-Blind test blocks used, summarizes their destructive analysis, gives an overview of the nondestructive evaluation (NDE) techniques applied, provides an analysis of the inspection data, and presents the conclusions drawn.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhyEd..47..616E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhyEd..47..616E"><span>Measuring the apparent size of the Moon with a digital camera</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ellery, Adam; Hughes, Stephen</p> <p>2012-09-01</p> <p>The Moon appears to be much larger closer to the horizon than when higher in the sky. This is called the 'Moon illusion' since the observed size of the Moon is not actually larger when the Moon is just above the horizon. This paper describes a technique for verifying that the observed size of the Moon is not larger on the horizon. The technique can be performed easily in a high-school teaching environment. Moreover, the technique demonstrates the surprising fact that the observed size of the Moon is actually smaller on the horizon due to atmospheric refraction. For the purposes of this paper, several images of the Moon were taken with it close to the horizon and close to the zenith. The images were processed using a free program called ImageJ. The Moon was found to be 5.73 ± 0.04% smaller in area on the horizon than at the zenith.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AdWR...34..282X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AdWR...34..282X"><span>On the study of control effectiveness and computational efficiency of reduced Saint-Venant model in model predictive control of open channel flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, M.; van Overloop, P. J.; van de Giesen, N. C.</p> <p>2011-02-01</p> <p>Model predictive control (MPC) of open channel flow is becoming an important tool in water management. The complexity of the prediction model has a large influence on the MPC application in terms of control effectiveness and computational efficiency. The Saint-Venant equations, called the SV model in this paper, and the Integrator Delay (ID) model are either accurate but computationally costly, or simple but restricted in the allowed flow changes. In this paper, a reduced Saint-Venant (RSV) model is developed through a model reduction technique, Proper Orthogonal Decomposition (POD), applied to the SV equations. The RSV model keeps the main flow dynamics and functions over a large flow range but is easier to implement in MPC. In the test case of a modeled canal reach, the numbers of states and disturbances in the RSV model are about 45 and 16 times smaller, respectively, than in the SV model. The computational time of MPC with the RSV model is significantly reduced, while the controller remains effective.
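<p>A minimal sketch of Proper Orthogonal Decomposition as used above to build the reduced model: take the singular value decomposition of a snapshot matrix and project states onto the leading modes. The snapshot data here are synthetic, not Saint-Venant solutions.</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(1)
n_states, n_snapshots, r = 200, 50, 5  # full dimension, snapshots, rank

# Synthetic snapshot matrix with low-rank structure plus small noise.
modes = rng.normal(size=(n_states, r))
snapshots = modes @ rng.normal(size=(r, n_snapshots)) \
    + 0.01 * rng.normal(size=(n_states, n_snapshots))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]  # leading POD modes

# Project one full state onto the r-dimensional subspace and back.
x = snapshots[:, 0]
x_reduced = basis.T @ x       # r coefficients instead of n_states values
x_approx = basis @ x_reduced
print("relative error:", np.linalg.norm(x - x_approx) / np.linalg.norm(x))
</code></pre>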
Thus, the RSV model is a promising means to balance the control effectiveness and computational efficiency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.888a2076Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.888a2076Z"><span>Pulse-shape discrimination techniques for the COBRA double beta-decay experiment at LNGS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zatschler, S.; COBRA Collaboration</p> <p>2017-09-01</p> <p>In modern elementary particle physics several questions arise from the fact that neutrino oscillation experiments have found neutrinos to be massive. Among them is the so far unknown nature of neutrinos: either they act as so-called Majorana particles, where one cannot distinguish between particle and antiparticle, or they are Dirac particles like all the other fermions in the Standard Model. The study of neutrinoless double beta-decay (0νββ-decay), where the lepton number conservation is violated by two units, could answer the question regarding the underlying nature of neutrinos and might also shed light on the mechanism responsible for the mass generation. So far there is no experimental evidence for the existence of 0νββ-decay, hence, existing experiments have to be improved and novel techniques should be explored. One of the next-generation experiments dedicated to the search for this ultra-rare decay is the COBRA experiment. This article gives an overview of techniques to identify and reject background based on pulse-shape discrimination.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20827036','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20827036"><span>A varying coefficient model to measure the effectiveness of mass media anti-smoking campaigns in generating calls to a Quitline.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bui, Quang M; Huggins, Richard M; Hwang, Wen-Han; White, Victoria; Erbas, Bircan</p> <p>2010-01-01</p> <p>Anti-smoking advertisements are an effective population-based smoking reduction strategy. The Quitline telephone service provides a first point of contact for adults considering quitting. Because of data complexity, the relationship between anti-smoking advertising placement, intensity, and time trends in total call volume is poorly understood. In this study we use a recently developed semi-varying coefficient model to elucidate this relationship. Semi-varying coefficient models comprise parametric and nonparametric components. The model is fitted to the daily number of calls to Quitline in Victoria, Australia to estimate a nonparametric long-term trend and parametric terms for day-of-the-week effects and to clarify the relationship with target audience rating points (TARPs) for the Quit and nicotine replacement advertising campaigns. The number of calls to Quitline increased with the TARP value of both the Quit and other smoking cessation advertisement; the TARP values associated with the Quit program were almost twice as effective. The varying coefficient term was statistically significant for peak periods with little or no advertising. 
Semi-varying coefficient models are useful for modeling public health data when there is little or no information on other factors related to the at-risk population. These models are well suited to modeling call volume to Quitline, because the varying coefficient allowed the underlying time trend to depend on fixed covariates that also vary with time, thereby explaining more of the variation in the call model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3900825','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3900825"><span>A Varying Coefficient Model to Measure the Effectiveness of Mass Media Anti-Smoking Campaigns in Generating Calls to a Quitline</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bui, Quang M.; Huggins, Richard M.; Hwang, Wen-Han; White, Victoria; Erbas, Bircan</p> <p>2010-01-01</p> <p>Background Anti-smoking advertisements are an effective population-based smoking reduction strategy. The Quitline telephone service provides a first point of contact for adults considering quitting. Because of data complexity, the relationship between anti-smoking advertising placement, intensity, and time trends in total call volume is poorly understood. In this study we use a recently developed semi-varying coefficient model to elucidate this relationship. Methods Semi-varying coefficient models comprise parametric and nonparametric components. The model is fitted to the daily number of calls to Quitline in Victoria, Australia to estimate a nonparametric long-term trend and parametric terms for day-of-the-week effects and to clarify the relationship with target audience rating points (TARPs) for the Quit and nicotine replacement advertising campaigns. Results The number of calls to Quitline increased with the TARP value of both the Quit and other smoking cessation advertisement; the TARP values associated with the Quit program were almost twice as effective. The varying coefficient term was statistically significant for peak periods with little or no advertising. Conclusions Semi-varying coefficient models are useful for modeling public health data when there is little or no information on other factors related to the at-risk population. These models are well suited to modeling call volume to Quitline, because the varying coefficient allowed the underlying time trend to depend on fixed covariates that also vary with time, thereby explaining more of the variation in the call model. PMID:20827036</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.8141P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.8141P"><span>Adaptive correction of ensemble forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane</p> <p>2017-04-01</p> <p>Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. 
Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used; these sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" (MBM) approaches. Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1818187K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1818187K"><span>Characterization of pyrogenic organic matter by 2-dimensional HETeronucleus CORelation solid-state 13C NMR (HETCOR) spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Knicker, Heike</p> <p>2016-04-01</p> <p>In recent years, increasing evidence has been provided that the common view of charcoal as a polyaromatic network is oversimplified. Experiments with model compounds indicated that it represents a heterogeneous mixture of thermally altered biomacromolecules with N, O and likely also S substitutions as common features. If produced from an N-rich feedstock, the so-called black nitrogen (BN) has to be considered an integral part of the aromatic charcoal network. In order to study this network, one-dimensional (1D) solid-state nuclear magnetic resonance (NMR) spectroscopy is often applied. However, this technique suffers from broad resonance lines and low resolution.
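<p>Returning to the adaptive post-processing record above: the simplest member of that family is a scalar Kalman filter that sequentially re-estimates a site's forecast bias as each new observation arrives. The sketch below illustrates the generic adaptive idea rather than the authors' ensemble method; the true bias, noise levels, and drift variance are synthetic choices.</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(2)
true_bias, obs_noise, q = 1.5, 0.5, 0.01  # forecast bias, obs error, drift

bias_est, p = 0.0, 1.0  # initial bias estimate and its variance
for day in range(100):
    forecast = 20.0 + rng.normal()  # raw NWP forecast for the site
    obs = forecast - true_bias + rng.normal(0.0, obs_noise)
    corrected = forecast - bias_est  # forecast issued after correction
    # Kalman update of the bias estimate from the innovation.
    p += q                           # predict: allow the bias to drift
    k = p / (p + obs_noise ** 2)     # Kalman gain
    bias_est += k * ((forecast - obs) - bias_est)
    p *= 1.0 - k

print("estimated bias:", round(bias_est, 2))  # converges near 1.5
</code></pre>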
Applying 2D techniques can help, but until recently this was infeasible for natural organic matter (NOM) due to sensitivity problems and the high complexity of the material. On the other hand, during the last decade the development of stronger magnetic field instruments and advanced pulse sequences has put such techniques within reach for NOM research. Although 2D NMR spectroscopy has many different applications, all pulse sequences are based on the introduction of a preparation time during which the magnetization of a spin system is adjusted into a state appropriate to whatever properties are to be detected in the indirect dimension. Then, the spins are allowed to evolve under the given conditions and, after their additional manipulation during a mixing period, the modulated magnetization is detected. Assembling several 1D spectra with incrementing evolution time creates a data set which is two-dimensional in time (t1, t2). Fourier transformation of both dimensions leads to a 2D contour plot correlating the interactions detected in the indirect dimension t1 with the signals detected in the direct dimension t2. So-called solid-state heteronuclear correlation (HETCOR) NMR spectroscopy is a 2D technique that allows determining which protons are interacting with which carbons. In the present work this technique was used for monitoring the chemical changes occurring during charring of biomass derived from model compounds, and of fire-affected and unaffected NOM. The 2D 13C HETCOR NMR spectrum of the fire-unaffected soils revealed that most of the carboxyl C occurs as ester or amide. Aside from cross peaks typically seen in spectra of NOM, the spectrum of the respective fire-affected counterpart shows additional signals assignable to PyOM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://images.nasa.gov/#/details-0101748.html','SCIGOVIMAGE-NASA'); return false;" href="https://images.nasa.gov/#/details-0101748.html"><span>Microgravity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://images.nasa.gov/">NASA Image and Video Library</a></p> <p></p> <p>2000-12-15</p> <p>NASA is looking to biological techniques that are millions of years old to help it develop new materials and technologies for the 21st century. Sponsored by NASA, Jeffrey Brinker of the University of New Mexico is studying how multiple elements can assemble themselves into a composite material that is clear, tough, and impermeable. His research is based on the model of how an abalone builds the nacre, also called mother-of-pearl, inside its shell. Strong thin coatings, or lamellae, in Brinker's research are formed when objects are dip-coated. Evaporation drives the self-assembly of molecular aggregates (micelles) of surfactant, soluble silica, and organic monomers, and their further self-organization into layered organic and inorganic assemblies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/920828','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/920828"><span>Some attributes of a language for property-based testing.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Neagoe, Vicentiu; Bishop, Matt</p> <p></p> <p>Property-based testing is a testing technique that evaluates executions of a program.
The method checks that specifications, called properties, hold throughout the execution of the program. TASpec is a language used to specify these properties. This paper compares some attributes of the language with the specification patterns used for model-checking languages, and then presents some descriptions of properties that can be used to detect common security flaws in programs. This report describes the results of a one-year research project at the University of California, Davis, which was funded by a University Collaboration LDRD entitled "Property-based Testing for Cyber Security Assurance".</p> </li> </ol>
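<p>For readers unfamiliar with the style of testing discussed in this last record, the snippet below illustrates generic property-based testing in Python with the hypothesis library, checking that stated properties hold across many generated executions. TASpec itself is a separate specification language; this example only conveys the general technique.</p>
<pre><code>
from collections import Counter

from hypothesis import given
import hypothesis.strategies as st

@given(st.lists(st.integers()))
def test_sort_properties(xs):
    out = sorted(xs)
    # Property 1: the output is ordered.
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Property 2: the output is a permutation of the input.
    assert Counter(out) == Counter(xs)

if __name__ == "__main__":
    test_sort_properties()  # hypothesis generates and checks many cases
    print("all properties held")
</code></pre>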