Sample records for improved modeling techniques

  1. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters have traditionally been carried out by averaging and discrete-sampling techniques. The averaging technique is simple but becomes inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies but is very complex and cumbersome. An improved model is developed by combining the two techniques. The new model is easy to implement in circuit and state-variable forms and is accurate up to the theoretical limit.
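
    For reference, the averaging technique this record builds on has a compact standard form; the sketch below is the textbook state-space-averaged model of a two-state dc-dc converter, written with common textbook symbols (A_i, B_i, d) rather than the authors' notation.

    ```latex
    % During the on interval (duty ratio d): \dot{x} = A_1 x + B_1 v_g
    % During the off interval:               \dot{x} = A_2 x + B_2 v_g
    % Averaging over one switching period gives the low-frequency model
    \[
      \dot{\bar{x}} = \bigl[\, d A_1 + (1-d) A_2 \,\bigr] \bar{x}
                    + \bigl[\, d B_1 + (1-d) B_2 \,\bigr] v_g ,
    \]
    % which is accurate only well below one-half the switching frequency,
    % precisely the limitation the combined discrete/average model removes.
    ```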

  2. Automation of energy demand forecasting

    NASA Astrophysics Data System (ADS)

    Siddique, Sanzad

    Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model-searching process for econometric models. Further improvements in the accuracy of energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using the machine learning methods are also presented. The novel search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model-searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate the improved forecasting accuracy achieved by the novel machine learning techniques introduced in this thesis. The thesis also presents an analysis of how the machine learning techniques learn domain knowledge, and the learned domain knowledge is used to improve forecast accuracy.

  3. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  4. Improved Slip Casting Of Ceramic Models

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.; Vasquez, Peter; Hicks, Lana P.

    1994-01-01

    Improved technique of investment slip casting developed for making precise ceramic wind-tunnel models, needed in wind-tunnel experiments to verify predictions of aerothermodynamic computer codes. Ceramic materials used because of their low heat conductivities and ability to survive high temperatures. Present improved slip-casting technique enables casting of highly detailed models from aqueous or nonaqueous solutions. Wet shell molds peeled off models to ensure precise and undamaged details. Used at NASA Langley Research Center to form superconducting ceramic components from nonaqueous slip solutions. Technique has many more applications as ceramic materials are developed further for such high-strength/high-temperature components as engine parts.

  5. Six Rehearsal Techniques for the Public Speaker: Improving Memory, Increasing Delivery Skills and Reducing Speech Stress.

    ERIC Educational Resources Information Center

    Crane, Loren D.

    This paper describes six specific techniques that speech communication students may use in rehearsals to improve memory, to increase delivery skills, and to reduce speech stress. The techniques are idea association, covert modeling, desensitization, language elaboration, overt modeling, and self-regulation. Recent research is reviewed that…

  6. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science programs (e.g., software engineering, computer science, and information technology) at U.S. colleges and universities that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks for improving business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, together with process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gap analysis findings on the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis that identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research includes a Monte Carlo simulation optimization model and dashboard that demonstrate the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
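
    As a rough illustration of the kind of Monte Carlo prediction described above, the sketch below propagates uncertain satisfaction drivers through a weighted linear index; the driver names, weights, and distributions are hypothetical stand-ins, not the ACSI specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=1)
    N = 100_000  # Monte Carlo trials

    # Hypothetical satisfaction drivers: (mean score, uncertainty) on a 0-10 scale
    drivers = {"quality": (8.1, 0.6), "value": (7.4, 0.8), "expectations": (7.9, 0.5)}
    weights = {"quality": 0.5, "value": 0.3, "expectations": 0.2}  # assumed, sum to 1

    # Sample each driver and combine into a 0-100 index score per trial
    samples = {k: rng.normal(mu, sd, N) for k, (mu, sd) in drivers.items()}
    index = 10.0 * sum(weights[k] * samples[k] for k in drivers)  # rescale to 0-100

    print(f"baseline (mean) index: {index.mean():.1f}")
    print(f"90% prediction interval: {np.percentile(index, [5, 95]).round(1)}")
    ```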

  7. Extended charge banking model of dual path shocks for implantable cardioverter defibrillators

    PubMed Central

    Dosdall, Derek J; Sweeney, James D

    2008-01-01

    Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations, without a practical model to aid in improving them. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical-mass criterion to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results of several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that the time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion The Extended Charge Banking Model's predictions may thus be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool for further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
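
    A minimal numerical reading of the extended model, under illustrative assumptions throughout (the membrane time constant, gradient weights, response threshold, and 90% critical mass below are stand-ins, not the published values): each myocardial section is an RC circuit driven by the shock field, scaled by a gradient weighting factor, with defibrillation declared when a critical mass of sections responds.

    ```python
    import numpy as np

    TAU = 3e-3           # assumed membrane RC time constant (s)
    T_SHOCK = 8e-3       # total shock duration (s)
    CRITICAL_MASS = 0.9  # assumed fraction of sections that must respond
    V_THRESH = 0.5       # assumed normalized response threshold per section

    # Hypothetical per-section field-gradient weights for two shock paths
    rng = np.random.default_rng(0)
    w_path_a = rng.uniform(0.2, 2.0, size=500)
    w_path_b = rng.uniform(0.2, 2.0, size=500)

    def section_response(weight, duration, tau=TAU):
        """RC step response of one section to a constant-field shock."""
        return weight * (1.0 - np.exp(-duration / tau))

    # Single-path shock: full duration on one electrode pair
    single = section_response(w_path_a, T_SHOCK)

    # Dual-path shock: two back-to-back half-duration pulses on different paths;
    # each section keeps the larger of its two responses (simplified bookkeeping)
    dual = np.maximum(section_response(w_path_a, T_SHOCK / 2),
                      section_response(w_path_b, T_SHOCK / 2))

    for name, v in [("single-path", single), ("dual-path", dual)]:
        frac = np.mean(v >= V_THRESH)
        print(f"{name}: {frac:.0%} of sections captured "
              f"({'defibrillated' if frac >= CRITICAL_MASS else 'failed'})")
    ```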

  8. An Information System Development Method Connecting Business Process Modeling and its Experimental Evaluation

    NASA Astrophysics Data System (ADS)

    Okawa, Tsutomu; Kaminishi, Tsukasa; Kojima, Yoshiyuki; Hirabayashi, Syuichi; Koizumi, Hisao

    Business process modeling (BPM) is gaining attention as a means of analyzing and improving the business process. BPM analyzes the current business process as an AS-IS model, solves problems to improve the current business, and moreover aims to create a value-producing business process as a TO-BE model. However, research on techniques that seamlessly connect the business process improvement obtained by BPM to the implementation of the information system is rarely reported. If the business model obtained by BPM is converted into UML and the implementation is carried out with UML techniques, we can expect improved efficiency in information system implementation. In this paper, we describe a system development method that converts the process model obtained by BPM into UML, and the method is evaluated by modeling a prototype of a parts procurement system. In the evaluation, the method is compared with the case where the system is implemented by conventional UML techniques without going via BPM.

  9. Module Degradation Mechanisms Studied by a Multi-Scale Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter

    2016-11-21

    A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.

  10. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
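
    A simplified sketch of the kD-tree proposal idea, assuming scipy is available; the neighbour-box construction and density approximation below are one plausible reading of the approach, not the authors' exact construction.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    class KDTreeProposal:
        """Inter-model jump proposal built from stored single-model MCMC samples.

        Draws a proposal near a randomly chosen stored sample and returns an
        approximate proposal log-density for the RJMCMC acceptance ratio.
        """

        def __init__(self, samples, k=16):
            self.samples = np.asarray(samples)   # (N, dim) posterior draws
            self.tree = cKDTree(self.samples)
            self.k = k

        def draw(self, rng):
            # Pick a stored sample, then propose uniformly inside the
            # axis-aligned box spanned by its k nearest neighbours.
            center = self.samples[rng.integers(len(self.samples))]
            _, idx = self.tree.query(center, k=self.k)
            lo = self.samples[idx].min(axis=0)
            hi = self.samples[idx].max(axis=0)
            return rng.uniform(lo, hi)

        def logpdf(self, x):
            # Approximate density: k of N samples fall in the neighbour box
            # around x, so density ~ k / (N * box volume).
            _, idx = self.tree.query(x, k=self.k)
            lo = self.samples[idx].min(axis=0)
            hi = self.samples[idx].max(axis=0)
            vol = np.prod(np.maximum(hi - lo, 1e-12))
            return np.log(self.k / (len(self.samples) * vol))

    rng = np.random.default_rng(1)
    post = rng.normal(size=(5000, 2))   # stand-in single-model posterior draws
    prop = KDTreeProposal(post)
    x = prop.draw(rng)
    print(x, prop.logpdf(x))            # proposed jump and its log-density term
    ```

    In an RJMCMC step, the jump into a model would be drawn with draw(), and the logpdf() terms for both models enter the Metropolis-Hastings ratio.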

  11. Group Guidance Services with Self-Regulation Technique to Improve Student Learning Motivation in Junior High School (JHS)

    ERIC Educational Resources Information Center

    Pranoto, Hadi; Atieka, Nurul; Wihardjo, Sihadi Darmo; Wibowo, Agus; Nurlaila, Siti; Sudarmaji

    2016-01-01

    This study aims at: determining students' motivation before being given group guidance with a self-regulation technique, determining students' motivation after being given group counseling with a self-regulation technique, generating a model of group counseling with a self-regulation technique to improve motivation of learning, determining the…

  12. An animal model for instructing and the study of in situ arterial bypass.

    PubMed

    Saifi, J; Chang, B B; Paty, P S; Kaufman, J; Leather, R P; Shah, D M

    1990-11-01

    A canine model that used the cephalic vein to bypass from the brachial to the ulnar artery was designed for use in instructing and evaluating surgical technique needed for constructing an in situ arterial bypass. This model was used for instructing vascular residents in the in situ vein bypass technique. The use of this model enabled the resident to become more adept with the instruments for valve incision and construction of small vessel anastomosis. The improvement in the resident's operative technique was reflected by a decrease in the number of technical complications (missed valves, missed arteriovenous fistulas, poorly constructed anastomoses) and improved patency rate.

  13. Ozone measurement systems improvements studies

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Guard, K.; Holland, A. C.; Spurling, J. F.

    1974-01-01

    Results are summarized of an initial study of techniques for measuring atmospheric ozone, carried out as the first phase of a program to improve ozone measurement techniques. The study concentrated on two measurement systems, the electrochemical cell (ECC) ozonesonde and the Dobson ozone spectrophotometer, and consisted of two tasks. The first task consisted of error modeling and system error analysis of the two measurement systems. Under the second task, a Monte Carlo model of the Dobson ozone measurement technique was developed and programmed for computer operation.

  14. Recent developments in MrBUMP: better search-model preparation, graphical interaction with search models, and solution improvement and assessment.

    PubMed

    Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J

    2018-03-01

    Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.

  15. Recent developments in MrBUMP: better search-model preparation, graphical interaction with search models, and solution improvement and assessment

    PubMed Central

    Keegan, Ronan M.; McNicholas, Stuart J.; Thomas, Jens M. H.; Simpkin, Adam J.; Uski, Ville; Ballard, Charles C.

    2018-01-01

    Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case. PMID:29533225

  16. Advanced Atmospheric Ensemble Modeling Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, R.; Chiswell, S.; Kurzeja, R.

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
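
    For readers unfamiliar with the second technique, a generic stochastic Ensemble Kalman Filter analysis step looks like the sketch below (a textbook formulation, not SRNL's configuration).

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """Stochastic Ensemble Kalman Filter analysis step.

        X : (n_state, n_ens) forecast ensemble
        y : (n_obs,) observation vector
        H : (n_obs, n_state) linear observation operator
        R : (n_obs, n_obs) observation-error covariance
        """
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                    # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
        # Perturb observations so the analysis ensemble keeps the right spread
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)

    # Toy usage: 3-variable state, 20-member ensemble, one observed component
    rng = np.random.default_rng(42)
    X = rng.normal(0.0, 1.0, size=(3, 20))
    H = np.array([[1.0, 0.0, 0.0]])
    Xa = enkf_update(X, y=np.array([0.5]), H=H, R=np.array([[0.1]]), rng=rng)
    print(Xa.mean(axis=1))  # analysis mean pulled toward the observation
    ```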

  17. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  18. Rapid prototyping model for percutaneous nephrolithotomy training.

    PubMed

    Bruyère, Franck; Leroux, Cecile; Brunereau, Laurent; Lermusiaux, Patrick

    2008-01-01

    Rapid prototyping is a technique for creating three-dimensional physical models from computer images more efficiently than classic techniques. Percutaneous nephrolithotomy (PCNL) is a popular method to remove kidney stones; however, broader use by the urologic community has been hampered by the morbidity associated with needle puncture to gain access to the renal calix (bleeding, pneumothorax, hydrothorax, inadvertent colon injury). A training model to improve technique and understanding of renal anatomy could reduce complications related to renal puncture; however, no model currently exists for resident training. We created a training model using the rapid prototyping technique based on abdominal CT images of a patient scheduled to undergo PCNL. This allowed our staff and residents to train on the model before performing the operation. The model allowed anticipation of particular difficulties inherent to the patient's anatomy. After training, the procedure proceeded without complication, and the patient was discharged on postoperative day 1 without problems. We hypothesize that rapid prototyping could be useful for resident education, allowing the creation of numerous models for research and surgical training. In addition, we anticipate that experienced urologists could find this technique helpful in preparation for difficult PCNL operations.

  19. Bearing Fault Diagnosis by a Robust Higher-Order Super-Twisting Sliding Mode Observer

    PubMed Central

    Kim, Jong-Myon

    2018-01-01

    An effective bearing fault detection and diagnosis (FDD) model is important for ensuring the normal and safe operation of machines. This paper presents a reliable model-reference observer technique for FDD based on modeling of a bearing’s vibration data by analyzing the dynamic properties of the bearing and a higher-order super-twisting sliding mode observation (HOSTSMO) technique for making diagnostic decisions using these data models. The HOSTSMO technique can adaptively improve the performance of estimating nonlinear failures in rolling element bearings (REBs) over a linear approach by modeling 5 degrees of freedom under normal and faulty conditions. The effectiveness of the proposed technique is evaluated using a vibration dataset provided by Case Western Reserve University, which consists of vibration acceleration signals recorded for REBs with inner, outer, ball, and no faults, i.e., normal. Experimental results indicate that the proposed technique outperforms the ARX-Laguerre proportional integral observation (ALPIO) technique, yielding 18.82%, 16.825%, and 17.44% performance improvements for three levels of crack severity of 0.007, 0.014, and 0.021 inches, respectively. PMID:29642459
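
    The classic second-order super-twisting observer underlying the HOSTSMO technique can be sketched as below; the paper applies a higher-order variant to a 5-DOF bearing model, so this minimal form with illustrative gains shows only the core correction structure.

    ```python
    import numpy as np

    def sta_observer(y_meas, u, f, dt, k1=2.0, k2=1.5):
        """Second-order super-twisting sliding mode observer (classic form).

        Estimates the states of x1_dot = x2, x2_dot = f(x, u) from noisy
        measurements of x1. Gains k1, k2 are illustrative, not tuned values.
        """
        n = len(y_meas)
        x1h, x2h = np.zeros(n), np.zeros(n)
        for i in range(n - 1):
            e = y_meas[i] - x1h[i]                   # output estimation error
            # sqrt(|e|)*sign(e) term gives finite-time, chatter-reduced correction
            x1h[i + 1] = x1h[i] + dt * (x2h[i] + k1 * np.sqrt(abs(e)) * np.sign(e))
            x2h[i + 1] = x2h[i] + dt * (f(x1h[i], x2h[i], u[i]) + k2 * np.sign(e))
        return x1h, x2h

    # Toy usage: noisy position measurements of a known oscillator
    dt, t = 1e-3, np.arange(0, 2, 1e-3)
    y = np.sin(2 * np.pi * t) + 0.01 * np.random.default_rng(0).normal(size=len(t))
    f = lambda x1, x2, u: -(2 * np.pi) ** 2 * x1     # nominal (fault-free) dynamics
    x1h, x2h = sta_observer(y, np.zeros(len(t)), f, dt)
    # The residual y - x1h is what a diagnosis layer would monitor for faults
    ```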

  20. Bearing Fault Diagnosis by a Robust Higher-Order Super-Twisting Sliding Mode Observer.

    PubMed

    Piltan, Farzin; Kim, Jong-Myon

    2018-04-07

    An effective bearing fault detection and diagnosis (FDD) model is important for ensuring the normal and safe operation of machines. This paper presents a reliable model-reference observer technique for FDD based on modeling of a bearing's vibration data by analyzing the dynamic properties of the bearing and a higher-order super-twisting sliding mode observation (HOSTSMO) technique for making diagnostic decisions using these data models. The HOSTSMO technique can adaptively improve the performance of estimating nonlinear failures in rolling element bearings (REBs) over a linear approach by modeling 5 degrees of freedom under normal and faulty conditions. The effectiveness of the proposed technique is evaluated using a vibration dataset provided by Case Western Reserve University, which consists of vibration acceleration signals recorded for REBs with inner, outer, ball, and no faults, i.e., normal. Experimental results indicate that the proposed technique outperforms the ARX-Laguerre proportional integral observation (ALPIO) technique, yielding 18.82%, 16.825%, and 17.44% performance improvements for three levels of crack severity of 0.007, 0.014, and 0.021 inches, respectively.

  1. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    NASA Astrophysics Data System (ADS)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers, one of which is based on the strength of the received signal. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The parameters observed for characterizing the performance of the specified mobile system are Drop Call rate, Radio Link Degradation Rate, and Average Size of Active Set (AS). The simulated results show that increasing the altitude of the Base Station (BS) and Mobile Station (MS) antennas improves the received signal power level, which improves Radio Link quality, increases the average size of the Active Set, and reduces the average Drop Call rate. It was also found that Hata’s propagation model contributed significantly greater improvements in system performance than Okumura’s propagation model and Lee’s propagation model.
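
    The role of antenna height in Hata's model is easy to see from its closed-form path loss; a sketch for the small/medium-city urban case follows, using the standard Okumura-Hata coefficients rather than the study's simulation parameters.

    ```python
    import numpy as np

    def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
        """Okumura-Hata median path loss (dB), small/medium urban case.

        Valid roughly for 150-1500 MHz, base antenna 30-200 m, mobile antenna
        1-10 m, and distances of 1-20 km. Shows why raising antenna heights
        improves the received level used as a soft-handover initiation trigger.
        """
        a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_mobile_m - (
            1.56 * np.log10(f_mhz) - 0.8)            # mobile antenna correction
        return (69.55 + 26.16 * np.log10(f_mhz)
                - 13.82 * np.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * np.log10(h_base_m)) * np.log10(d_km))

    for h_bs in (30, 50, 100):  # raising the BS antenna lowers the path loss
        print(h_bs, "m:", round(hata_urban_path_loss(900, h_bs, 1.5, 5.0), 1), "dB")
    ```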

  2. Improved techniques for thermomechanical testing in support of deformation modeling

    NASA Technical Reports Server (NTRS)

    Castelli, Michael G.; Ellis, John R.

    1992-01-01

    The feasibility of generating precise thermomechanical deformation data to support constitutive model development was investigated. Here, the requirement is for experimental data that are free from anomalies caused by less than ideal equipment and procedures. A series of exploratory tests conducted on Hastelloy X showed that generally accepted techniques for strain-controlled tests were lacking in at least three areas. Specifically, problems were encountered with specimen stability, thermal strain compensation, and temperature/mechanical strain phasing. The sources of these difficulties were identified, and improved thermomechanical testing techniques were developed to correct them. These goals were achieved by developing improved procedures for measuring and controlling thermal gradients and by designing a specimen specifically for thermomechanical testing. In addition, innovative control strategies were developed to correctly proportion and phase the thermal and mechanical components of strain. Subsequently, the improved techniques were used to generate deformation data for Hastelloy X over the temperature range of 200 to 1000 C.

  3. Development of Semi-Span Model Test Techniques

    NASA Technical Reports Server (NTRS)

    Pulnam, L. Elwood (Technical Monitor); Milholen, William E., II; Chokani, Ndaona; McGhee, Robert J.

    1996-01-01

    A computational investigation was performed to support the development of a semi-span model test capability in the NASA Langley Research Center's National Transonic Facility. This capability is desirable for the testing of advanced subsonic transport aircraft at full-scale Reynolds numbers. A state-of-the-art three-dimensional Navier-Stokes solver was used to examine methods to improve the flow over a semi-span configuration. First, a parametric study is conducted to examine the influence of the stand-off height on the flow over the semi-span model. It is found that decreasing the stand-off height, below the maximum fuselage radius, improves the aerodynamic characteristics of the semi-span model. Next, active sidewall boundary layer control techniques are examined. Juncture region blowing jets, upstream tangential blowing, and sidewall suction are found to improve the flow over the aft portion of the semi-span model. Both upstream blowing and suction are found to reduce the sidewall boundary layer separation. The resulting near surface streamline patterns are improved, and found to be quite similar to the full-span results. Both techniques however adversely affect the pitching moment coefficient.

  4. Development of Semi-Span Model Test Techniques

    NASA Technical Reports Server (NTRS)

    Milholen, William E., II; Chokani, Ndaona; McGhee, Robert J.

    1996-01-01

    A computational investigation was performed to support the development of a semispan model test capability in the NASA Langley Research Center's National Transonic Facility. This capability is desirable for the testing of advanced subsonic transport aircraft at full-scale Reynolds numbers. A state-of-the-art three-dimensional Navier-Stokes solver was used to examine methods to improve the flow over a semi-span configuration. First, a parametric study is conducted to examine the influence of the stand-off height on the flow over the semispan model. It is found that decreasing the stand-off height, below the maximum fuselage radius, improves the aerodynamic characteristics of the semi-span model. Next, active sidewall boundary layer control techniques are examined. Juncture region blowing jets, upstream tangential blowing, and sidewall suction are found to improve the flow over the aft portion of the semispan model. Both upstream blowing and suction are found to reduce the sidewall boundary layer separation. The resulting near surface streamline patterns are improved, and found to be quite similar to the full-span results. Both techniques however adversely affect the pitching moment coefficient.

  5. Model-based software process improvement

    NASA Technical Reports Server (NTRS)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  6. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model.

    PubMed

    Liu, Tongzhu; Shen, Aizong; Hu, Xiaojian; Tong, Guixian; Gu, Wei

    2017-06-01

    We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and recent research status. We realized the application of collaborative BI technology in the hospital SPD logistics management model by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. For the application of the BI system, we: (i) proposed a layered structure of collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Proper combination of the SPD model and BI system will improve the management of logistics in hospitals. Successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including the information, logistics, nursing, medical and financial departments; (iii) timely response of external suppliers.

  7. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
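
    The paper's exact sensitivity measure is not reproduced here; the sketch below shows a generic permutation-based sensitivity analysis on a neural network, with a synthetic dataset standing in for the trauma records and their TRISS variables.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for a survival dataset (binary outcome, 8 predictors)
    X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                        random_state=0).fit(X_tr, y_tr)
    base_auc = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])

    rng = np.random.default_rng(0)
    for j in range(X_te.shape[1]):
        Xp = X_te.copy()
        rng.shuffle(Xp[:, j])          # break the link between feature j and y
        drop = base_auc - roc_auc_score(y_te, net.predict_proba(Xp)[:, 1])
        print(f"feature {j}: AUC drop {drop:+.3f}")  # larger drop = more important
    ```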

  8. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
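
    A minimal sketch of the residual-monitoring idea, with the model prediction stream standing in for the piecewise linear engine model's output; the window length and threshold are illustrative tuning choices.

    ```python
    import numpy as np

    def detect_anomalies(y_sensed, y_model, window=20, n_sigma=4.0):
        """Flag samples whose model residual leaves its recent statistical band.

        y_model would come from the engine model (trim point plus state-space
        perturbation); here any predicted output stream works.
        """
        r = np.asarray(y_sensed) - np.asarray(y_model)   # residual stream
        flags = np.zeros(len(r), dtype=bool)
        for i in range(window, len(r)):
            mu = r[i - window:i].mean()
            sd = r[i - window:i].std() + 1e-9
            flags[i] = abs(r[i] - mu) > n_sigma * sd     # residual jump = anomaly
        return flags

    # Toy usage: clean model prediction, noisy sensor, one injected fault
    y_true = np.sin(np.linspace(0, 10, 400))
    y_meas = y_true + 0.01 * np.random.default_rng(0).normal(size=400)
    y_meas[300] += 0.5                                   # seeded fault
    print(np.where(detect_anomalies(y_meas, y_true))[0]) # flags near sample 300
    ```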

  9. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  10. Evaluating the role of evapotranspiration remote sensing data in improving hydrological modeling predictability

    NASA Astrophysics Data System (ADS)

    Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza

    2018-01-01

    As the global demand for freshwater resources continues to rise, it has become increasingly important to ensure the sustainability of these resources. This is accomplished through management strategies that often utilize monitoring and hydrological models. However, monitoring at large scales is not feasible, and model applications therefore become challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Due to these limitations, most hydrological models are calibrated only against data obtained from site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed and spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data were obtained from two different sets of satellite-based remote sensing data. One dataset estimates ETa based on the Simplified Surface Energy Balance (SSEBop) model while the other estimates ETa based on the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single-point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved with both calibration techniques, achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error - observations standard deviation ratio (RSR) values <0.7 (0.39-0.52). However, the genetic algorithm technique, while more effective for the ETa calibration, significantly reduced the model performance for estimating streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, and RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, and RSR: 0.63-0.64) while the evapotranspiration estimates were improved. Results from this assessment show that incorporating remotely sensed and spatially distributed data can improve hydrological model performance if it is coupled with the right calibration technique.
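
    The three acceptance metrics cited above have simple definitions; the sketch below follows the sign convention in which positive PBIAS indicates underestimation, which is common in SWAT studies but worth checking against any given paper.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect; >0.5 is often deemed acceptable."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias; positive values here mean the model under-predicts."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    def rsr(obs, sim):
        """RMSE divided by the standard deviation of the observations."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

    obs = np.array([3.1, 4.0, 5.2, 6.8, 4.4, 3.9])   # e.g., observed streamflow
    sim = np.array([2.9, 4.3, 5.0, 6.1, 4.8, 4.0])   # e.g., SWAT simulation
    print(f"NSE={nse(obs, sim):.2f}  PBIAS={pbias(obs, sim):.1f}%  RSR={rsr(obs, sim):.2f}")
    ```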

  11. Comparisons of Three-Dimensional Variational Data Assimilation and Model Output Statistics in Improving Atmospheric Chemistry Forecasts

    NASA Astrophysics Data System (ADS)

    Ma, Chaoqun; Wang, Tijian; Zang, Zengliang; Li, Zhijin

    2018-07-01

    Atmospheric chemistry models usually perform badly in forecasting wintertime air pollution because of their uncertainties. Generally, such uncertainties can be decreased effectively by techniques such as data assimilation (DA) and model output statistics (MOS). However, the relative importance and combined effects of the two techniques have not been clarified. Here, a one-month air quality forecast with the Weather Research and Forecasting-Chemistry (WRF-Chem) model was carried out in a virtually operational setup focusing on Hebei Province, China. Meanwhile, three-dimensional variational (3DVar) DA and MOS based on one-dimensional Kalman filtering were implemented separately and simultaneously to investigate their performance in improving the model forecast. Comparison with observations shows that the chemistry forecast with MOS outperforms that with 3DVar DA, which could be seen in all the species tested over the whole 72 forecast hours. Combined use of both techniques does not guarantee a better forecast than MOS only, with the improvements and degradations being small and appearing rather randomly. Results indicate that the implementation of MOS is more suitable than 3DVar DA in improving the operational forecasting ability of WRF-Chem.
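
    The MOS component based on one-dimensional Kalman filtering can be sketched as a running bias estimate; the noise variances below are illustrative, not those of the study.

    ```python
    import numpy as np

    def kalman_mos(forecasts, observations, q=0.05, r=1.0):
        """One-dimensional Kalman filter tracking the running forecast bias.

        State = forecast bias; q is the process variance (how fast the bias may
        drift), r the observation-error variance. Each corrected forecast is
        the raw model forecast minus the current bias estimate.
        """
        bias, p = 0.0, 1.0
        corrected = []
        for f, o in zip(forecasts, observations):
            corrected.append(f - bias)        # issue bias-corrected forecast
            p += q                            # predict: bias persists, variance grows
            k = p / (p + r)                   # Kalman gain
            bias += k * ((f - o) - bias)      # update with the newest forecast error
            p *= (1 - k)
        return np.array(corrected)

    # Toy usage: a model that runs ~8 units too high (e.g., a pollutant forecast)
    f = np.array([60, 58, 65, 70, 66, 61], dtype=float)
    o = f - 8 + np.random.default_rng(1).normal(0, 2, 6)
    print(kalman_mos(f, o).round(1))          # drifts toward the observed levels
    ```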

  12. Comparison of Sequential and Variational Data Assimilation

    NASA Astrophysics Data System (ADS)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.

  13. [Research progress of three-dimensional digital model for repair and reconstruction of knee joint].

    PubMed

    Tong, Lu; Li, Yanlin; Hu, Meng

    2013-01-01

    To review recent advances in the application and research of the three-dimensional digital knee model. Recent original articles about the three-dimensional digital knee model were extensively reviewed and analyzed. The digital three-dimensional knee model can simulate the complex anatomical structure of the knee very well. On this basis, new software and techniques have been developed, and good clinical results have been achieved. With the development of computer techniques and software, the knee repair and reconstruction procedure has been improved; the operation will become simpler and its accuracy will be further improved.

  14. RECURSIVE PROTEIN MODELING: A DIVIDE AND CONQUER STRATEGY FOR PROTEIN STRUCTURE PREDICTION AND ITS CASE STUDY IN CASP9

    PubMed Central

    CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN

    2013-01-01

    After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein, but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrates template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379

  15. Improvements in approaches to forecasting and evaluation techniques

    NASA Astrophysics Data System (ADS)

    Weatherhead, Elizabeth

    2014-05-01

    The US is embarking on an experiment to make significant and sustained improvements in weather forecasting. The effort stems from a series of community conversations that recognized the rapid advancements in observations, modeling and computing techniques in the academic, governmental and private sectors. The new directions and initial efforts will be summarized, including information on possibilities for international collaboration. Most new projects are scheduled to start in the last half of 2014. Several advancements include ensemble forecasting with global models, and new sharing of computing resources. Newly developed techniques for evaluating weather forecast models will be presented in detail. The approaches use statistical techniques that incorporate pair-wise comparisons of forecasts with observations and account for daily auto-correlation to assess appropriate uncertainty in forecast changes. Some of the new projects allow for international collaboration, particularly on the research components of the projects.

  16. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model

    PubMed Central

    LIU, Tongzhu; SHEN, Aizong; HU, Xiaojian; TONG, Guixian; GU, Wei

    2017-01-01

    Background: We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. Methods: We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and recent research status. We realized the application of collaborative BI technology in the hospital SPD logistics management model by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. Results: For the application of the BI system, we: (i) proposed a layered structure of collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Conclusion: Proper combination of the SPD model and BI system will improve the management of logistics in hospitals. Successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including the information, logistics, nursing, medical and financial departments; (iii) timely response of external suppliers. PMID:28828316

  17. Development of Improved Oil Field Waste Injection Disposal Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terralog Technologies

    2002-11-25

    The goals of this project were to: (1) assemble and analyze a comprehensive database of past waste injection operations; (2) develop improved diagnostic techniques for monitoring fracture growth and formation changes; (3) develop operating guidelines to optimize daily operations and the ultimate storage capacity of the target formation; and (4) apply these improved models and guidelines in the field.

  18. High-efficiency resonant coupled wireless power transfer via tunable impedance matching

    NASA Astrophysics Data System (ADS)

    Anowar, Tanbir Ibne; Barman, Surajit Das; Wasif Reza, Ahmed; Kumar, Narendra

    2017-10-01

    For magnetic resonant coupled wireless power transfer (WPT), axial movement of the near-field coupled coils degrades the power transfer efficiency (PTE) of the system and often creates sub-resonance. This paper presents a tunable impedance matching technique based on optimum coupling tuning to enhance the efficiency of a resonant coupled WPT system. The optimum power transfer model is analysed from an equivalent circuit model via the reflected load principle, and adequate matching is achieved through optimum tuning of the coupling coefficients at both the transmitting and receiving ends of the system. Both simulations and experiments are performed to evaluate the theoretical model of the proposed matching technique, yielding a PTE over 80% at close coil proximity without shifting the original resonant frequency. Compared to fixed-coupled WPT, the extracted efficiency shows improvements of 15.1% and 19.9% at centre-to-centre misalignments of 10 and 70 cm, respectively. Applying this technique, the extracted S21 parameter shows improvements of more than 10 dB at both strong and weak couplings. Through the developed model, optimum coupling tuning also significantly improves performance over matching techniques that use frequency tracking and tunable matching circuits.
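
    To see why matching must track the coupling, the standard figure-of-merit bound for a resonant inductive link is useful; the sketch below uses this textbook result rather than the paper's specific circuit model, with illustrative coil Q values.

    ```python
    import numpy as np

    def max_link_efficiency(k, q1, q2):
        """Peak efficiency of a resonant inductive link under optimal matching.

        Uses the standard figure-of-merit result eta = U^2 / (1 + sqrt(1+U^2))^2
        with U = k*sqrt(Q1*Q2); a textbook bound, not the paper's derivation.
        """
        u2 = (k ** 2) * q1 * q2
        return u2 / (1.0 + np.sqrt(1.0 + u2)) ** 2

    # As the coils separate, k falls and a fixed matching network drifts away
    # from this optimum; tunable matching re-approaches the bound at each k.
    for k in (0.30, 0.10, 0.03):
        print(f"k={k:.2f}: eta_max={max_link_efficiency(k, 300, 300):.1%}")
    ```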

  19. Conceptual Model Evaluation using Advanced Parameter Estimation Techniques with Heat as a Tracer

    NASA Astrophysics Data System (ADS)

    Naranjo, R. C.; Morway, E. D.; Healy, R. W.

    2016-12-01

    Temperature measurements made at multiple depths beneath the sediment-water interface have proven useful for estimating seepage rates from surface-water channels and the corresponding subsurface flow direction. Commonly, parsimonious zonal representations of the subsurface structure are defined a priori by interpretation of temperature envelopes, slug tests, or analysis of soil cores. However, combining multiple observations into a single zone may limit the inverse model solution and does not take full advantage of the information content within the measured data. Further, simulating the correct thermal gradient, flow paths, and transient behavior of solutes may be biased by inadequacies in the spatial description of subsurface hydraulic properties. The use of pilot points in PEST offers a more sophisticated approach to estimating the structure of subsurface heterogeneity. This presentation evaluates seepage estimation in a cross-sectional model of a trapezoidal canal with intermittent flow, representing four typical sedimentary environments. Recent improvements in heat-as-a-tracer measurement techniques (i.e., the multi-depth temperature probe), along with the use of modern calibration techniques (i.e., pilot points), provide opportunities for improved calibration of flow models and, subsequently, improved model predictions.

  20. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.

  1. A framework for propagation of uncertainty contributed by parameterization, input data, model structure, and calibration/validation data in watershed modeling

    USDA-ARS?s Scientific Manuscript database

    The progressive improvement of computer science and the development of auto-calibration techniques mean that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...

  2. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to dynamic analysis and defines the requirements for a general-purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general-purpose structural software system is presented.
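
    The static building block of multilevel substructuring is easy to state; the sketch below shows single-level static condensation (Guyan reduction) of a stiffness matrix, while the dynamic extension discussed above would also reduce the mass matrix.

    ```python
    import numpy as np

    def condense(K, interior, boundary):
        """Static condensation (Guyan reduction) of one substructure.

        Eliminates interior DOFs so only boundary DOFs couple to the rest of
        the model: K_red = K_bb - K_bi K_ii^{-1} K_ib. Multilevel
        substructuring applies this recursively, level by level.
        """
        K = np.asarray(K, dtype=float)
        Kii = K[np.ix_(interior, interior)]
        Kib = K[np.ix_(interior, boundary)]
        Kbi = K[np.ix_(boundary, interior)]
        Kbb = K[np.ix_(boundary, boundary)]
        return Kbb - Kbi @ np.linalg.solve(Kii, Kib)

    # Toy 3-DOF spring chain: condense out the middle DOF (index 1)
    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
    print(condense(K, interior=[1], boundary=[0, 2]))  # 2x2 reduced stiffness
    ```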

  3. Space, time, and the third dimension (model error)

    USGS Publications Warehouse

    Moss, Marshall E.

    1979-01-01

    The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.

  4. Real-time emergency forecasting technique for situation management systems

    NASA Astrophysics Data System (ADS)

    Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.

    2018-05-01

    The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasting results of any emergency computational model applied for decision making in situation management systems. Computational models are improved by the Improved Brown's method, which applies fractal dimension to forecast short time series received from sensors and control systems. The reliability of emergency forecasting results is ensured by filtering invalid sensed data according to methods of correlation analysis.
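
    The record does not detail its fractal-dimension extension, but the underlying method is classic: Brown's double (linear) exponential smoothing, which extrapolates a short series from a level and trend recovered by two smoothing passes. A minimal sketch with an arbitrary series and smoothing constant:

```python
# Brown's double exponential smoothing: m-step-ahead forecast x(t+m) = a + m*b,
# with level a and trend b derived from two exponential smoothing passes.
def brown_forecast(x, alpha=0.5, m=1):
    s1 = s2 = x[0]
    for v in x:
        s1 = alpha * v + (1 - alpha) * s1       # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2      # second pass, applied to s1
    a = 2 * s1 - s2
    b = alpha / (1 - alpha) * (s1 - s2)
    return a + m * b

print(brown_forecast([3.0, 3.4, 3.9, 4.1, 4.8], alpha=0.6, m=2))
```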

  5. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
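
    A sketch of the circular (range-based) variant described above: ranges are recovered from RSS through a log-distance path-loss model, the circle equations are linearized against a reference anchor, and a weighted least squares solve downweights the less reliable measurements. The channel parameters and weighting heuristic are illustrative, not the paper's calibrated values.

```python
# Weighted least squares circular positioning from RSS-derived ranges.
import numpy as np

def rss_to_range(rss, p0=-40.0, n=2.5, d0=1.0):
    # Log-distance path-loss model: rss = p0 - 10*n*log10(d/d0)  (assumed)
    return d0 * 10 ** ((p0 - rss) / (10 * n))

def wls_position(anchors, ranges, weights):
    # Subtract the first circle equation from the rest -> linear system A p = b.
    x0, y0 = anchors[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    A, b, W = np.array(A), np.array(b), np.diag(weights[1:])
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)   # weighted normal equations

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
d = rss_to_range(np.array([-52.0, -60.0, -61.0, -68.0]))
print(wls_position(anchors, d, weights=1.0 / d**2))    # trust nearer anchors more
```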

  7. Modeling paradigms for medical diagnostic decision support: a survey and future directions.

    PubMed

    Wagholikar, Kavishwar B; Sundararajan, Vijayraghavan; Deshpande, Ashok W

    2012-10-01

    Use of computer-based decision tools to aid clinical decision making has been a primary goal of research in biomedical informatics. Research in the last five decades has led to the development of Medical Decision Support (MDS) applications using a variety of modeling techniques, for a diverse range of medical decision problems. This paper surveys literature on modeling techniques for diagnostic decision support, with a focus on decision accuracy. Trends and shortcomings of research in this area are discussed and future directions are provided. The authors suggest that: (i) improvement in the accuracy of MDS applications may be possible through modeling of vague and temporal data, research on inference algorithms, integration of patient information from diverse sources, and improvement in gene profiling algorithms; (ii) MDS research would be facilitated by public release of de-identified medical datasets and development of open-source data-mining toolkits; (iii) comparative evaluations of different modeling techniques are required to understand characteristics of the techniques, which can guide developers in the choice of technique for a particular medical decision problem; and (iv) evaluations of MDS applications in clinical settings are necessary to foster physicians' utilization of these decision aids.

  8. An Evaluation of Understandability of Patient Journey Models in Mental Health.

    PubMed

    Percival, Jennifer; McGregor, Carolyn

    2016-07-28

    There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers.

  9. Improving Motor Skills through Listening

    ERIC Educational Resources Information Center

    Wang, Lin

    2004-01-01

    In this article, the author discusses how to improve a child's motor skills through listening by using three simple steps--recording the auditory model, determining when to use the auditory model, and considering where to use the auditory model. She points out the importance of using a demonstration technique that helps learners understand the…

  10. Implementation of a lightning data assimilation technique in the Weather Research and Forecasting (WRF) model for improving precipitation prediction

    NASA Astrophysics Data System (ADS)

    Giannaros, Theodore; Kotroni, Vassiliki; Lagouvardos, Kostas

    2015-04-01

    Lightning data assimilation has recently been attracting increasing attention as a technique implemented in numerical weather prediction (NWP) models for improving precipitation forecasts. In the frame of the TALOS project, we implemented a robust lightning data assimilation technique in the Weather Research and Forecasting (WRF) model with the aim of improving precipitation prediction in Greece. The assimilation scheme employs lightning as a proxy for the presence or absence of deep convection. In essence, flash data are ingested in WRF to control the Kain-Fritsch (KF) convective parameterization scheme (CPS). When lightning is observed, indicating the occurrence of convective activity, the CPS is forced to attempt to produce convection, whereas the CPS may optionally be prevented from producing convection when no lightning is observed. Eight two-day precipitation events were selected for assessing the performance of the lightning data assimilation technique. The ingestion of lightning in WRF was carried out during the first 6 h of each event and the evaluation focused on the subsequent 24 h, constituting a realistic setup that could be used in operational weather forecasting applications. Results show that the implemented assimilation scheme can improve model performance in terms of precipitation prediction. Forecasts employing the assimilation of flash data were found to exhibit more skill than control simulations, particularly for the intense (>20 mm) 24 h rain accumulations. Analysis of the results also revealed that the option not to suppress the KF scheme in the absence of observed lightning leads to generally better performance than the experiments employing full control of the CPS triggering. Overall, the implementation of the lightning data assimilation technique is found to improve the model's ability to represent convection, especially in situations when past convection has modified the mesoscale environment in ways that affect the occurrence and evolution of subsequent convection.
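
    The control rule itself is compact. The sketch below is only a schematic of the trigger logic, not WRF code; the grid-cell flash counts, trigger flags, and optional suppression switch are illustrative:

```python
# Lightning-as-proxy control of a convective scheme's trigger, per grid cell:
# observed flashes force triggering; optionally veto triggering elsewhere.
import numpy as np

def control_cps(flash_count, cps_trigger, suppress=False):
    forced = flash_count > 0            # lightning implies deep convection
    out = cps_trigger | forced          # force the scheme where flashes occur
    if suppress:
        out &= forced                   # optional: suppress where no lightning
    return out

flashes = np.array([0, 3, 0, 1])
trigger = np.array([True, False, False, False])
print(control_cps(flashes, trigger))                  # [ True  True False  True]
print(control_cps(flashes, trigger, suppress=True))   # [False  True False  True]
```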

  11. Short Duration Base Heating Test Improvements

    NASA Technical Reports Server (NTRS)

    Bender, Robert L.; Dagostino, Mark G.; Engel, Bradley A.; Engel, Carl D.

    1999-01-01

    Significant improvements have been made to a short duration space launch vehicle base heating test technique. This technique was first developed during the 1960's to investigate launch vehicle plume induced convective environments. Recent improvements include the use of coiled nitrogen buffer gas lines upstream of the hydrogen / oxygen propellant charge tubes, fast acting solenoid valves, stand alone gas delivery and data acquisition systems, and an integrated model design code. Technique improvements were successfully demonstrated during a 2.25% scale X-33 base heating test conducted in the NASA/MSFC Nozzle Test Facility in early 1999. Cost savings of approximately an order of magnitude over previous tests were realized due in large part to these improvements.

  12. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time scale distortion on each path is assumed to be the same (a single scale channel model, in contrast to a more general multi scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi scale model for the underwater channel and assume that single scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  13. Helping agencies improve their planning analysis techniques.

    DOT National Transportation Integrated Search

    2011-11-18

    This report summarizes the results of a peer review of the AZTDM. The peer review was : supported by the Travel Model Improvement Program (TMIP), which is sponsored by FHWA. : The peer review of a travel model can serve multiple purposes, including i...

  14. Modeling the Malaysian motor insurance claim using artificial neural network and adaptive NeuroFuzzy inference system

    NASA Astrophysics Data System (ADS)

    Mohd Yunos, Zuriahati; Shamsuddin, Siti Mariyam; Ismail, Noriszura; Sallehuddin, Roselina

    2013-04-01

    An artificial neural network (ANN) with the back propagation algorithm (BP) and ANFIS were chosen as alternative techniques for modeling motor insurance claims. In particular, the ANN and ANFIS techniques are applied to model and forecast Malaysian motor insurance data categorized into four claim types: third party property damage (TPPD), third party bodily injury (TPBI), own damage (OD) and theft. This study determines whether ANN and ANFIS models are capable of accurately predicting motor insurance claims. Changes were made to the network structure; the number of input nodes, the number of hidden nodes, and pre-processing techniques are examined, and a cross-validation technique is used to improve the generalization ability of the ANN and ANFIS models. Based on the empirical studies, the prediction performance of the ANN and ANFIS models is improved by using different numbers of input nodes and hidden nodes, and also various sizes of data. The experimental results reveal that the ANFIS model outperformed the ANN model. Both models are capable of producing a reliable prediction for the Malaysian motor insurance claims and hence, the proposed method can be applied as an alternative to predict claim frequency and claim severity.
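
    ANFIS has no standard implementation in the mainstream Python stack, so the sketch below covers only the ANN side of such a comparison: a small backprop network whose hidden-layer structure is varied and scored by k-fold cross-validation, as the record describes. The stand-in data and layer sizes are assumptions.

```python
# ANN side of the comparison: a backprop network with varying hidden-layer
# structure, scored by 5-fold cross-validation (synthetic stand-in data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                          # e.g. rating factors
y = np.exp(0.5 * X[:, 0] - 0.3 * X[:, 2]) + rng.normal(0, 0.1, 300)  # severity proxy

for hidden in [(5,), (10,), (10, 5)]:                  # vary hidden nodes/layers
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
    print(hidden, round(cross_val_score(net, X, y, cv=5, scoring="r2").mean(), 3))
```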

  15. AN IMPROVED SOCKING TECHNIQUE FOR MASTER SLAVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, T.C.; Deckard, L.E.; Howe, P.W.

    1962-10-29

    A technique for socking a pair of standard Model 8 master-slave manipulators is described. The technique is primarily concerned with the fabrication of the bellows section, which provides for Z motion as well as wrist movement and rotation. (N.W.R.)

  16. Energy and technology review: Engineering modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.

    1986-10-01

    This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.

  17. Investigation and Development of Control Laws for the NASA Langley Research Center Cockpit Motion Facility

    NASA Technical Reports Server (NTRS)

    Coon, Craig R.; Cardullo, Frank M.; Zaychik, Kirill B.

    2014-01-01

    The ability to develop highly advanced simulators is a critical need with the potential to significantly impact the aerospace industry. The aerospace industry is advancing at an ever increasing pace, and flight simulators must match this development. In order to address both current problems and potential advancements in flight simulator techniques, several aspects of the current control law technology of the National Aeronautics and Space Administration (NASA) Langley Research Center's Cockpit Motion Facility (CMF) motion base simulator were examined. A preliminary investigation of linear models based upon hardware data was conducted to ensure that the most accurate models are used. This research identified both system improvements in the bandwidth and more reliable linear models. Advancements in the compensator design were developed and verified through multiple techniques. The position error rate feedback, the acceleration feedback and the force feedback were all analyzed in the heave direction using the nonlinear model of the hardware. Improvements were made using the position error rate feedback technique. The acceleration feedback compensator also provided noteworthy improvement, while attempts at implementing a force feedback compensator proved unsuccessful.

  18. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    PubMed

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding and systematic appraisal, and to identify areas of improvement of a business process. Unified modelling language (UML) is a general purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase - parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  19. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
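
    The linear-sensitivity step lends itself to a compact sketch: each update solves S Δp = Δm, where S holds the derivatives of selected system-matrix entries with respect to the physical parameters and Δm is the measured-minus-model residual. An SVD pseudo-inverse handles rank deficiency, in the spirit of the record's constrained optimization with singular value decomposition; the numbers are invented:

```python
# One iteration of sensitivity-based model updating: solve S dp = dm with an
# SVD pseudo-inverse, truncating small singular values for ill-conditioned S.
import numpy as np

S = np.array([[1.0, 0.5, 0.1],    # d(system-matrix entries)/d(parameters)
              [0.2, 1.1, 0.0],
              [0.0, 0.3, 0.9],
              [0.4, 0.0, 0.2]])
dm = np.array([0.12, -0.05, 0.08, 0.02])   # measured-minus-model changes

dp = np.linalg.pinv(S, rcond=1e-3) @ dm    # SVD solution, tiny modes dropped
print(dp)                                   # parameter corrections to apply
```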

  20. Static Aeroelastic Predictions for a Transonic Transport Model Using an Unstructured-Grid Flow Solver Coupled With a Structural Plate Technique

    NASA Technical Reports Server (NTRS)

    Allison, Dennis O.; Cavallo, Peter A.

    2003-01-01

    An equivalent-plate structural deformation technique was coupled with a steady-state unstructured-grid three-dimensional Euler flow solver and a two-dimensional strip interactive boundary-layer technique. The objective of the research was to assess the extent to which a simple accounting for static model deformations could improve correlations with measured wing pressure distributions and lift coefficients at transonic speeds. Results were computed and compared to test data for a wing-fuselage model of a generic low-wing transonic transport at a transonic cruise condition over a range of Reynolds numbers and dynamic pressures. The deformations significantly improved correlations with measured wing pressure distributions and lift coefficients. This method provided a means of quantifying the role of dynamic pressure in wind-tunnel studies of Reynolds number effects for transonic transport models.

  1. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
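
    A response surface in this setting is typically a low-order polynomial fitted to a modest number of expensive structural-optimization results, then used as a cheap surrogate inside the design loop. A minimal quadratic-surface sketch in two design variables, with synthetic sampled weights:

```python
# Fit a quadratic response surface w(x1, x2) to sampled optimization results,
# then evaluate the surrogate cheaply at a new design point.
import numpy as np

def design(x1, x2):   # full quadratic basis in two design variables
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)
w = 5 + 2 * x1 - x2 + 0.5 * x1 * x2 + rng.normal(0, 0.05, 30)   # sampled weights

coef, *_ = np.linalg.lstsq(design(x1, x2), w, rcond=None)
print(design(np.array([0.5]), np.array([0.5])) @ coef)          # surrogate estimate
```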

  2. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-04-24

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  4. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    NASA Astrophysics Data System (ADS)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo Spectral Time Domain) is an excellent model for the light scattering simulation of nonspherical aerosol particles. However, due to the particularity of its discretization form of the Maxwell's equations, the traditional Total Field/Scattering Field (TF/SF) technique for FDTD (Finite Difference Time Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total field and the scattering field region with incident terms, where the incident terms are obtained by weighting the incident field by a window function. To optimally determine the thickness of the connection region and the window function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattering field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD is not sensitive to the variation of window functions. The number of connection layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show an excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident wave precisely. The weighted TF/SF technique shows higher computational efficiency than the pure scattering field technique.

  5. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
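
    At a single station the BLR step reduces to a conjugate Bayesian regression of observations on raw forecasts, whose posterior-mean coefficients then correct future raw values; in the gridded version these coefficients are interpolated back to the model domain. A minimal sketch with invented prior scales and synthetic observation-model pairs:

```python
# Conjugate Bayesian linear regression at one station: posterior mean of the
# coefficients mapping raw forecasts to observations, then a corrected value.
import numpy as np

rng = np.random.default_rng(3)
raw = rng.normal(10, 3, 92)                      # raw wind forecasts, 92 storms
obs = 0.8 * raw + 1.5 + rng.normal(0, 1.0, 92)   # paired observations

X = np.column_stack([np.ones_like(raw), raw])
tau2, sigma2 = 10.0, 1.0                         # prior/noise variances (assumed)
A = X.T @ X / sigma2 + np.eye(2) / tau2
w_post = np.linalg.solve(A, X.T @ obs / sigma2)  # posterior-mean coefficients

print(w_post @ [1.0, 12.0])                      # corrected forecast for raw = 12
```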

  6. Turning Continuous Quality Improvement into Institutional Practice: The Tools and Techniques.

    ERIC Educational Resources Information Center

    Cornesky, Robert A.

    This manual is intended to assist managers of support units at institutions of higher education in the implementation of Continuous Quality Improvement (CQI). The purpose is to describe a cooperative model for CQI which will permit managers to evaluate the quality of their units and institution, and by using the described tools and techniques, to…

  7. An improved survivability prognosis of breast cancer by using sampling and feature selection technique to solve imbalanced patient classification data.

    PubMed

    Wang, Kung-Jeng; Makond, Bunjira; Wang, Kung-Min

    2013-11-09

    Breast cancer is one of the most critical cancers and is a major cause of cancer death among women. It is essential to know the survivability of the patients in order to ease the decision making process regarding medical treatment and financial preparation. Recently, the breast cancer data sets have been imbalanced (i.e., the number of survival patients outnumbers the number of non-survival patients) whereas the standard classifiers are not applicable for imbalanced data sets. Methods to improve the survivability prognosis of breast cancer therefore need study. Two well-known five-year prognosis models/classifiers [i.e., logistic regression (LR) and decision tree (DT)] are constructed by combining the synthetic minority over-sampling technique (SMOTE), the cost-sensitive classifier technique (CSC), under-sampling, bagging, and boosting. The feature selection method is used to select relevant variables, while the pruning technique is applied to obtain low information-burden models. These methods are applied on data obtained from the Surveillance, Epidemiology, and End Results database. The improvements in survivability prognosis of breast cancer are investigated based on the experimental results. Experimental results confirm that the DT and LR models combined with SMOTE, CSC, and under-sampling consistently generate higher predictive performance than the original ones. Most of the time, DT and LR models combined with SMOTE and CSC use fewer features (a lower information burden) when a feature selection method and a pruning technique are applied. LR is found to have better statistical power than DT in predicting five-year survivability. CSC is superior to SMOTE, under-sampling, bagging, and boosting in improving the prognostic performance of DT and LR.
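
    Two of the rebalancing strategies named above are easy to sketch: SMOTE oversampling of the minority class (here via the imbalanced-learn package, an assumed dependency) and a cost-sensitive decision tree approximated with class weights. The synthetic data and the 1:9 cost ratio are illustrative:

```python
# SMOTE oversampling vs. a cost-sensitive decision tree on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import balanced_accuracy_score
from imblearn.over_sampling import SMOTE    # assumes imbalanced-learn installed

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

Xs, ys = SMOTE(random_state=0).fit_resample(Xtr, ytr)        # oversample minority
smote_dt = DecisionTreeClassifier(random_state=0).fit(Xs, ys)
csc_dt = DecisionTreeClassifier(class_weight={0: 1, 1: 9},   # cost-sensitive
                                random_state=0).fit(Xtr, ytr)

for name, clf in [("SMOTE+DT", smote_dt), ("CSC+DT", csc_dt)]:
    print(name, round(balanced_accuracy_score(yte, clf.predict(Xte)), 3))
```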

  9. A progress report on seismic model studies

    USGS Publications Warehouse

    Healy, J.H.; Mangan, G.B.

    1963-01-01

    The value of seismic-model studies as an aid to understanding wave propagation in the Earth's crust was recognized by early investigators (Tatel and Tuve, 1955). Preliminary model results were very promising, but progress in model seismology has been restricted by two problems: (1) difficulties in the development of models with continuously variable velocity-depth functions, and (2) difficulties in the construction of models of adequate size to provide a meaningful wave-length to layer-thickness ratio. The problem of a continuously variable velocity-depth function has been partly solved by a technique using two-dimensional plate models constructed by laminating plastic to aluminum, so that the ratio of plastic to aluminum controls the velocity-depth function (Healy and Press, 1960). These techniques provide a continuously variable velocity-depth function, but it is not possible to construct such models large enough to study short-period wave propagation in the crust. This report describes improvements in our ability to machine large models. Two types of models are being used: one is a cylindrical aluminum tube machined on a lathe, and the other is a large plate machined on a precision planer. Both of these modeling techniques give promising results and are a significant improvement over earlier efforts.

  10. A Unified Data Assimilation Strategy for Regional Coupled Atmosphere-Ocean Prediction Systems

    NASA Astrophysics Data System (ADS)

    Xie, Lian; Liu, Bin; Zhang, Fuqing; Weng, Yonghui

    2014-05-01

    Improving tropical cyclone (TC) forecasts is a top priority in weather forecasting. Assimilating various observational data to produce better initial conditions for numerical models using advanced data assimilation techniques has been shown to benefit TC intensity forecasts, whereas assimilating large-scale environmental circulation into regional models by spectral nudging or Scale-Selective Data Assimilation (SSDA) has been demonstrated to improve TC track forecasts. Meanwhile, taking into account various air-sea interaction processes with high-resolution coupled air-sea modelling systems has also been shown to improve TC intensity forecasts. Despite the advances in data assimilation and air-sea coupled models, large errors in TC intensity and track forecasting remain. For example, Hurricane Nate (2011) brought considerable challenges for the TC operational forecasting community, with very large intensity forecast errors (27, 25, and 40 kts for 48, 72, and 96 h, respectively) for the official forecasts. Considering the slow-moving nature of Hurricane Nate, it is reasonable to hypothesize that air-sea interaction processes played a critical role in the intensity change of the storm, and accurate representation of the upper ocean dynamics and thermodynamics is necessary to quantitatively describe the air-sea interaction processes. Currently, data assimilation techniques are generally applied to hurricane forecasting only in stand-alone atmospheric or oceanic models. In fact, most regional hurricane forecasting models only include data assimilation techniques for improving the initial condition of the atmospheric model. In such a situation, the benefit of adjustments in one model (atmospheric or oceanic) by assimilating observational data can be compromised by errors from the other model. Thus, unified data assimilation techniques for coupled air-sea modelling systems, which not only simultaneously assimilate atmospheric and oceanic observations into the coupled air-sea modelling system but also nudge the large-scale environmental flow in the regional model towards global model forecasts, are increasingly necessary. In this presentation, we will outline a strategy for an integrated approach to air-sea coupled data assimilation and discuss its benefits and feasibility using incremental results for select historical hurricane cases.

  11. Control system design for flexible structures using data models

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.

    1993-01-01

    The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research in developing and tailoring control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control, and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically and finite-element generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system ID techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describe the plant; then, iteratively vary the free parameters of the controller so that performance measures come closer to satisfying design specifications. The frequency response data can be either experimentally derived or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H(sub infinity) type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.

  12. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

    Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT-simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
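
    A minimal sketch of the CCA idea with scikit-learn: fit the coupled structure between paired simulated and observed (streamflow, load) vectors, then map raw simulations toward observed space through the fitted transform. The synthetic data and two-component choice are assumptions, not the study's configuration:

```python
# CCA-based multivariate bias correction sketch: learn the coupled structure
# of simulated vs. observed (flow, TN load) pairs, then correct simulations.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
sim = rng.lognormal(mean=1.0, sigma=0.4, size=(365, 2))     # model (flow, load)
obs = 0.9 * sim + 0.3 + rng.normal(0, 0.1, (365, 2))        # paired observations

cca = CCA(n_components=2).fit(sim, obs)
corrected = cca.predict(sim)                 # simulations mapped toward obs space
print(np.abs(corrected - obs).mean(axis=0))  # mean absolute error after correction
```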

  13. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.

  15. Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.

    2016-06-01

    The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived datasets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits for the model's discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (for the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value to the model was found in the rainfall season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine scale resolution in a small basin, can be improved with the assimilation of coarse-scale satellite-derived data.
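
    Nudging in this sense is a one-line update: the model state is relaxed toward the (rescaled) retrieval by a gain between 0 and 1. The gain and soil-moisture values below are invented for illustration:

```python
# Newtonian nudging of a model soil-moisture state toward a satellite retrieval.
def nudge(state, retrieval, G=0.3):
    return state + G * (retrieval - state)   # relax toward the observation

w_model, w_sat = 0.22, 0.30                  # volumetric soil moisture [m3/m3]
for _ in range(3):                           # successive assimilation cycles
    w_model = nudge(w_model, w_sat)
    print(round(w_model, 4))                 # 0.244, 0.2608, 0.2726 -> toward 0.30
```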

  16. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years the Distributed Point Source Method (DPSM) has been used for various ultrasonic, electrostatic and electromagnetic field modelling problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent. Complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies the contributions of the point sources in the shadow region. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.

  17. Numerical modeling of pollutant transport using a Lagrangian marker particle technique

    NASA Technical Reports Server (NTRS)

    Spaulding, M.

    1976-01-01

    A derivation and code were developed for the three-dimensional mass transport equation, using a particle-in-cell solution technique, to solve coastal zone waste discharge problems where particles are a major component of the waste. Improvements in the particle movement techniques are suggested and typical examples illustrated. Preliminary model comparisons with analytic solutions for an instantaneous point release in a uniform flow show good results in resolving the waste motion. The findings to date indicate that this computational model will provide a useful technique to study the motion of sediment, dredged spoils, and other particulate waste commonly deposited in coastal waters.
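
    A marker-particle scheme of this kind can be sketched in a few lines: advect each particle with the flow, add a random walk whose variance matches the diffusivity, and bin particles onto grid cells to recover concentration. The flow, diffusivity, and instantaneous point release below are invented for illustration:

```python
# Lagrangian marker particles for pollutant transport: uniform advection plus
# a diffusive random walk, with a histogram as the particle-in-cell binning.
import numpy as np

rng = np.random.default_rng(5)
n, dt, steps = 5000, 60.0, 100               # particles, time step [s], steps
u, v, D = 0.2, 0.05, 0.5                     # flow [m/s], diffusivity [m2/s]
pos = np.zeros((n, 2))                       # instantaneous release at origin

for _ in range(steps):
    pos += [u * dt, v * dt]                               # advection
    pos += rng.normal(0, np.sqrt(2 * D * dt), (n, 2))     # random-walk diffusion

conc, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=50)  # particles per cell
print(conc.max(), pos.mean(axis=0))          # peak cell count, plume centroid
```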

  19. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
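
    The distinction the record draws can be shown compactly: an output-error estimator simulates the model forward from the measured input and fits parameters to the measured output, rather than regressing on noisy lagged outputs as equation-error least squares schemes do. The first-order top-oil rise model, parameter values, and load profile below are assumptions for illustration:

```python
# Output-error estimation sketch: fit (gain K, time constant tau) of a
# first-order top-oil temperature rise model, tau*dT/dt = K*u^2 - T, by
# matching a forward simulation to the measured output.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
dt, T = 0.25, 200                             # sample time [h], samples
load = 0.6 + 0.4 * (np.arange(T) % 96 < 48)   # assumed daily load cycle [p.u.]

def simulate(params, u):
    K, tau = params
    theta = np.zeros(len(u))
    for t in range(1, len(u)):                # forward Euler integration
        theta[t] = theta[t - 1] + dt / tau * (K * u[t - 1] ** 2 - theta[t - 1])
    return theta

measured = simulate([30.0, 4.0], load) + rng.normal(0, 0.5, T)
fit = least_squares(lambda p: simulate(p, load) - measured, x0=[20.0, 2.0])
print(fit.x)                                  # estimates close to the true (30, 4)
```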

  20. Guiding and Modelling Quality Improvement in Higher Education Institutions

    ERIC Educational Resources Information Center

    Little, Daniel

    2015-01-01

    The article considers the process of creating quality improvement in higher education institutions from the point of view of current organisational theory and social-science modelling techniques. The author considers the higher education institution as a functioning complex of rules, norms and other organisational features and reviews the social…

  1. Evolution and revolution: gauging the impact of technological and technical innovation on Olympic performance.

    PubMed

    Balmer, Nigel; Pleasence, Pascoe; Nevill, Alan

    2012-01-01

    A number of studies have pointed to a plateauing of athletic performance, with the suggestion that further improvements will need to be driven by revolutions in technology or technique. In the present study, we examine post-war men's Olympic performance in jumping events (pole vault, long jump, high jump, triple jump) to determine whether performance has indeed plateaued and to present techniques, derived from models of human growth, for assessing the impact of technological and technical innovation over time (logistic and double logistic models of growth). Significantly, two of the events involve well-documented changes in technology (pole material in pole vault) or technique (the Fosbury Flop in high jump), while the other two do not. We find that in all four cases, performance appears to have plateaued and that no further "general" improvement should be expected. In the case of high jump, the double logistic model provides a convenient method for modelling and quantifying a performance intervention (in this case the Fosbury Flop). However, some shortcomings are revealed for pole vault, where evolutionary post-war improvements and innovation (fibre glass poles) were concurrent, preventing their separate identification in the model. In all four events, it is argued that further general growth in performance will indeed need to rely predominantly on technological or technical innovation.
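
    A double logistic growth curve is simply the sum of two S-curves, with the second capturing the step contributed by an innovation such as the Fosbury Flop. A curve-fitting sketch with SciPy on synthetic high-jump-like data (all parameter values are invented):

```python
# Double logistic growth model: baseline plus two logistic terms, the second
# representing a performance step from a technical innovation.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, c0, a1, b1, t1, a2, b2, t2):
    return (c0 + a1 / (1 + np.exp(-b1 * (t - t1)))
               + a2 / (1 + np.exp(-b2 * (t - t2))))

years = np.arange(1948, 2013, 4.0)            # post-war Olympiads
true = double_logistic(years, 1.98, 0.20, 0.3, 1958, 0.18, 0.5, 1970)
perf = true + np.random.default_rng(7).normal(0, 0.01, years.size)

p0 = [2.0, 0.2, 0.2, 1960, 0.2, 0.2, 1972]    # rough initial guesses
params, _ = curve_fit(double_logistic, years, perf, p0=p0, maxfev=20000)
print(params.round(3))                        # recovered curve parameters
```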

  2. Shuttle TPS thermal performance and analysis methodology

    NASA Technical Reports Server (NTRS)

    Neuenschwander, W. E.; Mcbride, D. U.; Armour, G. A.

    1983-01-01

    Thermal performance of the thermal protection system was approximately as predicted. The only extensive anomalies were filler bar scorching and over-predictions in the high Delta p gap heating regions of the orbiter. A technique to predict filler bar scorching has been developed that can aid in defining a solution. Improvement in high Delta p gap heating methodology is still under study. Minor anomalies were also examined for improvements in modeling techniques and prediction capabilities. These include improved definition of low Delta p gap heating, an analytical model for inner mold line convection heat transfer, better modeling of structure, and inclusion of sneak heating. The limited number of problems related to penetration items that presented themselves during orbital flight tests were resolved expeditiously, and designs were changed and proved successful within the time frame of that program.

  3. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.
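    The averaging half of the combined approach can be illustrated with a standard state-space averaged model of an ideal buck converter, in which the switching action is replaced by the duty cycle d; this is a generic textbook sketch, not the paper's discrete average model:

      import numpy as np
      from scipy.integrate import solve_ivp

      # State-space averaged model of an ideal buck converter (states: inductor
      # current iL, capacitor voltage vC). Component values are illustrative.
      L, C, R, Vin, d = 100e-6, 100e-6, 5.0, 12.0, 0.5

      def averaged(t, x):
          iL, vC = x
          diL = (d * Vin - vC) / L          # duty cycle d replaces the switch
          dvC = (iL - vC / R) / C
          return [diL, dvC]

      sol = solve_ivp(averaged, (0, 5e-3), [0.0, 0.0], max_step=1e-6)
      print("steady-state output ~ %.2f V (expect d*Vin = 6 V)" % sol.y[1, -1])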

  4. Assimilation of Satellite Data to Improve Cloud Simulation in the WRF Model

    NASA Astrophysics Data System (ADS)

    Park, Y. H.; Pour Biazar, A.; McNider, R. T.

    2012-12-01

    A simple approach has been introduced to improve cloud simulation spatially and temporally in a meteorological model. The first step of this approach is to use Geostationary Operational Environmental Satellite (GOES) observations to identify clouds and estimate cloud structure. Then, by comparing GOES observations to the model cloud field, we identify areas in which the model has under-predicted or over-predicted clouds. Next, by introducing subsidence in areas of over-prediction and lifting in areas of under-prediction, erroneous clouds are removed and new clouds are formed. The technique estimates the vertical velocity needed for the cloud correction and then uses a one-dimensional variational scheme (1D-Var) to calculate the horizontal divergence components and the consequent horizontal wind components needed to sustain such vertical velocity. Finally, the new horizontal winds are provided as a nudging field to the model. This nudging provides the dynamical support needed to create/clear clouds in a sustainable manner. The technique was implemented and tested in the Weather Research and Forecasting (WRF) Model and resulted in substantial improvement in model-simulated clouds. Some of the results are presented here.

  5. Manufacture of conical springs with elastic medium technology improvement

    NASA Astrophysics Data System (ADS)

    Kurguzov, S. A.; Mikhailova, U. V.; Kalugina, O. B.

    2018-01-01

    This article considers improving manufacturing technology by using an elastic medium in the forming space of the stamping tool, in order to improve the performance characteristics of conical springs and reduce the costs of their production. An estimation technique for disk spring operational properties is developed by mathematically modeling the compression process during spring operation. A technique for optimizing the design parameters of a conical spring is also developed, ensuring a minimum stress value at the edge of the spring opening during operation.

  6. Development of an Improved Time Varying Loudness Model with the Inclusion of Binaural Loudness Summation

    NASA Astrophysics Data System (ADS)

    Charbonneau, Jeremy

    As the perceived quality of a product becomes more important in the manufacturing industry, more emphasis is being placed on accurately predicting the sound quality of everyday objects. This study was undertaken to improve upon current prediction techniques for the psychoacoustic descriptor of loudness, with particular attention to an improved binaural summation technique. The feasibility of this project was first investigated through a loudness-matching experiment involving thirty-one subjects and pure tones of constant sound pressure level. A dependence of binaural summation on frequency was observed which had previously not been a subject of investigation in the reviewed literature. A follow-up investigation was carried out with forty-eight volunteers and pure tones of constant sensation level. Contrary to existing theories in the literature, the resulting loudness matches revealed an amplitude-versus-frequency relationship which confirmed the perceived increase in loudness when a signal is presented to both ears simultaneously as opposed to one ear alone. The resulting trend strongly indicated that the higher the frequency of the presented signal, the greater the observed binaural summation. The results from each investigation were summarized into a single binaural summation algorithm and inserted into an improved time-varying loudness model. Using experimental techniques, it was demonstrated that the updated binaural summation algorithm was a considerable improvement over the state-of-the-art approach for predicting perceived binaural loudness. The improved function retained the ease of use of the original model while additionally providing accurate estimates of diotic listening conditions from monaural WAV files. A validation jury test clearly demonstrated that the revised time-varying loudness model was a significant improvement over the previously standardized approach.

  7. Applying model abstraction techniques to optimize monitoring networks for detecting subsurface contaminant transport

    USDA-ARS?s Scientific Manuscript database

    Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...

  8. Can Cultural Competency Reduce Racial And Ethnic Health Disparities? A Review And Conceptual Model

    PubMed Central

    Brach, Cindy; Fraser, Irene

    2016-01-01

    This article develops a conceptual model of cultural competency’s potential to reduce racial and ethnic health disparities, using the cultural competency and disparities literature to lay the foundation for the model and inform assessments of its validity. The authors identify nine major cultural competency techniques: interpreter services, recruitment and retention policies, training, coordinating with traditional healers, use of community health workers, culturally competent health promotion, including family/community members, immersion into another culture, and administrative and organizational accommodations. The conceptual model shows how these techniques could theoretically improve the ability of health systems and their clinicians to deliver appropriate services to diverse populations, thereby improving outcomes and reducing disparities. The authors conclude that while there is substantial research evidence to suggest that cultural competency should in fact work, health systems have little evidence about which cultural competency techniques are effective and less evidence on when and how to implement them properly. PMID:11092163

  9. Improved reconstruction and sensing techniques for personnel screening in three-dimensional cylindrical millimeter-wave portal scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandes, Justin L.; Rappaport, Carey M.; Sheen, David M.

    2011-05-01

    The cylindrical millimeter-wave imaging technique, developed at Pacific Northwest National Laboratory (PNNL) and commercialized by L-3 Communications/Safeview in the ProVision system, is currently being deployed in airports and other high-security locations to meet person-borne weapon and explosive detection requirements. While this system is efficient and effective in its current form, there are a number of areas in which the detection performance may be improved through different reconstruction algorithms and sensing configurations. PNNL and Northeastern University have teamed together to investigate higher-order imaging artifacts produced by the current cylindrical millimeter-wave imaging technique using full-wave forward modeling and laboratory experimentation. Based on imaging results and scattered-field visualizations using the full-wave forward model, a new imaging system is proposed. The new system combines a multistatic sensor configuration with the generalized synthetic aperture focusing technique (GSAFT). Initial results show an improved ability to image areas of the body where target shading and specular and higher-order reflections render images produced by the monostatic system difficult to interpret.
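    The focusing operation underlying techniques of the GSAFT family is delay-and-sum backprojection over transmitter/receiver pairs. The following sketch (idealized 2-D geometry, synthetic echoes, illustrative sampling values) shows that operation for a small multistatic array:

      import numpy as np

      # Delay-and-sum (backprojection) imaging for a multistatic array, the basic
      # operation behind synthetic-aperture focusing; geometry and data synthetic.
      c = 3e8
      tx = np.column_stack([np.linspace(-0.5, 0.5, 8), np.zeros(8)])   # transmitters
      rx = tx.copy()                                                    # receivers
      target = np.array([0.1, 0.6])
      fs, nt = 20e9, 400
      t = np.arange(nt) / fs

      # Synthesize pulses: one echo per tx/rx pair at the round-trip delay.
      data = np.zeros((len(tx), len(rx), nt))
      for i, p in enumerate(tx):
          for j, q in enumerate(rx):
              tau = (np.linalg.norm(target - p) + np.linalg.norm(target - q)) / c
              data[i, j] = np.sinc((t - tau) * 2e9)

      # Image grid: coherently sum each pair's sample at the pixel's delay.
      xs, ys = np.linspace(-0.3, 0.3, 61), np.linspace(0.3, 0.9, 61)
      img = np.zeros((len(ys), len(xs)))
      for iy, y in enumerate(ys):
          for ix, x in enumerate(xs):
              pix = np.array([x, y])
              for i, p in enumerate(tx):
                  for j, q in enumerate(rx):
                      tau = (np.linalg.norm(pix - p) + np.linalg.norm(pix - q)) / c
                      img[iy, ix] += data[i, j, min(int(tau * fs), nt - 1)]
      peak = np.unravel_index(np.argmax(img), img.shape)
      print("peak at x=%.2f m, y=%.2f m" % (xs[peak[1]], ys[peak[0]]))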

  10. Countering imbalanced datasets to improve adverse drug event predictive models in labor and delivery.

    PubMed

    Taft, L M; Evans, R S; Shyu, C R; Egger, M J; Chawla, N; Mitchell, J A; Thornton, S N; Bray, B; Varner, M

    2009-04-01

    The IOM report, Preventing Medication Errors, emphasizes the overall lack of knowledge of the incidence of adverse drug events (ADE). Operating rooms, emergency departments and intensive care units are known to have a higher incidence of ADE. Labor and delivery (L&D) is an emergency care unit that could have an increased risk of ADE, where reported rates remain low and under-reporting is suspected. Risk factor identification with electronic pattern recognition techniques could improve ADE detection rates. The objective of the present study is to apply Synthetic Minority Over Sampling Technique (SMOTE) as an enhanced sampling method in a sparse dataset to generate prediction models to identify ADE in women admitted for labor and delivery based on patient risk factors and comorbidities. By creating synthetic cases with the SMOTE algorithm and using a 10-fold cross-validation technique, we demonstrated improved performance of the Naïve Bayes and the decision tree algorithms. The true positive rate (TPR) of 0.32 in the raw dataset increased to 0.67 in the 800% over-sampled dataset. Enhanced performance from classification algorithms can be attained with the use of synthetic minority class oversampling techniques in sparse clinical datasets. Predictive models created in this manner can be used to develop evidence based ADE monitoring systems.
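    A minimal version of the workflow the abstract describes, using the SMOTE implementation in the imbalanced-learn package on a synthetic stand-in for a sparse ADE dataset (class ratios and scores are illustrative):

      import numpy as np
      from imblearn.over_sampling import SMOTE        # pip install imbalanced-learn
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      # Synthetic stand-in for a sparse ADE dataset: ~2% positive class.
      X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98],
                                 random_state=0)

      clf = GaussianNB()
      print("raw recall:",
            cross_val_score(clf, X, y, cv=10, scoring="recall").mean())

      # Oversample the minority class with synthetic examples, then re-evaluate.
      # (For a rigorous estimate, SMOTE should run inside each CV fold, e.g. with
      # an imblearn Pipeline; it is applied up front here only for brevity.)
      X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
      print("SMOTE recall:",
            cross_val_score(clf, X_res, y_res, cv=10, scoring="recall").mean())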

  11. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits: higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method also has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
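    The general idea of automatic step-size adjustment can be sketched with a step-doubling error estimate wrapped around a fourth-order Runge-Kutta integrator; this illustrates the ASA concept on a toy gain equation and is not the paper's specific APA formulation:

      import numpy as np

      # Generic automatic step-size adjustment by step doubling: take one full RK4
      # step and two half steps, use their difference as a local error estimate,
      # and grow/shrink h to meet a tolerance.
      def rk4_step(f, t, y, h):
          k1 = f(t, y)
          k2 = f(t + h / 2, y + h / 2 * k1)
          k3 = f(t + h / 2, y + h / 2 * k2)
          k4 = f(t + h, y + h * k3)
          return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      def integrate_adaptive(f, t0, t1, y0, h=1e-2, tol=1e-8):
          t, y = t0, y0
          while t < t1:
              h = min(h, t1 - t)
              full = rk4_step(f, t, y, h)
              half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
              err = abs(half - full)
              if err <= tol:                            # accept the step
                  t, y = t + h, half
              h *= 0.9 * (tol / (err + 1e-16)) ** 0.2   # 4th-order controller
          return y

      # Toy gain equation dP/dz = g(z) * P along an amplifying fiber.
      f = lambda z, P: (2.0 * np.exp(-z)) * P
      print(integrate_adaptive(f, 0.0, 5.0, 1e-3))  # ~1e-3 * exp(2*(1 - e**-5))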

  12. Improved aortic enhancement in CT angiography using slope-based triggering with table speed optimization: a pilot study.

    PubMed

    Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C

    2012-08-01

    To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.

  13. Prostate Cancer Probability Prediction By Machine Learning Technique.

    PubMed

    Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

    2017-11-26

    The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to develop suitable prediction models of prostate cancer. With a relevant prediction of prostate cancer, it is easier to devise a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared, and the obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for relevant prediction of prostate cancer.

  14. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations, and the materials being imaged to obtain high-quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions in the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  15. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations, and the materials being imaged to obtain high-quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions in the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
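    A minimal sketch of the MBIR idea, stated as regularized inversion: reconstruct x by minimizing a data-fidelity term plus a smoothness penalty. A real ultrasonic MBIR system would use an anisotropic wave-physics forward model; the forward operator A below is a generic blurring matrix used purely for illustration:

      import numpy as np

      # Minimal model-based iterative reconstruction: solve
      #   x* = argmin ||A x - y||^2 + lam * ||D x||^2
      # by gradient descent, where A is a forward model and D a finite-difference
      # regularizer.
      rng = np.random.default_rng(0)
      n = 64
      A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
                    for i in range(n)])                 # Gaussian blur stand-in
      x_true = np.zeros(n); x_true[20:28] = 1.0; x_true[45] = 2.0
      y = A @ x_true + rng.normal(0, 0.05, n)

      D = np.eye(n) - np.eye(n, k=1)                    # first-difference operator
      lam, step = 0.1, 1e-3
      x = np.zeros(n)
      for _ in range(5000):
          grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
          x -= step * grad
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))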

  16. Preliminary numerical analysis of improved gas chromatograph model

    NASA Technical Reports Server (NTRS)

    Woodrow, P. T.

    1973-01-01

    A mathematical model for the gas chromatograph was developed which incorporates the heretofore neglected transport mechanisms of intraparticle diffusion and rates of adsorption. Because a closed-form analytical solution to the model does not appear realizable, techniques for the numerical solution of the model equations are being investigated. Criteria were developed for using a finite terminal boundary condition in place of an infinite boundary condition used in analytical solution techniques. The class of weighted residual methods known as orthogonal collocation is presently being investigated and appears promising.

  17. In vitro oral drug permeation models: the importance of taking physiological and physico-chemical factors into consideration.

    PubMed

    Joubert, Ruan; Steyn, Johan Dewald; Heystek, Hendrik Jacobus; Steenekamp, Jan Harm; Du Preez, Jan Lourens; Hamman, Josias Hendrik

    2017-02-01

    The assessment of intestinal membrane permeability properties of new chemical entities is a crucial step in the drug discovery and development process and a variety of in vitro models, methods and techniques are available to estimate the extent of oral drug absorption in humans. However, variations in certain physiological and physico-chemical factors are often not reflected in the results and the complex dynamic interplay between these factors is sometimes oversimplified with in vitro models. Areas covered: In vitro models to evaluate drug pharmacokinetics are briefly outlined, while both physiological and physico-chemical factors that may have an influence on these techniques are critically reviewed. The shortcomings identified for some of the in vitro techniques are discussed in conjunction with novel ways to improve and thereby overcome some challenges. Expert opinion: Although conventional in vitro methods and theories are used as basic guidelines to predict drug absorption, critical evaluations have identified some shortcomings. Advancements in technology have made it possible to investigate and understand the role of physiological and physico-chemical factors in drug delivery more clearly, which can be used to improve and refine the techniques to more closely mimic the in vivo environment.

  18. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
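    As a small example of the kind of Markov reliability model such tools would construct automatically, the sketch below builds the generator matrix for a duplex system with illustrative failure and repair rates and computes the transient probability of system failure:

      import numpy as np
      from scipy.linalg import expm

      # Markov reliability model of a duplex system with failure rate lam per
      # unit and repair rate mu (illustrative rates per hour).
      # States: 0 = both up, 1 = one failed, 2 = system failed (absorbing).
      lam, mu = 1e-3, 1e-1
      Q = np.array([[-2 * lam,     2 * lam,  0.0],
                    [      mu, -(mu + lam),  lam],
                    [     0.0,         0.0,  0.0]])   # generator matrix

      p0 = np.array([1.0, 0.0, 0.0])
      for t in (10.0, 100.0, 1000.0):
          p = p0 @ expm(Q * t)          # transient solution of dp/dt = p Q
          print(f"t={t:7.1f} h  P(system failed) = {p[2]:.3e}")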

  19. Improving Estimates and Forecasts of Lake Carbon Pools and Fluxes Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zwart, J. A.; Hararuk, O.; Prairie, Y.; Solomon, C.; Jones, S.

    2017-12-01

    Lakes are biogeochemical hotspots on the landscape, contributing significantly to the global carbon cycle despite their small areal coverage. Observations and models of lake carbon pools and fluxes are rarely explicitly combined through data assimilation despite significant use of this technique in other fields with great success. Data assimilation adds value to both observations and models by constraining models with observations of the system and by leveraging knowledge of the system formalized by the model to objectively fill information gaps. In this analysis, we highlight the utility of data assimilation in lake carbon cycling research by using the Ensemble Kalman Filter to combine simple lake carbon models with observations of lake carbon pools. We demonstrate the use of data assimilation to improve a model's representation of lake carbon dynamics, to reduce uncertainty in estimates of lake carbon pools and fluxes, and to improve the accuracy of carbon pool size estimates relative to estimates derived from observations alone. Data assimilation techniques should be embraced as valuable tools for lake biogeochemists interested in learning about ecosystem dynamics and forecasting ecosystem states and processes.
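    A single forecast/analysis cycle of the Ensemble Kalman Filter for a scalar lake carbon pool can be sketched in a few lines; the toy carbon balance, rates, and error values below are illustrative placeholders, not results from the study:

      import numpy as np

      # One Ensemble Kalman Filter analysis step for a scalar lake DOC pool.
      rng = np.random.default_rng(42)
      n_ens = 50
      doc = rng.normal(500.0, 50.0, n_ens)        # prior ensemble of DOC (kg C)

      def forecast(x):
          # Toy carbon balance: inflow load minus first-order mineralization.
          return x + 5.0 - 0.02 * x + rng.normal(0.0, 5.0)   # with process noise

      doc = np.array([forecast(x) for x in doc])  # forecast step

      obs, obs_err = 470.0, 10.0                  # observed pool and its std. dev.
      P = np.var(doc, ddof=1)                     # ensemble forecast variance
      K = P / (P + obs_err**2)                    # Kalman gain (scalar state, H = I)
      # Perturbed-observation update keeps the analysis ensemble spread correct.
      doc_an = doc + K * (obs + rng.normal(0, obs_err, n_ens) - doc)
      print(f"prior mean {doc.mean():.1f}, analysis mean {doc_an.mean():.1f}")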

  20. Scientific analysis of satellite ranging data

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    1994-01-01

    A network of satellite laser ranging (SLR) tracking systems with continuously improving accuracies is challenging the modelling capabilities of analysts worldwide. Various data analysis techniques have yielded many advances in the development of orbit, instrument and Earth models. The direct measurement of the distance to the satellite provided by the laser ranges has given us a simple metric which links the results obtained by diverse approaches. Different groups have used SLR data, often in combination with observations from other space geodetic techniques, to improve models of the static geopotential, the solid Earth, ocean tides, and atmospheric drag models for low Earth satellites. Radiation pressure models and other non-conservative forces for satellite orbits above the atmosphere have been developed to exploit the full accuracy of the latest SLR instruments. SLR is the baseline tracking system for the altimeter missions TOPEX/Poseidon, and ERS-1 and will play an important role in providing the reference frame for locating the geocentric position of the ocean surface, in providing an unchanging range standard for altimeter calibration, and for improving the geoid models to separate gravitational from ocean circulation signals seen in the sea surface. However, even with the many improvements in the models used to support the orbital analysis of laser observations, there remain systematic effects which limit the full exploitation of SLR accuracy today.

  1. Statistical Evaluation of Time Series Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.

    1973-01-01

    The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.

  2. Managing distribution changes in time series prediction

    NASA Astrophysics Data System (ADS)

    Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.

    2006-07-01

    When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood-maximization process.

  3. Some Improved Diagnostics for Failure of The Rasch Model.

    ERIC Educational Resources Information Center

    Molenaar, Ivo W.

    1983-01-01

    Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)

  4. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed-interface normal modes, and those based on a combination of free-interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed-interface normal modes is the inability to easily obtain the required information from testing; as a result, constraint-mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced-order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test-verified, reduced-order model to provide the boundary conditions necessary for constraint modes and fixed-interface normal modes. The CMS approach is then used with this test-verified, reduced-order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique, with both numerical and simulated experimental components describing the system and validating the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are present in any test, the measured data are further processed to remove contaminants before use. The final case, using improved data with the reduced-order, test-verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The technique's strengths and weaknesses are discussed.
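    For reference, the fixed-interface reduction that constraint-mode CMS methods build on (commonly known as the Craig-Bampton transformation) can be sketched as follows on a small spring-mass chain with illustrative matrices:

      import numpy as np
      from scipy.linalg import eigh

      # Craig-Bampton reduction of a component: keep boundary DOFs b, reduce
      # interior DOFs i to a few fixed-interface modes.
      def craig_bampton(K, M, b_dofs, n_modes):
          n = K.shape[0]
          i_dofs = [d for d in range(n) if d not in b_dofs]
          Kii, Kib = K[np.ix_(i_dofs, i_dofs)], K[np.ix_(i_dofs, b_dofs)]
          Mii = M[np.ix_(i_dofs, i_dofs)]

          psi = -np.linalg.solve(Kii, Kib)          # static constraint modes
          w2, phi = eigh(Kii, Mii)                  # fixed-interface normal modes
          phi = phi[:, :n_modes]

          # Transformation u = T q, with q = [boundary DOFs, modal coordinates]
          T = np.zeros((n, len(b_dofs) + n_modes))
          T[np.ix_(i_dofs, range(len(b_dofs)))] = psi
          T[np.ix_(b_dofs, range(len(b_dofs)))] = np.eye(len(b_dofs))
          T[np.ix_(i_dofs, range(len(b_dofs), T.shape[1]))] = phi
          return T.T @ K @ T, T.T @ M @ T

      # 5-DOF spring-mass chain with the last DOF as the connection boundary.
      n = 5
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      M = np.eye(n)
      Kr, Mr = craig_bampton(K, M, b_dofs=[n - 1], n_modes=2)
      print("reduced size:", Kr.shape)              # (3, 3): 1 boundary + 2 modes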

  5. The Wide-Field Imaging Interferometry Testbed: Enabling Techniques for High Angular Resolution Astronomy

    NASA Technical Reports Server (NTRS)

    Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.

    2007-01-01

    The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide-field-of-view imaging interferometry, using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have provided simple demonstrations of the methodology already, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.

  6. Into the development of a model to assess beam shaping and polarization control effects on laser cutting

    NASA Astrophysics Data System (ADS)

    Rodrigues, Gonçalo C.; Duflou, Joost R.

    2018-02-01

    This paper offers an in-depth look into beam shaping and polarization control as two of the most promising techniques for improving industrial laser cutting of metal sheets. An assessment model is developed for the study of such effects. It is built upon several modifications to models available in the literature in order to evaluate the potential of a wide range of considered concepts. This includes different kinds of beam shaping (achieved by extra-cavity optical elements or asymmetric diode stacking) and polarization control techniques (linear, cross, radial, azimuthal). A fully mathematical description and solution procedure are provided. Three case studies for direct diode lasers follow, containing both experimental data and parametric studies. In the first case study, linear polarization is analyzed for any given angle between the cutting direction and the electrical field. In the second case, several polarization strategies are compared under similar cut conditions, evaluating, for example, the minimum number of spatial divisions of a segmented polarized laser beam needed to achieve a target performance. A novel strategy, based on a 12-division linear-to-radial polarization converter with an axis misalignment and capable of improving cutting efficiency by more than 60%, is proposed. The last case study reveals different insights into beam shaping techniques, with an example of a beam shape optimization path yielding a 30% improvement in cutting efficiency. The proposed techniques are not limited to this type of laser source, nor is the model dedicated to these specific case studies. Limitations of the model and opportunities are further discussed.

  7. Artificial neural networks in Space Station optimal attitude control

    NASA Astrophysics Data System (ADS)

    Kumar, Renjith R.; Seywald, Hans; Deshpande, Samir M.; Rahman, Zia

    1995-01-01

    Innovative techniques of using "artificial neural networks" (ANN) for improving the performance of the pitch axis attitude control system of Space Station Freedom using control moment gyros (CMGs) are investigated. The first technique uses a feed-forward ANN with multi-layer perceptrons to obtain an on-line controller which improves the performance of the control system via a model following approach. The second technique uses a single layer feed-forward ANN with a modified back propagation scheme to estimate the internal plant variations and the external disturbances separately. These estimates are then used to solve two differential Riccati equations to obtain time varying gains which improve the control system performance in successive orbits.

  8. Applying a Continuous Quality Improvement Model To Assess Institutional Effectiveness.

    ERIC Educational Resources Information Center

    Roberts, Keith

    This handbook outlines techniques and processes for improving institutional effectiveness and ensuring continuous quality improvement, based on strategic planning activities at Wisconsin's Milwaukee Area Technical College (MATC). First, institutional effectiveness is defined and 17 core indicators of effectiveness developed by the Wisconsin…

  9. Matrix Synthesis and Characterization

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The role of NASA in the area of composite material synthesis; evaluation techniques; prediction analysis techniques; solvent-resistant tough composite matrix; resistance to paint strippers; acceptable processing temperature and pressure for thermoplastics; and the role of computer modeling and fiber interface improvement were discussed.

  10. a Single-Exposure Dual-Energy Computed Radiography Technique for Improved Nodule Detection and Classification in Chest Imaging

    NASA Astrophysics Data System (ADS)

    Zink, Frank Edward

    The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone -selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single-exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
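    The material-selective image formation that dual-energy methods rely on reduces, in its simplest form, to a weighted log subtraction of the low- and high-energy images. A toy sketch with illustrative attenuation coefficients and thickness maps:

      import numpy as np

      # Dual-energy material decomposition by weighted log subtraction: with low-
      # and high-kVp images, a weight w is chosen to cancel one material.
      mu_bone = np.array([0.60, 0.30])     # [low-kV, high-kV] attenuation, 1/cm
      mu_soft = np.array([0.25, 0.20])

      t_soft = np.full((4, 4), 10.0)       # soft-tissue thickness map, cm
      t_bone = np.zeros((4, 4)); t_bone[1:3, 1:3] = 1.0   # "calcified nodule"

      # Simulated log-intensity images for the two beam energies.
      low  = mu_soft[0] * t_soft + mu_bone[0] * t_bone
      high = mu_soft[1] * t_soft + mu_bone[1] * t_bone

      w = mu_soft[0] / mu_soft[1]          # weight that cancels soft tissue
      bone_image = low - w * high          # proportional to bone thickness only
      print(np.round(bone_image, 2))       # nonzero only where the nodule is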

  11. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
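    A compact sketch of the baseline comparison described here, implementing LMS and NLMS updates for an ASI problem with a synthetic unknown FIR system (filter length, step sizes, and noise level are illustrative):

      import numpy as np

      # LMS vs. normalized LMS for identifying an unknown FIR system from
      # noisy input/output data; the "unknown" system here is synthetic.
      rng = np.random.default_rng(0)
      h_true = np.array([0.8, -0.4, 0.2, 0.1])          # unknown system
      N, M = 5000, len(h_true)
      x = rng.normal(size=N)
      d = np.convolve(x, h_true)[:N] + rng.normal(0, 0.01, N)  # desired signal

      def adapt(mu, normalized=False, eps=1e-6):
          w = np.zeros(M)
          for n in range(M, N):
              u = x[n - M + 1:n + 1][::-1]              # regressor (newest first)
              e = d[n] - w @ u                          # a priori error
              step = mu / (eps + u @ u) if normalized else mu
              w = w + step * e * u                      # coefficient update
          return w

      print("LMS  misalignment:", np.linalg.norm(adapt(0.01) - h_true))
      print("NLMS misalignment:", np.linalg.norm(adapt(0.5, True) - h_true))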

  12. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  13. Evaluation of origin-destination matrix estimation techniques to support aspects of traffic modeling.

    DOT National Transportation Integrated Search

    2014-05-01

    Travel demand forecasting models are used to predict future traffic volumes to evaluate roadway improvement alternatives. Each of the metropolitan planning organizations (MPO) in Alabama maintains a travel demand model to support planning efforts...

  14. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
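    The three sensitivity techniques named here can be compared on a small linear stand-in for the discretized model, K(p)u = f with cost J = c^T u; the adjoint solve replaces one extra forward solve per design variable with a single transposed solve:

      import numpy as np

      # Sensitivity dJ/dp of J = c^T u with K(p) u = f, comparing finite
      # differences with the adjoint method; K, c, f are small illustrative
      # stand-ins for a discretized fuel-cell model.
      n = 4
      K0 = np.diag([2.0, 3.0, 4.0, 5.0]); dKdp = np.eye(n)   # K(p) = K0 + p*I
      f = np.ones(n); c = np.arange(1.0, n + 1)
      p = 0.5

      def solve_J(p):
          return c @ np.linalg.solve(K0 + p * dKdp, f)

      # 1) Finite difference: one extra solve per design variable.
      eps = 1e-6
      dJ_fd = (solve_J(p + eps) - solve_J(p)) / eps

      # 2) Adjoint: one solve of K^T lam = c covers all design variables.
      K = K0 + p * dKdp
      u = np.linalg.solve(K, f)
      lam = np.linalg.solve(K.T, c)
      dJ_adj = -lam @ (dKdp @ u)        # dJ/dp = -lam^T (dK/dp) u

      print(dJ_fd, dJ_adj)              # should agree to ~1e-6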

  15. Modeling and managing risk early in software development

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Thomas, William M.; Hetmanski, Christopher J.

    1993-01-01

    In order to improve the quality of the software development process, we need to be able to build empirical multivariate models based on data collectable early in the software process. These models need to be both useful for prediction and easy to interpret, so that remedial actions may be taken in order to control and optimize the development process. We present an automated modeling technique which can be used as an alternative to regression techniques. We show how it can be used to facilitate the identification and aid the interpretation of the significant trends which characterize 'high risk' components in several Ada systems. Finally, we evaluate the effectiveness of our technique based on a comparison with logistic regression based models.

  16. HSR Model Deformation Measurements from Subsonic to Supersonic Speeds

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Erickson, G. E.; Goodman, W. L.; Fleming, G. A.

    1999-01-01

    This paper describes the video model deformation technique (VMD) used at five NASA facilities and the projection moire interferometry (PMI) technique used at two NASA facilities. Comparisons between the two techniques for model deformation measurements are provided. Facilities at NASA-Ames and NASA-Langley where deformation measurements have been made are presented. Examples of HSR model deformation measurements from the Langley Unitary Wind Tunnel, Langley 16-foot Transonic Wind Tunnel, and the Ames 12-foot Pressure Tunnel are presented. A study to improve and develop new targeting schemes at the National Transonic Facility is also described. The consideration of milled targets for future HSR models is recommended when deformation measurements are expected to be required. Finally, future development work for VMD and PMI is addressed.

  17. Toward an Innovative, Basic Program Model for the Improvement of Professional Instruction in Dental Education: A Review of the Literature.

    ERIC Educational Resources Information Center

    Wulf, Kathleen M.; And Others

    1980-01-01

    An analysis of the massive amount of literature pertaining to the improvement of professional instruction in dental education resulted in the formation of a comprehensive model of 10 categories, including Delphi technique; systems approach; agencies; workshops; multi-media, self-instruction; evaluation paradigms, measurement, courses, and…

  18. Effective Report Preparation: Streamlining the Reporting Process. AIR 1999 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Dalrymple, Margaret; Wang, Mindy; Frost, Jacquelyn

    This paper describes the processes and techniques used to improve and streamline the standard student reports used at Purdue University (Indiana). Various models for analyzing reporting processes are described, especially the model used in the study, the Shewart or Deming Cycle, a method that aids in continuous analysis and improvement through a…

  19. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment, and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique, and SSM observation error considered, was evaluated. To this end, Advanced SCATterometer (ASCAT) SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km2). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching, and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used to evaluate the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and in efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that there is potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualization schemes, but that this requires a careful evaluation of the observation error and of the re-scaling technique set-up utilized.
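    The three re-scaling techniques evaluated in the study can be sketched as follows on synthetic model and satellite SSM series (the bias structure and numbers are illustrative):

      import numpy as np

      # Map a satellite SSM series onto a model SSM climatology with the three
      # re-scaling techniques compared in the study; series are synthetic.
      rng = np.random.default_rng(3)
      model = np.clip(rng.normal(0.25, 0.06, 2000), 0, 1)   # model SSM
      sat = 0.5 * model + 0.1 + rng.normal(0, 0.03, 2000)   # biased satellite SSM

      # 1) Linear least-squares regression of model on satellite values.
      a, b = np.polyfit(sat, model, 1)
      lin = a * sat + b

      # 2) Variance matching: match mean and standard deviation.
      var = (sat - sat.mean()) / sat.std() * model.std() + model.mean()

      # 3) CDF matching: map each satellite quantile onto the model quantile.
      order = np.argsort(sat)
      cdfm = np.empty_like(sat)
      cdfm[order] = np.sort(model)

      for name, s in [("linear", lin), ("variance", var), ("CDF", cdfm)]:
          print(f"{name:8s} RMSD vs model: {np.sqrt(np.mean((s - model)**2)):.4f}")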

  20. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory due to its heat capacity holds great potential skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble, and evaluating the whole ensemble through its ensemble average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure of applying the average during the model run, called the ensemble dispersion filter, results in more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead.
Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.

  1. Burn-injured tissue detection for debridement surgery through the combination of non-invasive optical imaging techniques.

    PubMed

    Heredia-Juesas, Juan; Thatcher, Jeffrey E; Lu, Yang; Squiers, John J; King, Darlene; Fan, Wensheng; DiMaio, J Michael; Martinez-Lorenzo, Jose A

    2018-04-01

    The process of burn debridement is a challenging technique requiring significant skills to identify the regions that need excision and their appropriate excision depths. In order to assist surgeons, a machine learning tool is being developed to provide a quantitative assessment of burn-injured tissue. This paper presents three non-invasive optical imaging techniques capable of distinguishing four kinds of tissue (healthy skin, viable wound bed, shallow burn, and deep burn) during serial burn debridement in a porcine model. All combinations of these three techniques have been studied through a k-fold cross-validation method. In terms of global performance, the combination of all three techniques significantly improves the classification accuracy with respect to just one technique, from 0.42 up to more than 0.76. Furthermore, a non-linear spatial filtering based on the mode of a small neighborhood has been applied as a post-processing technique, in order to improve the performance of the classification.
    Using this technique, the global accuracy reaches a value close to 0.78 and, for some particular tissues and combinations of techniques, the accuracy improves by 13%.

  2. Computational Modeling and Neuroimaging Techniques for Targeting during Deep Brain Stimulation

    PubMed Central

    Sweet, Jennifer A.; Pace, Jonathan; Girgis, Fady; Miller, Jonathan P.

    2016-01-01

    Accurate surgical localization of the varied targets for deep brain stimulation (DBS) is a process undergoing constant evolution, with increasingly sophisticated techniques to allow for highly precise targeting. However, despite the fastidious placement of electrodes into specific structures within the brain, there is increasing evidence to suggest that the clinical effects of DBS are likely due to the activation of widespread neuronal networks directly and indirectly influenced by the stimulation of a given target. Selective activation of these complex and inter-connected pathways may further improve the outcomes of currently treated diseases by targeting specific fiber tracts responsible for a particular symptom in a patient-specific manner. Moreover, the delivery of such focused stimulation may aid in the discovery of new targets for electrical stimulation to treat additional neurological, psychiatric, and even cognitive disorders. As such, advancements in surgical targeting, computational modeling, engineering designs, and neuroimaging techniques play a critical role in this process. This article reviews the progress of these applications, discussing the importance of target localization for DBS, and the role of computational modeling and novel neuroimaging in improving our understanding of the pathophysiology of diseases, thus paving the way for improved selective target localization using DBS. PMID:27445709

  3. Sensor Web Dynamic Measurement Techniques and Adaptive Observing Strategies

    NASA Technical Reports Server (NTRS)

    Talabac, Stephen J.

    2004-01-01

    Sensor Web observing systems may have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable environmental features and events. This improvement will come about by integrating novel data collection techniques, new or improved instruments, emerging communications technologies and protocols, sensor mark-up languages, and interoperable planning and scheduling systems. In contrast to today's observing systems, "event-driven" sensor webs will synthesize real- or near-real-time measurements and information from other platforms and then react by reconfiguring the platforms and instruments to invoke new measurement modes and adaptive observation strategies.
Similarly, "model-driven" sensor webs will utilize environmental prediction models to initiate targeted sensor measurements or to use a new observing strategy. The sensor web concept contrasts with today's data collection techniques and observing system operations concepts where independent measurements are made by remote sensing and in situ platforms that do not share, and therefore cannot act upon, potentially useful complementary sensor measurement data and platform state information. This presentation describes NASA's view of event-driven and model-driven Sensor Webs and highlights several research and development activities at the Goddard Space Flight Center.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..18.6533L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..18.6533L"><span>ASCAT soil moisture data assimilation through the Ensemble Kalman Filter for improving streamflow simulation in Mediterranean catchments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel</p> <p>2016-04-01</p> <p>Assimilation of Surface Soil Moisture (SSM) observations obtained from remote sensing techniques have been shown to improve streamflow prediction at different time scales of hydrological modeling. Different sensors and methods have been tested for their application in SSM estimation, especially in the microwave region of the electromagnetic spectrum. The available observation devices include passive microwave sensors such as the Advanced Microwave Scanning Radiometer - Earth Observation System (AMSR-E) onboard the Aqua satellite and the Soil Moisture and Ocean Salinity (SMOS) mission. On the other hand, active microwave systems include Scatterometers (SCAT) onboard the European Remote Sensing satellites (ERS-1/2) and the Advanced Scatterometer (ASCAT) onboard MetOp-A satellite. Data assimilation (DA) include different techniques that have been applied in hydrology and other fields for decades. These techniques include, among others, Kalman Filtering (KF), Variational Assimilation or Particle Filtering. From the initial KF method, different techniques were developed to suit its application to different systems. The Ensemble Kalman Filter (EnKF), extensively applied in hydrological modeling improvement, shows its capability to deal with nonlinear model dynamics without linearizing model equations, as its main advantage. The objective of this study was to investigate whether data assimilation of SSM ASCAT observations, through the EnKF method, could improve streamflow simulation of mediterranean catchments with TOPLATS hydrological complex model. The DA technique was programmed in FORTRAN, and applied to hourly simulations of TOPLATS catchment model. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) was applied on its lumped version for two mediterranean catchments of similar size, located in northern Spain (Arga, 741 km2) and central Italy (Nestore, 720 km2). The model performs a separated computation of energy and water balances. In those balances, the soil is divided into two layers, the upper Surface Zone (SZ), and the deeper Transmission Zone (TZ). In this study, the SZ depth was fixed to 5 cm, for adequate assimilation of observed data. 
Available data was distributed as follows: first, the model was calibrated for the 2001-2007 period; then, the 2007-2010 period was used for satellite data rescaling purposes; finally, data assimilation was applied during the validation (2010-2013) period. Application of the EnKF required the following steps: 1) rescaling of satellite data, 2) transformation of the rescaled data into the Soil Water Index (SWI) through a moving average filter, where a calibrated value of T = 9 was applied, 3) generation of a 50-member ensemble through perturbation of inputs (rainfall and temperature) and three selected parameters, 4) validation of the ensemble through compliance with two criteria based on the ensemble's spread, mean square error, and skill, and 5) Kalman gain calculation. A comparison of three satellite data rescaling techniques was also performed: 1) Cumulative Distribution Function (CDF) matching, 2) variance matching, and 3) linear least squares regression. Results obtained in this study showed slight improvements of hourly Nash-Sutcliffe Efficiency (NSE) in both catchments with the different rescaling methods evaluated. Larger improvements were found in terms of seasonal simulated volume error reduction.
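    Note: two of the preprocessing steps named in the record above have compact standard forms. The Python sketch below is illustrative only (the study's implementation was in FORTRAN): CDF matching is shown as empirical quantile mapping, and the SWI transformation is shown as the widely used recursive exponential filter with the characteristic time T = 9 taken from the abstract. The abstract itself calls that step a moving average filter, so the exact formulation here is an assumption.

        import numpy as np

        def cdf_match(sat, model):
            """Rescale satellite SSM into the model climatology by quantile mapping."""
            ranks = np.argsort(np.argsort(sat))       # empirical rank of each observation
            quantiles = ranks / (len(sat) - 1.0)
            return np.quantile(model, quantiles)      # map ranks onto the model CDF

        def soil_water_index(ssm, t, T=9.0):
            """Recursive exponential filter propagating surface soil moisture to SWI."""
            swi = np.empty_like(ssm)
            swi[0], gain = ssm[0], 1.0
            for n in range(1, len(ssm)):
                gain = gain / (gain + np.exp(-(t[n] - t[n - 1]) / T))
                swi[n] = swi[n - 1] + gain * (ssm[n] - swi[n - 1])
            return swi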
  125. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification, including model structure determination, parameter estimation, and model validation, are addressed with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results, and model adequacy is inferred from corroborating results. An extension is offered to this conventional approach in which more general model parameter estimates and their standard errors are compared.

  126. [The methods of assessment of health risk from exposure to radon and radon daughters]

    PubMed

    Demin, V F; Zhukovskiy, M V; Kiselev, S M

    2014-01-01

    A critical analysis of existing dose-effect relationship (RDE) models for the effect of radon exposure on human health has been performed, concluding that improvement of these models is both necessary and possible. A new, improved version of the RDE has been developed. A technique for assessing the human health risk of exposure to radon is described, including a method for estimating radon exposure doses, the improved RDE model, and a proper risk assessment methodology. The methodology is proposed for use in the territory of Russia.

  127. CRAC2 model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences), which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

  128. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimates to produce an improved set of model parameters.
The solution cycle is repeated until some predetermined criterion is satisfied.
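    Note: the iteration described in the record above (linearize the exponential model via a Taylor series, solve the linear least squares problem for a correction, apply it, and repeat until a stopping criterion is met) can be sketched as follows. This is an illustrative Python fragment, not the original implementation; the model form y = a*exp(b*t) and the tolerance are assumptions, and the initial nominal estimates a0, b0 would come from the linear curve fit of the logarithmic data mentioned in the abstract.

        import numpy as np

        def fit_exponential(t, y, a0, b0, tol=1e-8, max_iter=50):
            """Iterative least squares fit of y = a*exp(b*t) by Taylor-series linearization."""
            a, b = a0, b0
            for _ in range(max_iter):
                f = a * np.exp(b * t)
                J = np.column_stack((np.exp(b * t), a * t * np.exp(b * t)))  # df/da, df/db
                da, db = np.linalg.lstsq(J, y - f, rcond=None)[0]            # correction step
                a, b = a + da, b + db
                if abs(da) + abs(db) < tol:        # predetermined convergence criterion
                    break
            return a, b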
  129. Data Assimilation to Extract Soil Moisture Information From SMAP Observations

    NASA Technical Reports Server (NTRS)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-01-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, they can be used to reduce the need for localized bias correction techniques typically implemented in data assimilation (DA) systems, which tend to remove some of the independent information provided by satellite observations. Here, we use a statistical neural network (NN) algorithm to retrieve SMAP (Soil Moisture Active Passive) surface soil moisture estimates in the climatology of the NASA Catchment land surface model. Assimilating these estimates without additional bias correction is found to significantly reduce the model error and increase the temporal correlation against SMAP CalVal in situ observations over the contiguous United States. A comparison with assimilation experiments using traditional bias correction techniques shows that the NN approach better retains the independent information provided by the SMAP observations and thus leads to larger model skill improvements during the assimilation. A comparison with the SMAP Level 4 product shows that the NN approach is able to provide comparable skill improvements and thus represents a viable assimilation approach.

  130. Color regeneration from reflective color sensor using an artificial intelligent technique

    PubMed

    Saracoglu, Ömer Galip; Altural, Hayriye

    2010-01-01

    A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligent models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages.

  131. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed Central

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.

    2015-01-01

    Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378

  132. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R; Lorenzetti, Silvio

    2015-01-01

    Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation.
Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.

  133. Comparative evaluation of differential laser-induced perturbation spectroscopy as a technique to discriminate emerging skin pathology

    NASA Astrophysics Data System (ADS)

    Kozikowski, Raymond T.; Smith, Sarah E.; Lee, Jennifer A.; Castleman, William L.; Sorg, Brian S.; Hahn, David W.

    2012-06-01

    Fluorescence spectroscopy has been widely investigated as a technique for identifying pathological tissue; however, unrelated subject-to-subject variations in spectra complicate data analysis and interpretation. We describe and evaluate a new biosensing technique, differential laser-induced perturbation spectroscopy (DLIPS), based on deep ultraviolet (UV) photochemical perturbation in combination with difference spectroscopy. This technique combines sequential fluorescence probing (pre- and post-perturbation) with sub-ablative UV perturbation and difference spectroscopy to provide a new spectral dimension, facilitating two improvements over fluorescence spectroscopy. First, the differential technique eliminates significant variations in absolute fluorescence response within subject populations. Second, UV perturbations alter the extracellular matrix (ECM), directly coupling the DLIPS response to the biological structure. Improved biosensing with DLIPS is demonstrated in vivo in a murine model of chemically induced skin lesion development. Component loading analysis of the data indicates that the DLIPS technique couples to structural proteins in the ECM. Analysis of variance shows that DLIPS has a significant response to emerging pathology as opposed to other population differences. An optimal likelihood ratio classifier for the DLIPS dataset shows that this technique holds promise for improved diagnosis of epithelial pathology. Results further indicate that DLIPS may improve diagnosis of tissue by augmenting fluorescence spectra
(i.e., orthogonal sensing).

  134. Improving operating room efficiency by applying bin-packing and portfolio techniques to surgical case scheduling

    PubMed

    Van Houdenhoven, Mark; van Oostrum, Jeroen M; Hans, Erwin W; Wullink, Gerhard; Kazemier, Geert

    2007-09-01

    An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement from lowering organizational barriers and applying advanced mathematical techniques. We applied advanced mathematical algorithms in combination with scenarios that model the relaxation of various organizational barriers, using prospectively collected data. The setting is the main inpatient OR department of a university hospital, which sets its surgical case schedules 2 wk in advance using a block planning method. The main outcome measures are the number of freed OR blocks and OR utilization. Lowering organizational barriers and applying mathematical algorithms can yield a 4.5% point increase in OR utilization (95% confidence interval 4.0%-5.0%), obtained by reducing the total required OR time. Efficient OR departments can thus further improve their efficiency. The paper shows that a radical cultural change, comprising the use of mathematical algorithms and the lowering of organizational barriers, improves OR utilization.
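    Note: the record above reports the gains from such algorithms without giving them. Purely as an illustration of the bin-packing idea behind fitting surgical cases into OR blocks, here is a minimal first-fit-decreasing sketch in Python; the case durations and block length are hypothetical, and the actual study used more advanced algorithms together with portfolio techniques for planned slack.

        def pack_cases(case_minutes, block_minutes=480):
            """First-fit-decreasing packing of surgical case durations into OR blocks."""
            blocks = []                                        # each block is a list of cases
            for case in sorted(case_minutes, reverse=True):    # longest case first
                for block in blocks:
                    if sum(block) + case <= block_minutes:
                        block.append(case)                     # fits into an open block
                        break
                else:
                    blocks.append([case])                      # otherwise open a new block
            return blocks

        # Example: pack_cases([190, 120, 95, 60, 45, 30]) returns two blocks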
  135. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    NASA Technical Reports Server (NTRS)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis), and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
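    Note: the Modal Assurance Criteria mentioned in the record above have a standard closed form. As a minimal illustration (not the authors' tool), the Python fragment below computes the MAC matrix between the mode shapes of two finite element models and pairs each mode with its best match, which is the core of mode tracking; the adaptive tracking and energy-based indicators are beyond this sketch.

        import numpy as np

        def mac(Phi_a, Phi_b):
            """Modal Assurance Criterion matrix between mode-shape columns of two models."""
            numerator = np.abs(Phi_a.T @ Phi_b) ** 2
            denominator = np.outer((Phi_a ** 2).sum(axis=0), (Phi_b ** 2).sum(axis=0))
            return numerator / denominator

        def track(Phi_a, Phi_b):
            """Pair every mode of model A with the best-MAC mode of model B."""
            m = mac(Phi_a, Phi_b)
            return [(i, int(np.argmax(row))) for i, row in enumerate(m)]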
  136. Satellite-enhanced dynamical downscaling for the analysis of extreme events

    NASA Astrophysics Data System (ADS)

    Nunes, Ana M. B.

    2016-09-01

    The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In that case, boundary-forcing techniques can be useful to keep the large-scale features from the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve dynamics, downscaling via regional modeling still aims for better representation of physical processes. With the purpose of improving dynamics and physical processes in regional downscaling of global reanalysis, the Regional Spectral Model, originally developed at the National Centers for Environmental Prediction, employs a newly reformulated scale-selective bias correction, together with the 3-hourly assimilation of satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. The two-scheme technique for the dynamical downscaling of global reanalysis can be applied in analyses of environmental disasters and risk assessment, with hourly outputs and a resolution of about 25 km. Here the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin, through comparisons with global reanalyses and satellite products available in ocean areas.

  137. Comparison of stream invertebrate response models for bioassessment metric

    USGS Publications Warehouse

    Waite, Ian R.; Kennen, Jonathan G.; May, Jason T.; Brown, Larry R.; Cuffney, Thomas F.; Jones, Kimberly A.; Orlando, James L.

    2012-01-01

    We aggregated invertebrate data from various sources to assemble data for modeling in two ecoregions in Oregon and one in California. Our goal was to compare the performance of models developed using multiple linear regression (MLR) techniques with models developed using three relatively new techniques: classification and regression trees (CART), random forest (RF), and boosted regression trees (BRT). We used tolerance of taxa based on richness (RICHTOL) and the ratio of observed to expected taxa (O/E) as response variables and land use/land cover as explanatory variables. Responses were generally linear; therefore, there was little improvement to the MLR models when compared to models using CART and RF. In general, the four modeling techniques (MLR, CART, RF, and BRT) consistently selected the same primary explanatory variables for each region. However, results from the BRT models showed significant improvement over the MLR models for each region, with increases in R2 of 0.09 to 0.20. The O/E metric derived from models specifically calibrated for Oregon consistently had lower R2 values than RICHTOL for the two regions tested: modeled O/E R2 values were between 0.06 and 0.10 lower for each of the four modeling methods applied in the Willamette Valley, and between 0.19 and 0.36 points lower for the Blue Mountains. As a result, BRT models may indeed represent a good alternative to MLR for modeling species distribution relative to environmental variables.
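    Note: boosted regression trees of the kind compared in the record above are available in standard libraries. The sketch below is a generic stand-in (scikit-learn's gradient boosting on synthetic data), not the study's calibrated models; the predictors, response, and settings are all hypothetical.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor   # a BRT implementation

        rng = np.random.default_rng(0)
        X = rng.random((300, 4))                 # hypothetical land use/land cover predictors
        y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(300)   # stand-in for RICHTOL

        brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=2)
        brt.fit(X, y)
        print(round(brt.score(X, y), 2))         # R2 of the fitted model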
  138. The use of self-modeling to improve the swimming performance of spina bifida children

    PubMed Central

    Dowrick, P W; Dove, C

    1980-01-01

    The use of edited videotape replay (which showed only "positive" behaviors) to improve the water skills of three spina bifida children, aged 5 to 10 years, was examined. A multiple baseline across subjects design was used, and behavioral changes were observed to occur in close association with intervention. One child was given successive reapplications of videotaped self-modeling with continuing improvements. It appears that a useful practical technique has been developed. PMID:6988381

  139. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with Systematic Resampling successfully decreases the model estimation error by 23%.
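    Note: among the filter variants compared above, the Systematic Resampling step of the Particle filter has a compact standard form. The fragment below is an illustrative sketch, not the study's code; the weights are assumed already normalized to sum to one.

        import numpy as np

        def systematic_resample(weights, rng=np.random.default_rng()):
            """Return particle indices drawn by systematic resampling."""
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n     # one uniform offset, n strata
            return np.searchsorted(np.cumsum(weights), positions)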
  140. Cockpit System Situational Awareness Modeling Tool

    NASA Technical Reports Server (NTRS)

    Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara

    2004-01-01

    This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process, providing a cost-effective complement to traditional pilot-in-the-loop experiments and data collection techniques.

  141. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data cases.
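    Note: element (1) of the methodology in the record above, a Kalman filter smoother, can be illustrated for a scalar state as a forward Kalman filter pass followed by a Rauch-Tung-Striebel backward pass. The system constants here are hypothetical, and the fragment is only a sketch of the idea, not the report's algorithm.

        import numpy as np

        def kalman_rts(z, a=0.95, q=0.01, r=0.1):
            """Scalar Kalman filter (forward) plus RTS smoother (backward) for data z."""
            n = len(z)
            x, P = np.zeros(n), np.zeros(n)
            xp, Pp = 0.0, 1.0                            # initial prediction
            for k in range(n):
                K = Pp / (Pp + r)                        # Kalman gain
                x[k], P[k] = xp + K * (z[k] - xp), (1.0 - K) * Pp
                xp, Pp = a * x[k], a * a * P[k] + q      # predict the next step
            for k in range(n - 2, -1, -1):               # backward RTS smoothing pass
                C = P[k] * a / (a * a * P[k] + q)
                x[k] += C * (x[k + 1] - a * x[k])
            return x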
  142. Function modeling improves the efficiency of spatial modeling using big data from remote sensing

    Treesearch

    John Hogland; Nathaniel Anderson

    2017-01-01

    Spatial modeling is an integral component of most geographic information systems (GISs). However, conventional GIS modeling techniques can require substantial processing time and storage space and have limited statistical and machine learning functionality. To address these limitations, many have parallelized spatial models using multiple coding libraries and have...

  143. A novel pretreatment method combining sealing technique with direct injection technique applied for improving biosafety

    PubMed

    Wang, Xinyu; Gao, Jing-Lin; Du, Chaohui; An, Jing; Li, MengJiao; Ma, Haiyan; Zhang, Lina; Jiang, Ye

    2017-01-01

    Interest in the biosafety risks of clinical bioanalysis is growing, and a safe, simple, and effective preparation method is urgently needed. To improve the biosafety of clinical analysis, we used the antiviral drugs adefovir and tenofovir as model drugs and developed a safe pretreatment method combining a sealing technique with a direct injection technique. The inter- and intraday precision (RSD %) of the method were <4%, and the extraction recoveries ranged from 99.4 to 100.7%. The results also showed that a standard solution could be used to prepare the calibration curve instead of spiked plasma, yielding more accurate results. Compared with traditional methods, the novel method not only improved the biosafety of the pretreatment significantly, but also achieved several advantages including higher precision, favorable sensitivity, and satisfactory recovery. With these highly practical and desirable characteristics, the novel method may become a feasible platform in bioanalysis.

  144. Quantitative model validation of manipulative robot systems

    NASA Astrophysics Data System (ADS)

    Kartowisastro, Iman Herwidiana

    This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the commonly used visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, where all links are assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. The conventional feedback control system is used in developing the model. The system behavior under parameter changes is investigated, as some parameters are redundant.
This work is important so that the most important parameters to be distorted can be selected, leading to a new term called the fundamental parameters. The transfer function approach was chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and a better understanding of the model properties. After several improvements, the fidelity criterion was almost satisfied. Although the fidelity criterion remains slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system due to the high nonlinearity inherent in the robot manipulator.

  145. Robotic and endoscopic transoral thyroidectomy: feasibility and description of the technique in the cadaveric model

    PubMed

    Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad; Berber, Eren

    2017-12-01

    Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. The transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Transoral thyroidectomy through the vestibular approach was performed at two institutions on cadaveric models. The procedure was performed endoscopically at one institution, while the robotic technique was utilized at the other. Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach.
More clinical studies are required to elucidate its full utility.

  146. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping

    2015-09-15

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors' goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.

  147. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time

    PubMed

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values and to use a numerical clustering technique for finding possible causes of undesirable TAT values.
The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used to build the TAT indicator predictor and to improve corrective maintenance task efficiency in a clinical engineering department (CED). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient), and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt), and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
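    Note: from the coefficients reported in the record above, the fitted predictor has the form TAT = b0 + 0.415*CE_rt + 0.734*Stock_rt + 0.21*priority + 0.06*service_time. The intercept b0 and the variables' units are not given in the abstract, so the Python fragment below is purely illustrative, with a placeholder intercept of zero.

        def predict_tat(ce_rt, stock_rt, priority, service_time, intercept=0.0):
            """Illustrative TAT estimate from the reported regression coefficients."""
            return (intercept + 0.415 * ce_rt + 0.734 * stock_rt
                    + 0.21 * priority + 0.06 * service_time)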
  148. A systematic review finds methodological improvements necessary for prognostic models in determining traumatic brain injury outcomes

    PubMed

    Mushkudiani, Nino A; Hukkelhoven, Chantal W P M; Hernández, Adrián V; Murray, Gordon D; Choi, Sung C; Maas, Andrew I R; Steyerberg, Ewout W

    2008-04-01

    To describe the modeling techniques used for early prediction of outcome in traumatic brain injury (TBI) and to identify aspects for potential improvement. We reviewed key methodological aspects of studies published between 1970 and 2005 that proposed a prognostic model for the Glasgow Outcome Scale of TBI based on admission data. We included 31 papers. Twenty-four were single-center studies, and 22 reported on fewer than 500 patients. The median number of initially considered predictors was eight, and on average five of these were selected for the prognostic model, generally including age, Glasgow Coma Score (or only motor score), and pupillary reactivity. The most common statistical technique was logistic regression with stepwise selection of predictors. Model performance was often quantified by accuracy rate rather than by more appropriate measures such as the area under the receiver-operating characteristic curve. Model validity was addressed in 15 studies, but mostly with a simple split-sample approach, and external validation was performed in only four studies. Although most models agree on the three most important predictors, many were developed on small sample sizes within single centers and hence lack generalizability. Modeling strategies have to be improved and should include external validation.

  149. Artificial intelligence techniques for modeling database user behavior

    NASA Technical Reports Server (NTRS)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization, and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  150. No differences in subjective knee function between surgical techniques of anterior cruciate ligament reconstruction at 2-year follow-up: a cohort study from the Swedish National Knee Ligament Register

    PubMed

    Hamrin Senorski, Eric; Sundemo, David; Murawski, Christopher D; Alentorn-Geli, Eduard; Musahl, Volker; Fu, Freddie; Desai, Neel; Stålman, Anders; Samuelsson, Kristian

    2017-12-01

    The purpose of this study was to investigate how different techniques of single-bundle anterior cruciate ligament (ACL) reconstruction affect subjective knee function, evaluated with the Knee injury and Osteoarthritis Outcome Score (KOOS), 2 years after surgery. It was hypothesized that the surgical techniques of single-bundle ACL reconstruction would result in equivalent subjective knee function 2 years after surgery. This cohort study was based on data from the Swedish National Knee Ligament Register for the 10-year period from 1 January 2005 through 31 December 2014. Patients who underwent primary single-bundle ACL reconstruction with hamstring tendon autograft were included. Details on surgical technique were collected using a web-based questionnaire comprising essential AARSC items, including utilization of accessory medial portal drilling, anatomic tunnel placement, and visualization of insertion sites and landmarks. A repeated measures ANOVA and an additional linear mixed model analysis were used to investigate the effect of surgical technique on the KOOS 4 from the pre-operative period to the 2-year follow-up. A total of 13,636 patients who had undergone single-bundle ACL reconstruction comprised the study group for this analysis. A repeated measures ANOVA determined that mean subjective knee function differed between the pre-operative time period and the 2-year follow-up (p < 0.001). No differences were found with respect to the interaction between the KOOS 4 and surgical technique or gender.
Additionally, the linear mixed model, adjusted for age at reconstruction, gender, and concomitant injuries, showed no difference between surgical techniques in KOOS 4 improvement from baseline to the 2-year follow-up. However, KOOS 4 improved significantly for all surgical techniques of single-bundle ACL reconstruction (p < 0.001); the largest improvement was seen between the pre-operative time period and the 1-year follow-up. Surgical techniques of primary single-bundle ACL reconstruction did not demonstrate differences in the improvement in baseline subjective knee function as measured with the KOOS 4 during the first 2 years after surgery. However, subjective knee function improved from the pre-operative baseline to the 2-year follow-up independently of surgical technique.

  151. Function modeling: improved raster analysis through delayed reading and function raster datasets

    Treesearch

    John S. Hogland; Nathaniel M. Anderson; J. Greg Jones

    2013-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that streamlines the...

  152. An improved water-filled impedance tube

    PubMed

    Wilson, Preston S; Roy, Ronald A; Carey, William M

    2003-06-01

    A water-filled impedance tube capable of improved measurement accuracy and precision is reported. The measurement instrument employs a variation of the standardized two-sensor transfer function technique. Performance improvements were achieved through minimization of elastic waveguide effects and through the use of sound-hard wall-mounted acoustic pressure sensors. Acoustic propagation inside the water-filled impedance tube was found to be well described by a plane wave model, which is a necessary condition for the technique. Measurements of the impedance of a pressure-release terminated transmission line, and of the reflection coefficient from a water/air interface, were used to verify the system.
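    Note: the two-sensor transfer function technique used in the record above has a standard formulation (e.g., in ASTM E1050). Given the measured transfer function H12 = p2/p1 between the two wall-mounted sensors, the normal-incidence reflection coefficient follows as below. Sensor-position conventions vary between standards, so this Python fragment is a hedged sketch rather than the authors' procedure; x1 is the distance from the sample to the farther sensor and s is the sensor spacing.

        import numpy as np

        def reflection_coefficient(H12, f, x1, s, c=1481.0):
            """Two-sensor transfer-function estimate of the reflection coefficient.
            c defaults to a nominal sound speed of water in m/s."""
            k = 2.0 * np.pi * f / c                  # wavenumber (lossless plane wave assumed)
            return ((H12 - np.exp(-1j * k * s)) /
                    (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1))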
  153. Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates

    NASA Technical Reports Server (NTRS)

    Weimer, D. R.

    2004-01-01

    Improved techniques have been developed for the empirical modeling of high-latitude electric potentials and magnetic field-aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Nonlinear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The model outputs are shown for various input conditions and compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecast tools.

  154. Improvements of the surgical technique on the established mouse model of orthotopic single lung transplantation

    PubMed

    Zheng, Zhikun; Wang, Jianjun; Huang, Xia; Jiang, Ke; Nie, Jun; Qiao, Xinwei; Li, Jinsong

    2013-01-01

    A wide range of knockout and transgenic murine models for the study of nonimmune and immune mechanisms in lung transplants are available nowadays, but the microsurgical techniques are difficult to learn. We describe methods to simplify the techniques and facilitate learning. Traditional procedures were implemented to perform lung transplants in 30 cases (group 1). Improved techniques, which included a cuff without tail, broadening of the cuff diameter for the bronchus, establishment of one tunnel between three structures, an innovative technology for the vascular anastomosis, and placement of a chest tube post-operation, were used to perform lung transplants in 30 cases (group 2). The improved techniques considerably shortened operative times (96.75 ± 6.16 min and 85.32 ± 6.98 min in groups 1 and 2, respectively). The survival rates in the recipient animals were 86.7% and 96.7% in groups 1 and 2, respectively. Chest X-rays and macroscopic changes of transplanted recipients showed that grafts were well inflated on postoperative day 30. There was no significant difference in arterial oxygen tension (PaO2) between the two groups (115.9 ± 7.11 mm Hg and 116.3 ± 6.87 mm Hg in groups 1 and 2, respectively). Histologically, no lung injury was seen in the grafts.
We described modified procedures for orthotopic left lung transplants in mice, which could shorten operative time and increase the survival rate.

  155. Aggregation in Network Models for Transportation Planning

    DOT National Transportation Integrated Search

    1978-02-01

    This report documents research performed on techniques of aggregation applied to network models used in transportation planning. The central objective of this research has been to identify, extend, and evaluate methods of aggregation so as to improve...

  156. Proposed Reliability/Cost Model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1982-01-01

    A new technique estimates the cost of improvement in reliability for a complex system. The model format/approach depends upon the use of subsystem cost-estimating relationships (CERs) in devising a cost-effective policy. The proposed methodology should have application in a broad range of engineering management decisions.

  157. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration is considered, with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions. The improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate the benefits and limitations of the second-order modeling.
    The results are validated against full-wave numerical simulations of the relevant scattering problem carried out in a commercial tool.

158. Numerical modeling and model updating for smart laminated structures with viscoelastic damping

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan

    2018-07-01

    This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with the finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented using the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of the damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.

159. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, a maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification).
    In addition, a discussion is presented of the software tools used and the need for future study in this field.

160. Improved pattern scaling approaches for the use in climate impact studies

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Sanderson, Benjamin M.; Knutti, Reto

    2015-05-01

    Pattern scaling is a simple way to produce climate projections beyond the scenarios run with expensive global climate models (GCMs). The simplest technique has known limitations and assumes that a spatial climate anomaly pattern obtained from a GCM can be scaled by the global mean temperature (GMT) anomaly. We propose alternatives and assess their skills and limitations. One approach which avoids scaling is to consider a period in a different scenario with the same GMT change. It is attractive as it provides patterns of any temporal resolution that are consistent across variables, and it does not distort variability. Second, we extend the traditional approach with a land-sea contrast term, which provides the largest improvements over the traditional technique. When interpolating between known bounding scenarios, the proposed methods significantly improve the accuracy of the pattern scaled scenario with little computational cost. The remaining errors are much smaller than the Coupled Model Intercomparison Project Phase 5 model spread.
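    The traditional technique that the record above extends can be stated in one line: the local anomaly is approximated as a fixed spatial pattern times the global mean temperature anomaly, T(x,t) ≈ p(x)·GMT(t), and the proposed extension adds a second, land-sea contrast predictor. A minimal least-squares sketch, assuming gridded anomaly maps from a GCM; the exact regression form used in the paper may differ:

        import numpy as np

        def fit_pattern(anomaly_maps, gmt):
            """Per-grid-cell scaling pattern p(x) for T(x,t) ~ p(x) * gmt(t).

            anomaly_maps : (t, ny, nx) local anomalies; gmt : (t,) global-mean anomaly
            """
            g = gmt[:, None, None]
            return (g * anomaly_maps).sum(axis=0) / (gmt ** 2).sum()

        def fit_pattern_landsea(anomaly_maps, gmt, land_sea_contrast):
            """Two-predictor extension: T(x,t) ~ p(x)*gmt(t) + q(x)*lsc(t)."""
            A = np.stack([gmt, land_sea_contrast], axis=1)           # (t, 2) predictors
            b = anomaly_maps.reshape(len(gmt), -1)                   # flatten the maps
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)
            p, q = coef.reshape(2, *anomaly_maps.shape[1:])
            return p, q

        # A projection for a new scenario is then T_hat = p * gmt_new (+ q * lsc_new).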
161. Visualization of 3D CT-based anatomical models

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

    Biomedical volumetric data visualization techniques for exploration purposes are well developed. Most of the known methods are inappropriate for surgery simulation systems due to a lack of realism. Segmented-data visualization is a well-known approach for visualizing structured volumetric data. This research focuses on improving the segmented-data visualization technique by resolving aliasing problems and using material transparency modeling to render semitransparent structures more faithfully.

162. The Study of an Integrated Rating System for Supplier Quality Performance in the Semiconductor Industry

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Cheng; Yen, Tieh-Min; Tsai, Chih-Hung

    This study provides an integrated model of Supplier Quality Performance Assessment (SQPA) activity for the semiconductor industry by introducing the ISO 9001 management framework, Importance-Performance Analysis (IPA), and Taguchi's Signal-to-Noise Ratio (S/N) techniques. This integrated model provides an SQPA methodology to create value for all members under mutual cooperation and trust in the supply chain. The method helps organizations build a complete SQPA framework, linking organizational objectives and SQPA activities and optimizing rating techniques to promote supplier quality improvement. The techniques used in SQPA activities are easily understood. A case involving a design house illustrates the model.
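    Of the three ingredients in the integrated SQPA model above, Taguchi's signal-to-noise ratio is the one that reduces directly to a formula: for a larger-the-better quality characteristic, S/N = -10·log10(mean(1/y²)), and for a smaller-the-better one, S/N = -10·log10(mean(y²)). A short sketch; the supplier scores are invented for illustration:

        import numpy as np

        def sn_larger_is_better(y):
            """Taguchi S/N ratio [dB] when higher values mean better quality."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        def sn_smaller_is_better(y):
            """Taguchi S/N ratio [dB] when lower values (e.g., defect rates) are better."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y**2))

        # Example: on-time-delivery scores of one supplier over six rating periods
        print(round(sn_larger_is_better([0.97, 0.95, 0.99, 0.96, 0.98, 0.97]), 2))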
163. Development of Control Teaching Material for Mechatronics Education Based on Experience

    NASA Astrophysics Data System (ADS)

    Tasaki, Takao; Watanabe, Shinichi; Shikanai, Yoshihito; Ozaki, Koichi

    In this paper, we develop teaching material to help technical high school students understand control techniques. The material teaches control through the sensations students experience while riding a robot. The material was designed to correspond with the ARCS model; it aims to raise interest in and willingness to learn mechatronics and control by letting students experience how the robot's response changes as the control parameters change. Questionnaire results from the technical high school students in the class verified the educational effect of the teaching material, which improved both willingness to learn and interest in mechatronics and control techniques.

164. Position and speed control of brushless DC motors using sensorless techniques and application trends

    PubMed

    Gamazo-Real, José Carlos; Vázquez-Sánchez, Ernesto; Gómez-Gil, Jaime

    2010-01-01

    This paper provides a technical review of position and speed sensorless methods for controlling Brushless Direct Current (BLDC) motor drives, including the background analysis using sensors, limitations and advances. The performance and reliability of BLDC motor drivers have been improved as conventional control and sensing techniques have been refined through sensorless technology. Sensorless advances are reviewed and recent developments in this area are introduced with their inherent advantages and drawbacks, including an analysis of practical implementation issues and applications. The study includes an in-depth overview of state-of-the-art back-EMF sensing methods, including Terminal Voltage Sensing, Third Harmonic Voltage Integration, Terminal Current Sensing, Back-EMF Integration and PWM strategies. The most relevant techniques based on estimation and models are also briefly analysed, such as the Sliding-mode Observer, Extended Kalman Filter, Model Reference Adaptive System, Adaptive observers (Full-order and Pseudoreduced-order) and Artificial Neural Networks.

165. Improving Self Worth by Learning to Cope with Distress: A Teaching Model Produced for Mid-Adolescents in the Ann Arbor Public Schools

    ERIC Educational Resources Information Center

    Love, James G.

    The goal of this teaching model, which is designed to occupy approximately 8 class periods of 50 minutes each, is to improve the health and well-being of high school students through instruction in recognizing personal distress and utilizing effective coping techniques.
    Each of the six lessons (Introduction, Recognizing Our Stress Symptoms, How…

166. Approaching the Challenge of Student Retention through the Lens of Quality Control: A Conceptual Model of University Business Student Retention Utilizing Six Sigma

    ERIC Educational Resources Information Center

    Jenicke, Lawrence O.; Holmes, Monica C.; Pisani, Michael J.

    2013-01-01

    Student retention in higher education is a major issue as academic institutions compete for fewer students and face declining enrollments. A conceptual model of applying the quality improvement methodology of Six Sigma to the problem of undergraduate student retention in a college of business is presented. Improvement techniques such as cause and…

167. Multi-sensor Improved Sea-Surface Temperature (MISST) for IOOS - Navy Component

    DTIC Science & Technology

    2013-09-30

    application and data fusion techniques. 2. Parameterization of IR and MW retrieval differences, with consideration of diurnal warming and cool-skin effects… associated retrieval confidence, standard deviation (STD), and diurnal warming estimates to the application user community in the new GDS 2.0 GHRSST… including coral reefs, ocean modeling in the Gulf of Mexico, improved lake temperatures, numerical data assimilation by ocean models…

168. Procedures and Compliance of a Video Modeling Applied Behavior Analysis Intervention for Brazilian Parents of Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bagaiolo, Leila F.; Mari, Jair de J.; Bordini, Daniela; Ribeiro, Tatiane C.; Martone, Maria Carolina C.; Caetano, Sheila C.; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S.

    2017-01-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve social skills for parents of children with autism spectrum disorder.
    The main objectives were: (1) To elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum…

169. Improving Oral Reading Fluency in Elementary School Children: Comparing the Effectiveness of Repeated Readings and Video Self-Modeling

    ERIC Educational Resources Information Center

    Wu, Shengtian; Gadke, Daniel L.

    2017-01-01

    Video self-modeling (VSM) is a relatively new technique used to improve reading fluency. At this point, VSM has primarily been used to supplement evidence-based reading interventions such as repeated readings (RR). There is limited to no research evaluating the independent effects of VSM in comparison to interventions such as RR. The goal of the…

170. Potential for Metabolomics-Based Markers of Exposure: Encouraging Evidence from Studies using Model Organisms

    EPA Science Inventory

    Genomic techniques (transcriptomics, proteomics, and metabolomics) have the potential to significantly improve the way chemical risk is managed in the 21st century. Indeed, a significant amount of research has been devoted to the use of these techniques to screen chemicals for h…

171. Characteristics of Forests in Western Sayani Mountains, Siberia from SAR Data

    NASA Technical Reports Server (NTRS)

    Ranson, K. Jon; Sun, Guoqing; Kharuk, V. I.; Kovacs, Katalin

    1998-01-01

    This paper investigated the possibility of using spaceborne radar data to map forest types and logging in the mountainous Western Sayani area in Siberia. L and C band HH, HV, and VV polarized images from the Shuttle Imaging Radar-C instrument were used in the study. Techniques to reduce topographic effects in the radar images were investigated.
    These included radiometric correction using illumination angle inferred from a digital elevation model, and reducing apparent effects of topography through band ratios. Forest classification was performed after terrain correction utilizing typical supervised techniques and principal component analyses. An ancillary data set of local elevations was also used to improve the forest classification. Map accuracy for each technique was estimated for training sites based on Russian forestry maps, satellite imagery and field measurements. The results indicate that it is necessary to correct for topography when attempting to classify forests in mountainous terrain. Radiometric correction based on a DEM (Digital Elevation Model) improved classification results but required reducing the SAR (Synthetic Aperture Radar) resolution to match the DEM. Using ratios of SAR channels that include cross-polarization improved classification and…

172. Maternal, Infant Characteristics, Breastfeeding Techniques, and Initiation: Structural Equation Modeling Approaches

    PubMed Central

    Htun, Tha Pyai; Lim, Peng Im; Ho-Lim, Sarah

    2015-01-01

    Objectives: The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods: We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool was used to evaluate breastfeeding techniques, and two infant feeding categories were used (exclusive and non-exclusive breastfeeding). Results: Structural equation models (SEM) showed that multiparity was significantly positively associated with breastfeeding techniques, and infant jaundice was significantly negatively related to exclusive breastfeeding initiation. A multigroup analysis in the SEM showed no difference between the caesarean section and vaginal delivery groups in the estimates of breastfeeding techniques on exclusive breastfeeding initiation. Breastfeeding techniques were significantly positively associated with exclusive breastfeeding initiation in the entire sample and in the vaginal deliveries group. However, breastfeeding techniques were not significantly associated with exclusive breastfeeding initiation in the cesarean section group. Maternal age, maternal race, gestations, birth weight of infant, and postnatal complications had no significant impacts on breastfeeding techniques or exclusive breastfeeding initiation in our study. Overall, the models fitted the data satisfactorily (GFI = 0.979-0.987; AGFI = 0.951-0.962; IFI = 0.958-0.962; CFI = 0.955-0.960, and RMSEA = 0.029-0.034). Conclusions: Multiparity and infant jaundice were found to affect breastfeeding technique and exclusive breastfeeding initiation, respectively. Breastfeeding technique was related to exclusive breastfeeding initiation according to the mode of birth.
    This relationship implies the importance of early effective interventions among first-time mothers with jaundiced infants in improving breastfeeding techniques and promoting exclusive breastfeeding initiation. PMID:26566028

173. Improved analyses using function datasets and statistical modeling

    Treesearch

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework…

174. Utilizing uncoded consultation notes from electronic medical records for predictive modeling of colorectal cancer

    PubMed

    Hoogendoorn, Mark; Szolovits, Peter; Moons, Leon M G; Numans, Mattijs E

    2016-05-01

    Machine learning techniques can be used to extract predictive models for diseases from electronic medical records (EMRs). However, the nature of EMRs makes it difficult to apply off-the-shelf machine learning techniques while still exploiting the rich content of the EMRs. In this paper, we explore the use of a range of natural language processing (NLP) techniques to extract valuable predictors from uncoded consultation notes and study whether they can help to improve predictive performance. We study a number of existing techniques for the extraction of predictors from the consultation notes, namely a bag-of-words based approach and topic modeling. In addition, we develop a dedicated technique to match the uncoded consultation notes with a medical ontology. We apply these techniques as an extension to an existing pipeline to extract predictors from EMRs. We evaluate them in the context of predictive modeling for colorectal cancer (CRC), a disease known to be difficult to diagnose before performing an endoscopy. Our results show that we are able to extract useful information from the consultation notes. The predictive performance of the ontology-based extraction method moves significantly beyond the benchmark of age and gender alone (area under the receiver operating characteristic curve (AUC) of 0.870 versus 0.831). We also observe more accurate predictive models when adding features derived from processing the consultation notes compared to solely using coded data (AUC of 0.896 versus 0.882), although the difference is not significant. The extracted features from the notes are shown to be equally predictive (i.e., there is no significant difference in performance) compared to the coded data of the consultations. It is possible to extract useful predictors from uncoded consultation notes that improve predictive performance. Techniques linking text to concepts in medical ontologies to derive these predictors are shown to perform best for predicting CRC in our EMR dataset.
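    As a rough illustration of the bag-of-words branch of the pipeline described above (not the authors' ontology-matching method), scikit-learn can turn free-text notes into n-gram count features and score a classifier by AUC. The notes and labels below are toy stand-ins, not EMR data:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Toy stand-ins for uncoded consultation notes and CRC outcome labels
        notes = ["rectal bleeding and weight loss", "routine checkup no complaints",
                 "abdominal pain, altered bowel habit", "mild seasonal headache"] * 25
        labels = [1, 0, 1, 0] * 25

        vec = CountVectorizer(ngram_range=(1, 2), min_df=2)   # unigram/bigram counts
        X = vec.fit_transform(notes)
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))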
175. Assessment of traffic noise levels in urban areas using different soft computing techniques

    PubMed

    Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D

    2016-10-01

    Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques to traffic noise prediction. Two mathematical models are proposed, and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to the predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.

176. Flight test experience with high-alpha control system techniques on the F-14 airplane

    NASA Technical Reports Server (NTRS)

    Gera, J.; Wilson, R. J.; Enevoldson, E. K.; Nguyen, L. T.

    1981-01-01

    Improved handling qualities of fighter aircraft at high angles of attack can be provided by various stability and control augmentation techniques. NASA and the U.S. Navy are conducting a joint flight demonstration of these techniques on an F-14 airplane. This paper reports on the flight test experience with a newly designed lateral-directional control system which suppresses such high angle of attack handling qualities problems as roll reversal, wing rock, and directional divergence while simultaneously improving departure/spin resistance. The technique of integrating a piloted simulation into the flight program was used extensively in this program.
    This technique had not been applied previously to high angle of attack testing, and it required the development of a valid model to simulate the test airplane at extremely high angles of attack.

177. Data Synthesis and Data Assimilation at Global Change Experiments and Fluxnet Toward Improving Land Process Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yiqi

    The project was conducted during the period from 7/1/2012 to 6/30/2017 with three major tasks: (1) data synthesis and development of data assimilation (DA) techniques to constrain modeled ecosystem feedback to climate change; (2) application of DA techniques to improve process models at different scales from ecosystems to regions and the globe; and (3) improvement of the modeling of soil carbon (C) dynamics in land surface models. During this period, we synthesized published data from soil incubation experiments (e.g., Chen et al., 2016; Xu et al., 2016; Feng et al., 2016), global change experiments (e.g., Li et al., 2013; Shi et al., 2015, 2016; Liang et al., 2016) and fluxnet (e.g., Niu et al., 2012; Xia et al., 2015; Li et al., 2016). These data have been organized into multiple data products and have been used to identify general mechanisms and estimate parameters for model improvement. We used the data sets that we collected and the DA techniques to improve the performance of both ecosystem models and global land models. The objectives were: (1) to improve model simulations of litter and soil carbon storage (e.g., Schädel et al., 2013; Hararuk and Luo, 2014; Hararuk et al., 2014; Liang et al., 2015); (2) to explore the effects of CO2, warming and precipitation on ecosystem processes (e.g., van Groenigen et al., 2014; Shi et al., 2015, 2016; Feng et al., 2017); and (3) to estimate parameter variability in different ecosystems (e.g., Li et al., 2016). We developed a traceability framework, based on matrix approaches, which decomposes the modeled steady-state terrestrial ecosystem carbon storage capacity into four traceable components: net primary productivity (NPP), baseline C residence times, environmental scalars, and climate forcing (Xia et al., 2013). With this framework, differences in ecosystem carbon storage capacity among biomes can be traced to these components, and differences in modeled carbon storage can be diagnosed across ecosystems, biomes, and models. The framework has been successfully implemented in several global land models, such as CABLE (Xia et al., 2013), LPJ-GUESS (Ahlström et al., 2015), CLM (Hararuk et al., 2014; Huang et al., 2017, submitted; Shi et al., 2017, submitted), and ORCHIDEE (Huang et al., 2017, unpublished). Moreover, we identified the theoretical foundation of the determinants of transient C storage dynamics by adding another term, the C storage potential, to the steady-state traceability framework (Luo et al., 2017).
    This theoretical foundation has been applied to develop a transient traceability framework that explores the traceable components of transient C storage dynamics in response to rising CO2 and climate change in two contrasting ecosystem types, the needleleaved Duke forest and the deciduous broadleaved Harvard forest (Jiang et al., 2017, in revision). Overall, with the data synthesis, data assimilation techniques, and the steady-state and transient traceability frameworks, we have greatly improved land process models for predicting responses and feedbacks of terrestrial C dynamics to global change. The matrix approach also has the potential to be applied in theoretical research on the nitrogen and phosphorus cycles and, therefore, on coupled carbon-nitrogen-phosphorus dynamics.

178. PSH Transient Simulation Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard

    PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.

179. Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study

    NASA Astrophysics Data System (ADS)

    Yang, Huachen; Zhang, Jianzhong

    2018-06-01

    In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, FWI of towed-streamer data easily converges to a local minimum solution due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets and limited OBS data. Both the integrated towed-streamer seismic data and the OBS data have low-frequency components. Therefore, at early iterations of the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model. The towed-streamer seismic data then play the major role in later iterations, improving the resolution of the model. The new FWI technique is tested on numerical examples.
    The results show that when the starting models are not sufficiently accurate, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.

180. Model based Computerized Ionospheric Tomography in space and time

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

    Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications, including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite - receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions in both space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions at a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.
181. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

182. Advantages of multigrid methods for certifying the accuracy of PDE modeling

    NASA Technical Reports Server (NTRS)

    Forester, C. K.

    1981-01-01

    Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDE) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three dimensional steady state full potential and the two dimensional steady Navier-Stokes equations using fixed grid methods (FG). The advantages of the Full Approximation Storage (FAS) scheme of the multigrid technique of A. Brandt compared with the conventional certification process of modeling PDE are illustrated in one dimension with the transformed potential equation. Inferences are drawn for how MG will improve the certification process of the numerical modeling of two and three dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.
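    The efficiency argument for multigrid rests on pairing a cheap smoother on the fine grid with a coarse-grid correction. The sketch below implements a basic two-grid correction scheme for the 1D Poisson problem -u'' = f, which is the linear special case; Brandt's FAS scheme generalizes the same cycle to nonlinear PDEs. The printed error falls rapidly toward the discretization error:

        import numpy as np

        def jacobi(u, f, h, sweeps, w=2/3):
            """Weighted-Jacobi smoothing for -u'' = f with fixed Dirichlet ends."""
            for _ in range(sweeps):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def restrict(r):
            """Full-weighting restriction from N intervals to N/2."""
            N = len(r) - 1
            rc = np.zeros(N // 2 + 1)
            rc[1:-1] = 0.25 * r[1:N-1:2] + 0.5 * r[2:N:2] + 0.25 * r[3:N+1:2]
            return rc

        def prolong(e):
            """Linear interpolation from a coarse grid to the next finer grid."""
            ef = np.zeros(2 * (len(e) - 1) + 1)
            ef[::2] = e
            ef[1::2] = 0.5 * (e[:-1] + e[1:])
            return ef

        def two_grid(u, f, h):
            u = jacobi(u, f, h, sweeps=3)              # pre-smooth high frequencies
            rc = restrict(residual(u, f, h))           # move residual to coarse grid
            nc, hc = len(rc) - 1, 2 * h
            A = (2 * np.eye(nc - 1) - np.eye(nc - 1, k=1) - np.eye(nc - 1, k=-1)) / hc**2
            ec = np.zeros(nc + 1)
            ec[1:-1] = np.linalg.solve(A, rc[1:-1])    # exact coarse-grid correction
            return jacobi(u + prolong(ec), f, h, sweeps=3)

        # -u'' = pi^2 sin(pi x) on [0, 1]; exact solution u = sin(pi x)
        N = 64
        x = np.linspace(0.0, 1.0, N + 1)
        f = np.pi**2 * np.sin(np.pi * x)
        u = np.zeros(N + 1)
        for cycle in range(5):
            u = two_grid(u, f, 1.0 / N)
            print(cycle, np.max(np.abs(u - np.sin(np.pi * x))))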
183. Systems modeling and simulation applications for critical care medicine

    PubMed Central

    2012-01-01

    Critical care delivery is a complex, expensive, error prone medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques: (a) a pathophysiological model of acute lung injury, (b) process modeling of critical care delivery, and (c) an agent-based model to study the interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718

184. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    Surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to the generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted for comparison with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty; and (2) the ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.

185. Performance Analysis of Receive Diversity in Wireless Sensor Networks over GBSBE Models

    PubMed Central

    Goel, Shivali; Abawajy, Jemal H.; Kim, Tai-hoon

    2010-01-01

    Wireless sensor networks have attracted a lot of attention recently. In this paper, we develop a channel model based on the elliptical model for multipath components involving randomly placed scatterers in the scattering region, with sensors deployed on a field. We verify that in a sensor network, the use of receive diversity techniques improves the performance of the system. Extensive performance analysis of the system is carried out for both single and multiple antennas with the applied receive diversity techniques. Performance analyses based on variations in receiver height, maximum multipath delay and transmit power have been performed, considering different numbers of antenna elements present in the receiver array. Our results show that increasing the number of antenna elements in a wireless sensor network does indeed improve the achievable BER. PMID:22163510
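    A concrete way to see the receive-diversity gain reported above is a Monte-Carlo bit-error-rate experiment: with maximal-ratio combining, each branch is weighted by its conjugate channel gain before the bit decision. A minimal BPSK/Rayleigh sketch; the GBSBE channel geometry of the paper is not modeled here:

        import numpy as np

        rng = np.random.default_rng(1)

        def mrc_ber(n_ant, snr_db, n_bits=200_000):
            """Monte-Carlo BER of BPSK with maximal-ratio combining over Rayleigh fading."""
            snr = 10 ** (snr_db / 10)
            bits = rng.integers(0, 2, n_bits)
            s = 2.0 * bits - 1.0                                  # BPSK symbols +/-1
            h = (rng.standard_normal((n_ant, n_bits)) +
                 1j * rng.standard_normal((n_ant, n_bits))) / np.sqrt(2)
            noise = (rng.standard_normal((n_ant, n_bits)) +
                     1j * rng.standard_normal((n_ant, n_bits))) / np.sqrt(2 * snr)
            r = h * s + noise                                     # per-branch received signal
            y = np.sum(np.conj(h) * r, axis=0).real               # MRC: conjugate-gain weighting
            return np.mean((y > 0).astype(int) != bits)

        for n in (1, 2, 4):
            print(n, "antennas -> BER", mrc_ber(n, snr_db=10))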
186. Analog ensemble and Bayesian regression techniques to improve the wind speed prediction during extreme storms in the NE U.S.

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.

    2016-12-01

    Accuracy of weather forecasts in the Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of advanced forecasting tools and techniques strengthened by increased supercomputing resources, weather forecasting systems still have limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted the NE U.S., mostly defined by the occurrence of high wind speeds (i.e., blizzards, winter storms, hurricanes and thunderstorms). The wind speed, wind direction and temperature predicted by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the aforementioned techniques, exploring the various ways those variables influence the minimization of the systematic and random prediction error. This study is focused on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. In order to evaluate the techniques, a leave-one-out cross-validation procedure was implemented, with the remaining 145 storms serving as the training dataset. The analog ensemble method selects a set of past observations that corresponded to the best analogs of the numerical weather prediction and provides a set of ensemble members from the selected observation dataset. The set of ensemble members can then be used in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. The preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for the 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). In this presentation, we discuss the various combinations of atmospheric predictors and techniques and illustrate how the long record of predicted storms is valuable in the improvement of wind speed prediction.
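    The core of the analog ensemble step described above is a nearest-neighbor search in forecast space: the k past forecasts most similar to the current one are found, and their verifying observations become the ensemble members. A minimal sketch; operational AnEn metrics also normalize each predictor by its standard deviation and search within a window around the forecast lead time, which is omitted here:

        import numpy as np

        def analog_ensemble(past_fcst, past_obs, new_fcst, k=20, weights=None):
            """Return the k observations whose archived forecasts best match new_fcst.

            past_fcst : (n, p) archived model forecasts (e.g., wind speed, direction, T)
            past_obs  : (n,)   verifying observations, as a NumPy array
            new_fcst  : (p,)   current forecast to be corrected
            """
            w = np.ones(past_fcst.shape[1]) if weights is None else np.asarray(weights)
            d = np.sqrt(((w * (past_fcst - new_fcst)) ** 2).sum(axis=1))  # analog distance
            idx = np.argsort(d)[:k]
            members = past_obs[idx]             # ensemble of past observations
            return members, members.mean()      # probabilistic members + deterministic mean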
187. Robotic and endoscopic transoral thyroidectomy: feasibility and description of the technique in the cadaveric model

    PubMed Central

    Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad

    2017-01-01

    Background: Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. The transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Methods: Transoral thyroidectomy through the vestibular approach was performed at two institutions on cadaveric models. The procedure was performed endoscopically at one institution, while the robotic technique was utilized at the other. Results: Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Conclusions: Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility. PMID:29302476

188. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools for non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are often interpreted by means of multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full-spectrum models, as evaluated in terms of prediction error (root mean square error of cross-validation). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to versions optimized by genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
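    A minimal version of the GA-plus-PLS wavelength selection used above: individuals are binary masks over spectral variables, fitness is cross-validated PLS error on the selected subset, and one-point crossover with bit-flip mutation evolves the population. The population size, rates and component counts below are illustrative, not the paper's settings; X is assumed to be an (n_samples, n_wavelengths) spectral matrix and y the reference quality values:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def fitness(mask, X, y):
            """Negative cross-validated RMSE of PLS on the selected wavelengths."""
            if mask.sum() < 4:
                return -1e9                       # reject near-empty masks
            pls = PLSRegression(n_components=3)
            return cross_val_score(pls, X[:, mask.astype(bool)], y,
                                   scoring="neg_root_mean_squared_error", cv=5).mean()

        def ga_select(X, y, pop=30, gens=40, p_mut=0.02):
            n = X.shape[1]
            P = rng.integers(0, 2, (pop, n))      # random initial population of masks
            for _ in range(gens):
                f = np.array([fitness(m, X, y) for m in P])
                P = P[np.argsort(f)[::-1]]        # sort by fitness, best first
                kids = []
                while len(kids) < pop // 2:
                    a, b = P[rng.integers(0, pop // 2, 2)]     # mate within fitter half
                    cut = rng.integers(1, n)
                    child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
                    flip = rng.random(n) < p_mut               # bit-flip mutation
                    child[flip] ^= 1
                    kids.append(child)
                P = np.vstack([P[: pop - len(kids)], kids])    # elitism + offspring
            f = np.array([fitness(m, X, y) for m in P])
            return P[np.argmax(f)].astype(bool)                # best wavelength subset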
189. The report of the Gravity Field Workshop

    NASA Astrophysics Data System (ADS)

    Smith, D. E.

    1982-04-01

    A Gravity Field Workshop was convened to review the actions which could be taken prior to a GRAVSAT mission to improve the Earth's gravity field model. This review focused on the potential improvements in the Earth's gravity field which could be obtained using the current satellite and surface gravity data base. In particular, actions to improve the quality of the gravity field determination through refined measurement corrections, selected data augmentation and a more accurate reprocessing of the data were considered. In addition, recommendations were formulated which define actions that NASA should take to develop the necessary theoretical and computational techniques for gravity model determination and to use these approaches to improve the accuracy of the Earth's gravity model.

190. Couples Counseling Directive Technique: A (Mis)communication Model to Promote Insight, Catharsis, Disclosure, and Problem Resolution

    ERIC Educational Resources Information Center

    Mahaffey, Barbara A.

    2010-01-01

    A psychoeducational model for improving couple communication is proposed. An important goal in couples counseling is to assist couples in resolving communication conflicts. The proposed communication model helps to establish a therapeutic environment that encourages insight, therapeutic alliance formation, catharsis, self-disclosure, symptom…

191. A Problem Solving Model for Use in Science Student Teacher Supervision

    ERIC Educational Resources Information Center

    Cavallo, Ann M. L.; Tice, Craig J.

    1993-01-01

    Describes and suggests the use of a problem-solving model that improves communication between student teachers and supervisors through the student teaching practicum. The aim of the model is to promote experimentation with various teaching techniques and to stimulate thinking among student teachers about their teaching experiences. (PR)
192. Improving the representation of clouds, radiation, and precipitation using spectral nudging in the Weather Research and Forecasting model

    NASA Astrophysics Data System (ADS)

    Spero, Tanya L.; Otte, Martin J.; Bowden, Jared H.; Nolte, Christopher G.

    2014-10-01

    Spectral nudging, a scale-selective interior constraint technique, is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonstrated that spectral nudging improves the representation of regional climate in reanalysis-forced simulations compared with not using nudging in the interior of the domain. However, in the Weather Research and Forecasting (WRF) model, spectral nudging tends to produce degraded precipitation simulations when compared to analysis nudging, an interior constraint technique that is scale indiscriminate but also operates on moisture fields, which until now could not be altered directly by spectral nudging. Since analysis nudging is less desirable for regional climate modeling because it dampens fine-scale variability, changes are proposed to the spectral nudging methodology that capitalize on differences between the nudging techniques and aim to improve the representation of clouds, radiation, and precipitation without compromising other fields. These changes include adding spectral nudging toward moisture, limiting nudging to below the tropopause, and increasing the nudging time scale for potential temperature, all of which collectively improve the representation of mean and extreme precipitation, 2 m temperature, clouds, and radiation, as demonstrated using a model-simulated 20 year historical period. Such improvements to WRF may increase the fidelity of regional climate data used to assess the potential impacts of climate change on human health and the environment and aid in climate change mitigation and adaptation studies.
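    The scale selectivity that distinguishes spectral nudging from analysis nudging is easiest to see in one dimension: transform to wavenumber space, relax only the lowest wavenumbers toward the driving analysis, and transform back. A minimal sketch; WRF applies this in two horizontal dimensions with vertical weighting, which is not reproduced here:

        import numpy as np

        def spectral_nudge(field, analysis, n_keep, tau, dt):
            """Relax only the largest scales of `field` toward `analysis`.

            n_keep : number of lowest wavenumbers to nudge (scale selectivity)
            tau    : nudging time scale [s]; dt : model time step [s]
            """
            fk = np.fft.rfft(field)
            ak = np.fft.rfft(analysis)
            diff = np.zeros_like(fk)
            diff[:n_keep] = fk[:n_keep] - ak[:n_keep]   # large scales only
            fk -= (dt / tau) * diff                     # relaxation toward the analysis
            return np.fft.irfft(fk, n=len(field))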
  193. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of EKF with parameter calibration is discussed, and case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
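    A common way to set up EKF-based parameter calibration is to append the uncertain parameter to the state vector and let the filter estimate both jointly from measured responses. The sketch below applies this to a deliberately simple scalar system; the dynamics, noise levels, and parameter are illustrative stand-ins, not the generator model of the paper:

      import numpy as np

      def ekf_calibrate(zs, dt=0.01, r=0.01, q=1e-8):
          # Augmented state s = [x, a]: the parameter `a` of the toy dynamics
          # x' = a*x is estimated jointly from noisy measurements of x.
          s = np.array([1.0, 0.0])                   # initial guess: a = 0
          P = np.diag([0.1, 1.0])
          H = np.array([[1.0, 0.0]])                 # we observe x only
          for z in zs:
              x, a = s
              s = np.array([x + dt * a * x, a])      # predict; a is constant
              F = np.array([[1.0 + dt * a, dt * x],  # Jacobian of the
                            [0.0, 1.0]])             # augmented dynamics
              P = F @ P @ F.T + q * np.eye(2)
              K = P @ H.T / (H @ P @ H.T + r)        # Kalman gain
              s = s + (K * (z - s[0])).ravel()       # update with innovation
              P = (np.eye(2) - K @ H) @ P
          return s[1]                                # calibrated parameter

      ts = np.arange(0.0, 10.0, 0.01)
      zs = np.exp(-0.5 * ts) + 0.1 * np.random.randn(ts.size)
      print(ekf_calibrate(zs))                       # should approach the true a = -0.5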
  194. Improved Pulse Wave Velocity Estimation Using an Arterial Tube-Load Model

    PubMed Central

    Gao, Mingwu; Zhang, Guanqun; Olivier, N. Bari; Mukkamala, Ramakrishna

    2015-01-01

    Pulse wave velocity (PWV) is the most important index of arterial stiffness. It is conventionally estimated by non-invasively measuring central and peripheral blood pressure (BP) and/or velocity (BV) waveforms and then detecting the foot-to-foot time delay between the waveforms, wherein wave reflection is presumed absent. We developed techniques for improved estimation of PWV from the same waveforms. The techniques effectively estimate PWV from the entire waveforms, rather than just their feet, by mathematically eliminating the reflected wave via an arterial tube-load model. In this way, the techniques may be more robust to artifact while revealing the true PWV in the absence of wave reflection. We applied the techniques to estimate aortic PWV from simultaneously and sequentially measured central and peripheral BP waveforms and simultaneously measured central BV and peripheral BP waveforms from 17 anesthetized animals during diverse interventions that perturbed BP widely. Since BP is the major acute determinant of aortic PWV, especially under anesthesia wherein vasomotor tone changes are minimal, we evaluated the techniques in terms of the ability of their PWV estimates to track the acute BP changes in each subject. Overall, the PWV estimates of the techniques tracked the BP changes better than those of the conventional technique (e.g., diastolic BP root-mean-squared errors of 3.4 vs. 5.2 mmHg for the simultaneous BP waveforms and 7.0 vs. 12.2 mmHg for the BV and BP waveforms (p < 0.02)). With further testing, the arterial tube-load model-based PWV estimation techniques may afford more accurate arterial stiffness monitoring in hypertensive and other patients. PMID:24263016

  195. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

    The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, January 29 - February 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8 to 12 hour improvement in the forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts; the improvement was 75% in the Midwest.

  196. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    PubMed Central

    Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2011-01-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157

  197. Analysis of Learning Curve Fitting Techniques

    DTIC Science & Technology

    1987-09-01

    Only a fragmentary abstract survives in this record: it consists of citation fragments (Neter et al., Applied Linear Regression Models, Homewood, IL: Irwin; SAS User's Guide: Basics, Version 5 Edition) and notes that random errors are assumed to be normally distributed when improvement (learning) curves are fitted by ordinary least squares.
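    For reference, the standard log-linear learning-curve fit that underlies such comparisons reduces to ordinary least squares: a unit-cost model y = a * x**b becomes log y = log a + b * log x. A minimal sketch with synthetic data (an assumed 80% curve, illustrative values only):

      import numpy as np

      # Unit cost y = a * x**b; taking logs gives a straight line, so OLS
      # applies (with the usual assumption of normal errors in log space).
      units = np.arange(1, 21)                             # cumulative unit number
      cost = 100.0 * units ** -0.322                       # synthetic 80% curve
      cost *= np.exp(0.02 * np.random.randn(units.size))   # multiplicative noise

      b, log_a = np.polyfit(np.log(units), np.log(cost), 1)
      print("a =", np.exp(log_a), "b =", b, "learning rate =", 2.0 ** b)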
  198. Edge printability: techniques used to evaluate and improve extreme wafer edge printability

    NASA Astrophysics Data System (ADS)

    Roberts, Bill; Demmert, Cort; Jekauc, Igor; Tiffany, Jason P.

    2004-05-01

    The economics of semiconductor manufacturing have forced process engineers to develop techniques to increase wafer yield. Improvements in process controls and uniformities in all areas of the fab have reduced film thickness variations at the very edge of the wafer surface. This improved uniformity has provided the opportunity to consider decreasing edge exclusions, and now the outermost extents of the wafer must be considered in the yield model and expectations. These changes have increased the requirements on lithography to improve wafer edge printability in areas that previously were not even coated. This has taxed all software and hardware components used in defining the optical focal plane at the wafer edge. We have explored techniques to determine the capabilities of extreme wafer edge printability and the components of the systems that influence this printability. We present current capabilities and new detection techniques, and the influence that the individual hardware and software components have on edge printability. We show the effects of focus sensor designs, wafer layout, utilization of dummy edge fields, the use of non-zero overlay targets, and chemical/optical edge bead optimization.

  199. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher-order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
  200. Examining Relational Engagement across the Transition to High Schools in Three US High Schools Reformed to Improve Relationship Quality

    ERIC Educational Resources Information Center

    Davis, Heather A.; Chang, Mei-Lin; Andrzejewski, Carey E.; Poirier, Ryan R.

    2014-01-01

    The purpose of this study was to examine changes in students' relational engagement across the transition to high school in three schools reformed to improve the quality of student-teacher relationships. In order to analyze this data we employed latent growth curve (LGC) modeling techniques (n = 637). We ran three LGC models on three…
  201. Updates on measurements and modeling techniques for expendable countermeasures

    NASA Astrophysics Data System (ADS)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual-band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual-band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted with those commonly used in IR band measurements. For example, the visual-band measurements require higher-fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions, to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher-altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.
  202. Electromagnetic modelling, inversion and data-processing techniques for GPR: ongoing activities in Working Group 3 of COST Action TU1208

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan

    2015-04-01

    This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modelling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral-domain Cylindrical-Wave Approach (CWA). One well-known and versatile freeware FDTD simulator is GprMax, which enables an improved, realistic representation of the soil/material hosting the sought structures and of the GPR antennas; here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction of multiple scattered fields within the medium hosting the sought targets is taken into account. Recently, the method has been extended to deal with through-the-wall scenarios. One of the inversion techniques currently being improved is Full-Waveform Inversion (FWI) for on-ground, off-ground, and crosshole GPR configurations. In contrast to conventional inversion tools, which are often based on approximations and use only part of the available data, FWI uses the complete measured data and detailed modelling tools to obtain an improved estimation of medium properties. During the first year of the Action, information was collected and shared about the state of the art of the available modelling, imaging, inversion, and data-processing methods. Advancements achieved by WG3 Members were presented during the TU1208 Second General Meeting (April 30 - May 2, 2014, Vienna, Austria) and the 15th International Conference on Ground Penetrating Radar (June 30 - July 4, 2014, Brussels, Belgium). Currently, a database of numerical and experimental GPR responses from natural and manmade structures is being designed. A geometrical and physical description of the scenarios, together with the available synthetic and experimental data, will be at the disposal of the scientific community. Researchers will thus have a further opportunity to test and validate, against reliable data, their electromagnetic forward- and inverse-scattering techniques, imaging methods, and data-processing algorithms. The motivation to start this database arose during TU1208 meetings and takes inspiration from successful past initiatives in different areas, such as the Ipswich and Fresnel databases in the field of free-space electromagnetic scattering and the Marmousi database in seismic science. Acknowledgement: The authors thank COST for funding the Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.'
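    Of the modelling tools named above, the FDTD method rests on a leapfrog update of staggered electric and magnetic fields. A minimal one-dimensional free-space sketch of that update follows (a textbook Yee scheme, far simpler than GprMax itself, which adds material models, realistic sources, and absorbing boundaries):

      import numpy as np

      nz, nt = 400, 800
      ez = np.zeros(nz)              # electric field at integer grid points
      hy = np.zeros(nz - 1)          # magnetic field, staggered half a cell
      for n in range(nt):
          hy += np.diff(ez)                      # H update (normalized units)
          ez[1:-1] += np.diff(hy)                # E update; PEC walls at ends
          ez[nz // 4] += np.exp(-((n - 60.0) / 20.0) ** 2)   # soft Gaussian source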
  203. Electromagnetic Modelling of MMIC CPWs for High Frequency Applications

    NASA Astrophysics Data System (ADS)

    Sinulingga, E. P.; Kyabaggu, P. B. K.; Rezazadeh, A. A.

    2018-02-01

    Realising the theoretical electrical characteristics of components through modelling can be carried out using computer-aided design (CAD) simulation tools. If the simulation model provides the expected characteristics, the fabrication process of the Monolithic Microwave Integrated Circuit (MMIC) can be performed for experimental verification purposes, and improvements can be suggested before mass fabrication takes place. This research concentrates on the development of MMIC technology by providing accurate predictions of the characteristics of MMIC components using an improved electromagnetic (EM) modelling technique. The knowledge acquired from the modelling and characterisation process in this work can be adopted by circuit designers for various high-frequency applications.

  204. Kaizen: a process improvement model for the business of health care and perioperative nursing professionals

    PubMed

    Tetteh, Hassan A

    2012-01-01

    Kaizen is a proven management technique that has a practical application for health care in the context of health care reform and the 2010 Institute of Medicine landmark report on the future of nursing. Compounded productivity is the unique benefit of kaizen, and its principles are change, efficiency, performance of key essential steps, and the elimination of waste through small and continuous process improvements. The kaizen model offers specific instruction for perioperative nurses to achieve process improvement in a five-step framework that includes teamwork, personal discipline, improved morale, quality circles, and suggestions for improvement. Published by Elsevier Inc.

  205. Meeting the needs of an ever-demanding market

    PubMed

    Rigby, Richard

    2002-04-01

    Balancing cost and performance in packaging is critical. This article outlines techniques to assist in this whilst delivering added value and product differentiation. The techniques include a rigorous statistical process capable of delivering cost reduction and improved quality, and a computer modelling process that can save time when validating new packaging options.

  206. [Improvement of powder flowability and hygroscopicity of traditional Chinese medicine extract by surface coating modification technology]

    PubMed

    Zeng, Rong-Gui; Jiang, Qie-Ying; Liao, Zheng-Gen; Zhao, Guo-Wei; Luo, Yun; Luo, Juan; Lv, Dan

    2016-06-01

    To study the improvement of powder flowability and hygroscopicity of traditional Chinese medicine extract by surface coating modification technology, 1% hydrophobic silica nanoparticles were used as the surface modifier, with andrographis extract powder as a model drug. Three different techniques were used for coating the model drug, with angle of repose, compressibility, flat angle, and cohesion as comprehensive evaluation indexes of powder flowability. The powder particle size and size distribution were measured with a Mastersizer 2000. An FEI scanning electron microscope was used to observe the surface morphology and structure of the powder. The percentage of Si on the powder surface was measured by energy-dispersive spectrometry. The hygroscopicity of the powder was determined by the Chinese Pharmacopoeia method. All three techniques improved the flowability of the extract powder. In particular, the hygroscopicity of the extract powder was also improved by dispersion followed by high-speed mixing, which produced a higher percentage of Si on the powder surface. The improvement principle may be correlated with the modifier adhering to the powder surface. Copyright © by the Chinese Pharmaceutical Association.
  207. Modeling techniques for quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
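    The transfer matrix and finite difference methods mentioned above compute the quantized subband states from the one-dimensional Schrödinger equation. A minimal finite-difference sketch for a single GaAs-like quantum well (illustrative parameters, not a real QCL design):

      import numpy as np

      hbar = 1.054571e-34
      m = 0.067 * 9.109383e-31                       # GaAs-like effective mass
      q = 1.602177e-19
      dz = 1e-10                                     # 0.1 nm grid spacing
      z = np.arange(-15e-9, 15e-9, dz)
      V = np.where(np.abs(z) < 5e-9, 0.0, 0.3 * q)   # 10 nm well, 0.3 eV barriers

      t = hbar**2 / (2.0 * m * dz**2)                # kinetic hopping term
      H = np.diag(V + 2.0 * t) \
          - t * np.eye(z.size, k=1) - t * np.eye(z.size, k=-1)
      E, psi = np.linalg.eigh(H)                     # subband energies and states
      print(E[:3] / q)                               # lowest levels in eV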
  208. Artificial neural networks in Space Station optimal attitude control

    NASA Astrophysics Data System (ADS)

    Kumar, Renjith R.; Seywald, Hans; Deshpande, Samir M.; Rahman, Zia

    1992-08-01

    Innovative techniques of using artificial neural networks (ANNs) for improving the performance of the pitch-axis attitude control system of Space Station Freedom using control moment gyros (CMGs) are investigated. The first technique uses a feedforward ANN with multilayer perceptrons to obtain an on-line controller which improves the performance of the control system via a model-following approach. The second technique uses a single-layer feedforward ANN with a modified back-propagation scheme to estimate the internal plant variations and the external disturbances separately. These estimates are then used to solve two differential Riccati equations to obtain time-varying gains which improve the control system performance in successive orbits.
  209. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models

    PubMed

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models, and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences, and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains, and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive-use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
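    One widely used reparameterization of this kind is "non-centering" a random effect: instead of sampling u[j] ~ N(mu, sigma^2) directly, one samples u_raw[j] ~ N(0, 1) and sets u[j] = mu + sigma * u_raw[j], which decouples the random effects from their hyperparameters and typically improves mixing when the data are weakly informative. A minimal sketch of the transformation itself (shown as a prior simulation; any MCMC sampler can target either form):

      import numpy as np

      rng = np.random.default_rng(0)
      mu, sigma, J = 1.5, 0.3, 8                     # illustrative hyperparameters

      u_centered = rng.normal(mu, sigma, size=J)     # centered form
      u_raw = rng.normal(0.0, 1.0, size=J)           # non-centered form...
      u_noncentered = mu + sigma * u_raw             # ...mapped back

      # Both draws have the same N(mu, sigma^2) distribution, but a sampler
      # exploring (mu, sigma, u_raw) avoids the strong posterior coupling
      # between the random effects and sigma that slows mixing in the
      # centered form.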
  210. Outcomes of laryngohyoid suspension techniques in an ovine model of profound oropharyngeal dysphagia

    PubMed

    Johnson, Christopher M; Venkatesan, Naren N; Siddiqui, M Tausif; Cates, Daniel J; Kuhn, Maggie A; Postma, Gregory M; Belafsky, Peter C

    2017-12-01

    To evaluate the efficacy of various techniques of laryngohyoid suspension in the elimination of aspiration utilizing a cadaveric ovine model of profound oropharyngeal dysphagia. Animal study. The head and neck of a Dorper cross ewe was placed in the lateral fluoroscopic view. Five conditions were tested: baseline, thyroid cartilage to hyoid approximation (THA), thyroid cartilage to hyoid to mandible (laryngohyoid) suspension (LHS), LHS with cricopharyngeus muscle myotomy (LHS-CPM), and cricopharyngeus muscle myotomy (CPM) alone. Five 20-mL trials of barium sulfate were delivered into the oropharynx under fluoroscopy for each condition. Outcome measures included the penetration aspiration scale (PAS) and the National Institutes of Health (NIH) Swallow Safety Scale (NIH-SSS). Median baseline PAS and NIH-SSS scores were 8 and 6, respectively, indicating severe impairment. THA scores were not improved from baseline. LHS alone reduced the PAS to 1 (P = .025) and the NIH-SSS to 2 (P = .025) from baseline. LHS-CPM reduced the PAS to 1 (P = .025) and the NIH-SSS to 0 (P = .025) from baseline. CPM alone did not improve scores. LHS-CPM displayed improved NIH-SSS over LHS alone (P = .003). This cadaveric model represents end-stage profound oropharyngeal dysphagia such as could result from severe neurological insult. CPM alone failed to improve fluoroscopic outcomes in this model, and thyrohyoid approximation also failed to improve outcomes. LHS significantly improved both PAS and NIH-SSS, and the addition of CPM to LHS resulted in improvement over suspension alone. Laryngoscope, 127:E422-E427, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  211. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.
  212. Implementing a trustworthy cost-accounting model

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: implementing an enhanced data model; reconciling data efficiently; accommodating multiple cost-modeling techniques; improving the transparency of cost allocations; securing department manager participation; and providing essential education and training to staff members and stakeholders.

  213. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large-signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large- and small-signal modelling of transistors are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large-signal model is built from multi-bias s-parameter measurements, with parasitic elements extracted using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) combined with a local minimum search (multivariable Newton's method). Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
  214. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.

  215. Development and validation of techniques for improving software dependability

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1992-01-01

    A collection of document abstracts is presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.

  216. A Survey of Educational Data-Mining Research

    ERIC Educational Resources Information Center

    Huebner, Richard A.

    2013-01-01

    Educational data mining (EDM) is an emerging discipline that focuses on applying data mining tools and techniques to educationally related data. The discipline focuses on analyzing educational data to develop models for improving learning experiences and improving institutional effectiveness. A literature review on educational data mining topics…
  217. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
  218. Improving medium-range ensemble streamflow forecasts through statistical post-processing

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainty, e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, the University of Washington, the U.S. Army Corps of Engineers, and the U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness, and resolution.
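    Quantile mapping, one of the post-processors in this comparison, replaces each raw forecast with the observed-climatology value at the quantile the forecast occupies in the forecast climatology. A minimal sketch with synthetic climatologies (illustrative distributions, not the SHARP data):

      import numpy as np

      def quantile_map(raw, fcst_clim, obs_clim):
          # Empirical quantile of each raw forecast within the forecast
          # climatology, then the observed value at that same quantile.
          q = np.searchsorted(np.sort(fcst_clim), raw) / float(len(fcst_clim))
          return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))

      rng = np.random.default_rng(1)
      obs_clim = rng.gamma(2.0, 50.0, size=1000)     # observed flows (synthetic)
      fcst_clim = 0.7 * obs_clim + 20.0              # biased model climatology
      print(quantile_map(np.array([40.0, 90.0, 250.0]), fcst_clim, obs_clim))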
  219. Do High Dynamic Range treatments improve the results of Structure from Motion approaches in Geomorphology?

    NASA Astrophysics Data System (ADS)

    Gómez-Gutiérrez, Álvaro; Juan de Sanjosé-Blasco, José; Schnabel, Susanne; de Matías-Bejarano, Javier; Pulido-Fernández, Manuel; Berenguer-Sempere, Fernando

    2015-04-01

    In this work, the hypothesis that 3D models obtained with Structure from Motion (SfM) approaches can be improved by using images pre-processed with High Dynamic Range (HDR) techniques is tested. Photographs of the Veleta Rock Glacier in Spain were captured with different exposure values (EV0, EV+1 and EV-1), two focal lengths (35 and 100 mm), and under different weather conditions for the years 2008, 2009, 2011, 2012 and 2014. HDR images were produced using the different EV steps within the Fusion F.1 software. Point clouds were generated using commercial and freely available SfM software: Agisoft Photoscan and 123D Catch. Models obtained using pre-processed and non-preprocessed images were compared in a 3D environment with a benchmark 3D model obtained by means of a Terrestrial Laser Scanner (TLS). A total of 40 point clouds were produced, georeferenced, and compared. Results indicated that for the Agisoft Photoscan software, differences in accuracy between models obtained with pre-processed and non-preprocessed images were not statistically significant. However, in the case of the freely available software 123D Catch, models obtained using images pre-processed by HDR techniques presented a higher point density and were more accurate. This tendency was observed across the 5 studied years and under different capture conditions. More work should be done in the near future to corroborate whether the results of similar software packages (e.g. ARC3D, Bundler and PMVS2, CMP SfM, Photosynth and VisualSFM) can be improved by HDR techniques.
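    The HDR pre-processing step amounts to fusing the EV-1/EV0/EV+1 brackets into one well-exposed image. Since the algorithm inside the Fusion F.1 software is not described in the abstract, the sketch below uses a generic Mertens-style well-exposedness weighting purely to illustrate the operation:

      import numpy as np

      def fuse(exposures, sigma=0.2):
          # Blend bracketed exposures with per-pixel weights that favor
          # mid-range (well-exposed) values, then renormalize.
          stack = np.stack(exposures).astype(float) / 255.0
          w = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
          w /= w.sum(axis=0, keepdims=True) + 1e-12
          return (w * stack).sum(axis=0)             # fused image in [0, 1]

      rng = np.random.default_rng(2)
      scene = rng.uniform(0.0, 1.0, size=(64, 64))   # synthetic radiance map
      brackets = [np.clip(scene * g * 255.0, 0.0, 255.0) for g in (0.5, 1.0, 2.0)]
      fused = fuse(brackets)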
  220. Designing and benchmarking the MULTICOM protein structure prediction system

    PubMed Central

    2013-01-01

    Background: Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction, ranging from structural feature prediction, template identification, and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate, and improve the strengths of complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results: Over the past several years, we have constructed a standalone protein structure prediction system, MULTICOM, that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process: template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions: Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819

  221. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

    This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis are discussed: use of an extended-capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (a parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest; however, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.
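    Fitting an oscillatory slug-test record is, at its core, a nonlinear least-squares problem. The sketch below fits a linearized underdamped response to synthetic data; the paper's four-parameter nonlinear model (K, A, β, V0) would instead require integrating a nonlinear differential equation inside the fit:

      import numpy as np
      from scipy.optimize import curve_fit

      def head(t, h0, zeta, wn):
          # Underdamped (oscillatory) response of the water column.
          wd = wn * np.sqrt(1.0 - zeta**2)
          return h0 * np.exp(-zeta * wn * t) * np.cos(wd * t)

      t = np.linspace(0.0, 30.0, 300)
      data = head(t, 0.5, 0.15, 1.2) + 0.01 * np.random.randn(t.size)
      (h0, zeta, wn), _ = curve_fit(head, t, data, p0=[0.4, 0.1, 1.0])
      print(h0, zeta, wn)      # amplitude, damping ratio, natural frequency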
225. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model is key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially GCSI problems for aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study of the Kriging, SVR, and KELM models is reported, together with an analysis of the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and that its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model did not always improve with increasing numbers of training samples; using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
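A surrogate of this kind is trained on a modest set of simulator runs and then answers queries inside the optimization loop at negligible cost. Since KELM has no standard scikit-learn implementation, the sketch below uses the SVR surrogate also named in the abstract, with synthetic stand-in data and a small grid search for the parameter-optimization step the study highlights.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for expensive simulator runs: inputs are candidate
        # source parameters, output is a simulated concentration at an observation well.
        rng = np.random.default_rng(1)
        X_train = rng.uniform(0.0, 1.0, size=(200, 4))   # e.g. source x, y, release rate, time
        y_train = np.sin(3 * X_train[:, 0]) * X_train[:, 2] + 0.1 * rng.normal(size=200)

        # Parameter optimization (here a small grid search) noticeably affects accuracy.
        surrogate = GridSearchCV(
            make_pipeline(StandardScaler(), SVR(kernel="rbf")),
            param_grid={"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0]},
            cv=5,
        )
        surrogate.fit(X_train, y_train)
        print(surrogate.best_params_, surrogate.best_score_)
        # The fitted surrogate now replaces the simulator inside the optimization loop.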
226. Daily pan evaporation modelling using a neuro-fuzzy computing technique

    NASA Astrophysics Data System (ADS)

    Kişi, Özgür

    2006-10-01

    Evaporation, as a major component of the hydrologic cycle, is important in water resources development and management. This paper investigates the ability of the neuro-fuzzy (NF) technique to improve the accuracy of daily evaporation estimation. Five different NF models, comprising various combinations of daily climatic variables (air temperature, solar radiation, wind speed, pressure and humidity), are developed to evaluate the degree of effect of each of these variables on evaporation. A comparison is made between the estimates provided by the NF model and by artificial neural networks (ANNs). The Stephens-Stewart (SS) method is also considered for the comparison. Various statistical measures are used to evaluate the performance of the models. Based on the comparisons, it was found that the NF computing technique could be employed successfully in modelling the evaporation process from the available climatic data. The ANN was also found to perform better than the SS method.
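Neuro-fuzzy (ANFIS-style) models have no standard scikit-learn implementation, but the ANN comparison baseline in the study above can be sketched directly; the five climate inputs and the data here are synthetic placeholders, not the paper's records.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        # Hypothetical daily records: air temperature, solar radiation, wind speed,
        # pressure, humidity -> pan evaporation (all synthetic here).
        rng = np.random.default_rng(2)
        X = rng.normal(size=(1000, 5))
        y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 2] + 0.05 * rng.normal(size=1000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        ann = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
        )
        ann.fit(X_tr, y_tr)
        print("R^2 on held-out days:", ann.score(X_te, y_te))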
227. An Integrated Environment for Efficient Formal Design and Verification

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.

228. Simulation and Modeling in High Entropy Alloys

    NASA Astrophysics Data System (ADS)

    Toda-Caraballo, I.; Wróbel, J. S.; Nguyen-Manh, D.; Pérez, P.; Rivera-Díaz-del-Castillo, P. E. J.

    2017-11-01

    High entropy alloys (HEAs) are a fascinating field of research, with an increasing number of new alloys discovered. This would hardly be conceivable without the aid of materials modeling and computational alloy design to investigate the immense compositional space. The simplicity of the microstructure achieved contrasts with the enormous complexity of its composition, which, in turn, increases the variety of property behavior observed. Simulation and modeling techniques are of paramount importance in the understanding of such material performance. There are numerous examples of how different models have explained observed experimental results; yet there are theories and approaches developed for conventional alloys, where one element is predominant, that need to be adapted or re-developed. In this paper, we review the current state of the art of the modeling techniques applied to explain HEA properties, identifying potential new areas of research to improve the predictability of these techniques.

229. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and can therefore improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data, enabling operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14 and 15]. This paper describes our context assessment software development and a demonstration of context assessment of non-traditional data against an intelligence, surveillance and reconnaissance fusion product based upon an IED POI workflow.
230. Redundancy management of electrohydraulic servoactuators by mathematical model referencing

    NASA Technical Reports Server (NTRS)

    Campbell, R. A.

    1971-01-01

    A description of a mathematical model reference system is presented which provides redundancy management for an electrohydraulic servoactuator. The mathematical model includes a compensation network that calculates reference parameter perturbations induced by external disturbance forces. This is accomplished by using measured pressure differential data taken from the physical system. The technique was experimentally verified by tests performed using the H-1 engine thrust vector control system for Saturn IB; the results of these tests are included in this report. It was concluded that this technique improves the tracking accuracy of the model reference system to the extent that redundancy management of electrohydraulic servosystems may be performed using this method.

231. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, and related fields, and some scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation, and a number of new systems are coming onto the market.

232. Improvement of modulation bandwidth in electroabsorption-modulated laser by utilizing the resonance property in bonding wire

    PubMed

    Kwon, Oh Kee; Han, Young Tak; Baek, Yong Soon; Chung, Yun C

    2012-05-21

    We present and demonstrate a simple and cost-effective technique for improving the modulation bandwidth of an electroabsorption-modulated laser (EML). This technique utilizes the RF resonance caused by the EML chip (i.e., its junction capacitance) and the bonding wire (i.e., its wire inductance). We analyze the effects of the lengths of the bonding wires on the frequency responses of the EML by using an equivalent circuit model. To verify this analysis, we package a lumped EML chip on the sub-mount and measure its frequency responses. The results show that, by using the proposed technique, we can increase the modulation bandwidth of the EML from ~16 GHz to ~28 GHz.
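The bandwidth extension in the EML entry above follows from ordinary series L-C resonance between the bonding-wire inductance and the junction capacitance. A back-of-the-envelope check, with component values that are assumptions chosen only to show the scale, not figures from the paper:

        import math

        C_j = 0.25e-12    # junction capacitance [F] (assumed)
        L_wire = 0.15e-9  # bonding-wire inductance [H] (assumed)

        # Series L-C resonance frequency; placing this peaking near the roll-off
        # of the intrinsic response is what extends the usable modulation bandwidth.
        f_res = 1 / (2 * math.pi * math.sqrt(L_wire * C_j))
        print(f"resonance near {f_res / 1e9:.1f} GHz")   # ~26 GHz for these values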
233. Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects

    NASA Astrophysics Data System (ADS)

    Jarndal, Anwar; Ghannouchi, Fadhel M.

    2016-09-01

    In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
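The automatic parameter extraction step above is a global evolutionary search over the model's fitting parameters. The sketch below uses SciPy's differential evolution, a close relative of the genetic algorithm named in the abstract, on a toy drain-current model; the functional form and the "measured" data are illustrative only.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Toy drain-current model I_d(V_gs; vth, beta, lam) standing in for the HEMT model.
        def i_d(vgs, vth, beta, lam):
            v = np.maximum(vgs - vth, 0.0)
            return beta * v**2 * (1 + lam * vgs)

        vgs = np.linspace(-3.0, 1.0, 50)
        noise = 0.002 * np.random.default_rng(3).normal(size=vgs.size)
        measured = i_d(vgs, vth=-1.8, beta=0.4, lam=0.05) + noise

        def cost(p):
            # sum-of-squares error between model and measurement
            return float(np.sum((i_d(vgs, *p) - measured) ** 2))

        result = differential_evolution(cost, bounds=[(-4, 0), (0.01, 2.0), (0.0, 0.5)], seed=0)
        print("fitted (vth, beta, lam):", result.x)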
234. [Cornea transplant]

    PubMed

    Garralda, A; Epelde, A; Iturralde, O; Compains, E; Maison, C; Altarriba, M; Goldaracena, M B; Maraví-Poma, E

    2006-01-01

    Keratoplasty, or cornea transplant, is one of the oldest surgical techniques in ophthalmology. Its indications are: 1) tectonic, to preserve corneal anatomy and integrity; 2) clinical, to eliminate inflamed corneal tissue in cases refractory to medical treatment; 3) optical, to improve visual acuity; and 4) cosmetic, to improve the appearance of the eye. Improvements in technique and instruments, as well as in post-operative treatment and the means of preserving donated tissue, have improved the survival of grafts. The Pamplona model of transplant coordination of the Virgen del Camino Hospital is considered original and unique in Spain. The logistics of this program include the protocol for the detection and extraction of corneas as well as for keratoplasties.

235. Methods and benefits of experimental seismic evaluation of nuclear power plants. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-07-01

    This study reviews experimental techniques, instrumentation requirements, safety considerations, and benefits of performing vibration tests on nuclear power plant containments and internal components. The emphasis is on testing to improve seismic structural models. Techniques for the identification of resonant frequencies, damping, and mode shapes are discussed. The benefits of testing with regard to increased damping and more accurate computer models are outlined. A test plan, schedule, and budget are presented for a typical PWR nuclear power plant.

236. Error analysis of Dobson spectrophotometer measurements of the total ozone content

    NASA Technical Reports Server (NTRS)

    Holland, A. C.; Thomas, R. W. L.

    1975-01-01

    A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct-sun measurements, and of the ozone amounts inferred from those measurements, to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of the program, together with a series of standard model atmospheres.

237. Kinematic and kinetic improvements associated with action observation facilitated learning of the power clean in Australian footballers

    PubMed

    Sakadjian, Alex; Panchuk, Derek; Pearce, Alan J

    2014-06-01

    This study investigated the effectiveness of action observation (AO) in facilitating learning of the power clean technique (kinematics) compared with traditional strength coaching methods, and whether improvements in performance (kinetics) were associated with improvement in lifting technique. Fifteen subjects (age, 20.9 ± 2.3 years) with no experience in performing the power clean exercise attended 12 training and testing sessions over a 4-week period. Subjects were assigned to 2 matched groups, based on preintervention power clean performance, and performed 3 sets of 5 repetitions of the power clean exercise at each training session. Subjects in the traditional coaching group (TC; n = 7) received standard coaching feedback (verbal cues and physical practice), whereas subjects in the AO group (n = 8) received similar verbal coaching cues and physical practice but also observed a video of a skilled model before performing each set. Kinematic data were collected from video recordings of subjects fitted with joint center markings during testing, whereas kinetic data were collected from a weightlifting analyzer attached to the barbell. Subjects were tested before the intervention, at the end of weeks 2 and 3, and after the intervention at the end of week 4. Faster improvements (3%) were observed in power clean technique with AO-facilitated learning in the first week, and performance improvements (mean peak power of the subject's 15 repetitions) over time were significant (p < 0.001). In addition, performance improvement was significantly associated (R = 0.215) with technique improvements. In conclusion, AO combined with verbal coaching and physical practice of the power clean exercise resulted in significantly faster technique improvements and improvement in performance compared with traditional coaching methods.
238. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater, for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a "task-based imaging" approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
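Most of the model-based methods surveyed in this session minimize a penalized weighted least-squares objective; the generic form below is a common textbook statement, not any one speaker's specific formulation:

        \hat{x} \;=\; \operatorname*{arg\,min}_{x \,\ge\, 0}\;
        (y - A x)^{\top} W \,(y - A x) \;+\; \beta\, R(x)

Here A is the forward model of the imaging system, W carries the photon statistics, and R(x) is a regularizer that can encode prior information such as patient-specific prior images; the parameter β trades data fidelity against the prior.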
239. Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers

    PubMed Central

    Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng

    2014-01-01

    Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty of obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious way to build a speech acoustic model of impaired speech is to employ adaptation techniques. However, two issues have not been addressed in existing studies of adaptation for speech impairment: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates these two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques such as maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model is measured in terms of word error rate (WER), with further assessments including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality speech-impaired data, improves the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on statistical analysis of the WER. Phoneme substitution was found to be the biggest contributing factor to the WER in dysarthric speech at all levels of severity. The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004

240. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    NASA Astrophysics Data System (ADS)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flooding is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent and mitigate flood damage through early warning systems and river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells that are potentially flooded at the initial stage (such as floodplains near river channels); then, if a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet and dry interface, and overall computation time is reduced by skipping the non-flooded area. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented within 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in a factor of two to ten less time while giving the same results as the simulation without the ADU method.
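The ADU bookkeeping described above amounts to growing a boolean computation mask outward from newly wetted cells. A minimal sketch on a structured grid follows; the array names and the activation rule are illustrative, not the authors' implementation.

        import numpy as np

        def update_active_domain(active, depth, tol=1e-6):
            """Register the neighbours of every wet, already-registered cell."""
            ny, nx = depth.shape
            newly_registered = []
            # Scan only registered cells that are currently wet (the wet/dry front).
            for j, i in zip(*np.nonzero(active & (depth > tol))):
                for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx and not active[jj, ii]:
                        active[jj, ii] = True
                        newly_registered.append((jj, ii))
            return newly_registered

        # Tiny demo: one wet seed cell activates its four neighbours. In a real
        # model this runs once per time step, after the flow solver has updated
        # depths on the currently active cells only.
        active = np.zeros((5, 5), dtype=bool)
        depth = np.zeros((5, 5))
        active[2, 2], depth[2, 2] = True, 0.3
        print(update_active_domain(active, depth))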
241. Challenging Aerospace Problems for Intelligent Systems

    DTIC Science & Technology

    2003-06-01

    ... importance of each rule. Techniques such as logarithmic regression or Saaty's AHP may be employed to apply the weights onto the fuzzy rules. ... Given u... at which designs could be evaluated. This implies that modeling techniques such as neural networks, fuzzy systems and so on can play an important role ... failure conditions [4-6]. These approaches apply techniques, such as neural networks, fuzzy logic, and parameter identification, to improve aircraft ...
242. Modeling Success: Using Preenrollment Data to Identify Academically At-Risk Students

    ERIC Educational Resources Information Center

    Gansemer-Topf, Ann M.; Compton, Jonathan; Wohlgemuth, Darin; Forbes, Greg; Ralston, Ekaterina

    2015-01-01

    Improving student success and degree completion is one of the core principles of strategic enrollment management. To address this principle, institutional data were used to develop a statistical model to identify academically at-risk students. The model employs multiple linear regression techniques to predict students at risk of earning below a…

243. Summary and Findings of the ARL Dynamic Failure Forum

    DTIC Science & Technology

    2016-09-29

    ... short beam shear, quasi-static indentation, depth of penetration, and V50 limit velocity. Experimental technique suggestions for improvement included ... art in experimental, theoretical, and computational studies of dynamic failure. The forum also focused on identifying technologies and approaches ... Army-specific problems. Experimental exploration of material behavior and an improved ability to parameterize material models is essential to improving ...

244. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological function to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive-search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
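The dimensionality reduction at the heart of this separable formulation can be illustrated with a variable-projection fit: for any trial value of the nonlinear parameters, the linear amplitudes have a closed-form least-squares solution, so the outer search only explores the nonlinear axes. A toy two-exponential example follows; the basis functions and data are illustrative, not the paper's compartment equations.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 10.0, 200)

        def basis(theta):
            # Two decaying exponentials standing in for compartment-model responses.
            return np.column_stack([np.exp(-theta[0] * t), np.exp(-theta[1] * t)])

        true_theta, true_amp = np.array([0.3, 1.5]), np.array([2.0, 1.0])
        y = basis(true_theta) @ true_amp + 0.01 * rng.normal(size=t.size)

        def projected_residual(theta):
            B = basis(theta)
            amp, *_ = np.linalg.lstsq(B, y, rcond=None)  # inner linear solve for amplitudes
            r = y - B @ amp
            return float(r @ r)

        # The outer optimizer searches only the 2 nonlinear parameters, not all 4.
        fit = minimize(projected_residual, x0=[0.1, 1.0], method="Nelder-Mead")
        print("nonlinear params:", fit.x)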
245. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological function to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive-search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
246. A Global Optimization Methodology for Rocket Propulsion Applications

    NASA Technical Reports Server (NTRS)

    2001-01-01

    While the response surface (RS) method is an effective method in engineering optimization, its accuracy is often affected by the use of a limited number of data points for model construction. In this chapter, the issues related to the accuracy of RS approximations and possible ways of improving the RS model using appropriate treatments, including the iteratively re-weighted least squares (IRLS) technique and radial-basis neural networks, are investigated. A main interest is to identify ways to offer added capabilities for the RS method so that it can at least selectively improve accuracy in regions of importance. An example is to target the high-efficiency region of a fluid machinery design space so that the predictive power of the RS can be maximized where it matters most. Analytical models based on polynomials, with controlled levels of noise, are used to assess the performance of these techniques.

247. Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose

    NASA Astrophysics Data System (ADS)

    Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional technique with a single exposure degrades the efficiency of tumor detection due to overlapping structures. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase in radiation dose and inaccuracy in material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues of the conventional technique and of EIDs by using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy as well as reduced radiation dose compared with the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD slightly improved as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
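Dual-energy material decomposition can be sketched in its simplest linearized, monoenergetic form, which falls well short of the polychromatic model used in the study above: two energy bins give two log-attenuation measurements that are a linear mix of two basis-material thicknesses. The attenuation coefficients below are round illustrative numbers, not calibrated values.

        import numpy as np

        # Measured log attenuations m_E are modelled as
        #   m_E = mu_tissue(E) * t_tissue + mu_iodine(E) * t_iodine,
        # so the two thicknesses follow from a 2x2 linear solve.
        mu = np.array([[0.25, 4.0],    # low-energy bin:  [tissue, iodine] in 1/cm (assumed)
                       [0.20, 1.5]])   # high-energy bin: [tissue, iodine] in 1/cm (assumed)

        t_true = np.array([4.0, 0.02])  # cm of tissue, cm of iodine-equivalent
        m = mu @ t_true                 # noise-free "measurements" for the demo

        t_est = np.linalg.solve(mu, m)
        print("estimated thicknesses:", t_est)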
248. Balancing Treatment and Control Groups in Quasi-Experiments: An Introduction to Propensity Scoring

    ERIC Educational Resources Information Center

    Connelly, Brian S.; Sackett, Paul R.; Waters, Shonna D.

    2013-01-01

    Organizational and applied sciences have long struggled with improving causal inference in quasi-experiments. We introduce organizational researchers to propensity scoring, a statistical technique that has become popular in other applied sciences as a means for improving internal validity. Propensity scoring statistically models how individuals in…
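The essential mechanics of propensity scoring fit in a few lines: model the probability of treatment from observed covariates, then compare treated and control units with similar scores. A sketch with synthetic data follows, using greedy 1:1 nearest-propensity matching with replacement, which is purely illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 3))                           # observed covariates
        treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # selection depends on X

        # Propensity score: estimated probability of treatment given covariates.
        ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

        # Match each treated unit to the control with the closest propensity score.
        controls = np.flatnonzero(treated == 0)
        matches = {i: controls[np.argmin(np.abs(ps[controls] - ps[i]))]
                   for i in np.flatnonzero(treated == 1)}
        print(f"{len(matches)} treated units matched on propensity")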
249. Utilizing Improvisation to Teach Empathy Skills in Counselor Education

    ERIC Educational Resources Information Center

    Bayne, Hannah B.; Jangha, Awa

    2016-01-01

    Empathy development is foundational to counselor training, yet there is scant research on techniques for teaching empathy aside from traditional microskills models. The authors discuss empathy as a skill set, highlight how improvisation (improv) can be used to enhance training, and describe how to incorporate improv activities within the classroom.

250. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

    Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained layer (SCL) damping treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared with the initial system.

251. Thermal characterization assessment of rigid and flexible water models in a nanogap using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Akıner, Tolga; Mason, Jeremy; Ertürk, Hakan

    2017-11-01

    The thermal properties of the TIP3P and TIP5P water models are investigated using equilibrium and non-equilibrium molecular dynamics techniques in the presence of solid surfaces. The performance of the non-equilibrium technique for rigid molecules is found to depend significantly on the distribution of atomic degrees of freedom. An improved approach to distributing atomic degrees of freedom is proposed, for which the thermal conductivity of the TIP5P model agrees more closely with equilibrium molecular dynamics and experimental results than the existing state of the art.
252. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    NASA Astrophysics Data System (ADS)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited, as they were mostly developed upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique, and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case studies four storm events observed in the Portobello catchment (53 km²) (Edinburgh, UK) during 2011, for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model, were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in the local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.

253. Reconstructing extreme AMOC events through nudging of the ocean surface: A perfect model approach

    NASA Astrophysics Data System (ADS)

    Ortega, Pablo; Guilyardi, Eric; Swingedouw, Didier; Mignot, Juliette; Nguyen, Sebastien

    2017-04-01

    While the Atlantic Meridional Overturning Circulation (AMOC) is thought to be a crucial component of North Atlantic climate and its predictability, past changes in its strength are challenging to quantify, and only limited information is available. In this study, we use a perfect model approach with the IPSL-CM5A-LR model to assess the performance of several surface nudging techniques in reconstructing the variability of the AMOC. Special attention is given to the reproducibility of an extreme positive AMOC peak from a preindustrial control simulation. Nudging includes standard relaxation techniques towards the sea surface temperature and salinity anomalies of this target control simulation, and/or the prescription of the wind-stress fields. Surface nudging approaches using standard fixed restoring terms succeed in reproducing most of the target AMOC variability, including the timing of the extreme event, but systematically underestimate its amplitude. A detailed analysis of the AMOC variability mechanisms reveals that the underestimation of the extreme AMOC maximum comes from a deficit in the formation of the dense water masses in the main convection region, located south of Iceland in the model. This issue is largely corrected after introducing a novel surface nudging approach, which uses a varying restoring coefficient that is proportional to the simulated mixed layer depth and, in essence, keeps the restoring time scale constant. This new technique substantially improves water mass transformation in the regions of convection and, in particular, the formation of the densest waters, which are key for the representation of the AMOC extreme. It is therefore a promising strategy that may help to better initialize the AMOC variability and other ocean features in models, and thus improve decadal climate predictions. As this restoring technique only uses surface data, for which better and longer observations are available, it opens up opportunities for improved reconstructions of the AMOC over the last few decades.
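The rationale for the mixed-layer-dependent restoring coefficient can be stated in one line. With a restoring heat flux Q = γ(SST_target − SST_model), the relaxation time scale of a mixed layer of depth h_ml is τ = ρ c_p h_ml / γ, so a fixed γ makes deep mixed layers relax slowly; choosing γ proportional to h_ml keeps τ constant (the reference symbols γ_0 and h_ref below are illustrative, not the paper's notation):

        \tau = \frac{\rho\, c_p\, h_{\mathrm{ml}}}{\gamma},
        \qquad
        \gamma(h_{\mathrm{ml}}) = \gamma_0\, \frac{h_{\mathrm{ml}}}{h_{\mathrm{ref}}}
        \quad\Longrightarrow\quad
        \tau = \frac{\rho\, c_p\, h_{\mathrm{ref}}}{\gamma_0} = \mathrm{const.}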
254. Zebrafish Models of Human Leukemia: Technological Advances and Mechanistic Insights

    PubMed

    Harrison, Nicholas R; Laroche, Fabrice J F; Gutierrez, Alejandro; Feng, Hui

    2016-01-01

    Insights concerning leukemic pathophysiology have been acquired in various animal models, and further efforts to understand the mechanisms underlying leukemic treatment resistance and disease relapse promise to improve therapeutic strategies. The zebrafish (Danio rerio) is a vertebrate organism with a conserved hematopoietic program and unique experimental strengths suiting it for the investigation of human leukemia. Recent technological advances in zebrafish research, including efficient transgenesis, precise genome editing, and straightforward transplantation techniques, have led to the generation of a number of leukemia models. The transparency of the zebrafish, when coupled with improved lineage-tracing and imaging techniques, has revealed exquisite details of leukemic initiation, progression, and regression. With these advantages, the zebrafish represents a unique experimental system for leukemic research; additionally, advances in zebrafish-based high-throughput drug screening promise to hasten the discovery of novel leukemia therapeutics. To date, investigators have accumulated knowledge of the genetic underpinnings critical to leukemic transformation and treatment resistance, and without doubt, zebrafish are rapidly expanding our understanding of disease mechanisms and helping to shape therapeutic strategies for improved outcomes in leukemic patients.

255. Zebrafish Models of Human Leukemia: Technological Advances and Mechanistic Insights

    PubMed Central

    Harrison, Nicholas R.; Laroche, Fabrice J.F.; Gutierrez, Alejandro

    2016-01-01

    Insights concerning leukemic pathophysiology have been acquired in various animal models, and further efforts to understand the mechanisms underlying leukemic treatment resistance and disease relapse promise to improve therapeutic strategies. The zebrafish (Danio rerio) is a vertebrate organism with a conserved hematopoietic program and unique experimental strengths suiting it for the investigation of human leukemia. Recent technological advances in zebrafish research, including efficient transgenesis, precise genome editing, and straightforward transplantation techniques, have led to the generation of a number of leukemia models. The transparency of the zebrafish, when coupled with improved lineage-tracing and imaging techniques, has revealed exquisite details of leukemic initiation, progression, and regression. With these advantages, the zebrafish represents a unique experimental system for leukemic research; additionally, advances in zebrafish-based high-throughput drug screening promise to hasten the discovery of novel leukemia therapeutics. To date, investigators have accumulated knowledge of the genetic underpinnings critical to leukemic transformation and treatment resistance, and without doubt, zebrafish are rapidly expanding our understanding of disease mechanisms and helping to shape therapeutic strategies for improved outcomes in leukemic patients. PMID:27165361
256. Improved transcranial magnetic stimulation coil design with realistic head modeling

    NASA Astrophysics Data System (ADS)

    Crowther, Lawrence; Hadimani, Ravi; Jiles, David

    2013-03-01

    We are investigating transcranial magnetic stimulation (TMS), a noninvasive technique based on electromagnetic induction that stimulates neurons in the brain. TMS can be used as a pain-free alternative to conventional electroconvulsive therapy (ECT), which is still widely implemented for the treatment of major depression. Development of improved TMS coils capable of stimulating subcortical regions could also allow TMS to replace invasive deep brain stimulation (DBS), which requires surgical implantation of electrodes in the brain. Our new designs allow new applications of the technique to be established for a variety of diagnostic and therapeutic applications in psychiatric disorders and neurological diseases. Calculation of the fields generated inside the head is vital for the use of this method in treatment. In prior work we implemented a realistic head model, incorporating inhomogeneous tissue structures and electrical conductivities, allowing the site of neuronal activation to be accurately calculated. We will show how we utilize this model in the development of novel TMS coil designs to improve the depth of penetration and the localization of stimulation produced by stimulator coils.

257. Improving the quality of learning in science through optimization of lesson study for learning community

    NASA Astrophysics Data System (ADS)

    Setyaningsih, S.

    2018-03-01

    Lesson Study for Learning Community is a system for building the lecturing profession through collaborative and continuous study of learning, based on the principles of openness, collegiality, and mutual learning, with the aim of forming a professional learning community. Achieving this requires a suitable strategy and learning method with specific supporting techniques. This paper describes how the quality of learning in science can be improved by implementing such strategies and methods, namely by applying lesson study for learning community optimally. Initially this research focused on the study of instructional techniques. The learning methods used were the Contextual Teaching and Learning (CTL) model and the Problem-Based Learning (PBL) model. The results showed a significant increase in competence, attitudes, and psychomotor skills in the four study programs that were modelled. It can therefore be concluded that implementing learning strategies within Lesson Study for Learning Community is needed to improve the competence, attitudes and psychomotor skills of science students.
    This paper describes how the quality of learning in science can be improved by implementing suitable strategies and methods, namely by applying lesson study for learning community optimally. The research initially focused on the study of instructional techniques. The learning models used were Contextual Teaching and Learning (CTL) and Problem Based Learning (PBL). The results showed a significant increase in competence, attitudes, and psychomotor skills in the four study programs that were modelled. It can therefore be concluded that implementing these learning strategies within Lesson Study for Learning Community is needed to improve the competence, attitudes, and psychomotor skills of science students.

257. Techniques for Improved Retrospective Fine-scale Meteorology

    EPA Science Inventory

    The Pleim-Xiu Land-Surface Model (PX LSM) was developed for retrospective meteorological simulations that drive chemical transport models. One of the key features of the PX LSM is indirect soil moisture and temperature nudging. The idea is to provide a three-hourly 2-m temperature ...
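For orientation, the nudging idea above amounts to Newtonian relaxation of a model field toward an analysis. A minimal sketch, with an invented relaxation timescale and toy temperature fields rather than the PX LSM's actual indirect-nudging equations:

    import numpy as np

    def nudge(model_field, analysis_field, dt, tau=3.0 * 3600.0):
        """Relax a model field toward an analysis with timescale tau (seconds)."""
        return model_field + (dt / tau) * (analysis_field - model_field)

    t_model = np.array([285.0, 287.5, 290.2])  # model 2-m temperature (K)
    t_anal = np.array([284.2, 288.0, 289.5])   # gridded analysis (K)
    print(nudge(t_model, t_anal, dt=600.0))    # one 10-minute model step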
258. Air Force and Army Corps of Engineers Improperly Managed the Award of Contracts for the Blue Devil Block 2 Persistent Surveillance System

    DTIC Science & Technology

    2013-09-19

    ... environments. This can include the development of new and/or improved analytical and numerical models, rapid data-processing techniques, and new subsurface imaging techniques that include active and passive sensor modalities in a variety of rural and urban terrains. Of particular interest is the broadband ...

259. Destruction or Loss of School Property: Analysis and Suggestions for Improvement of School Security

    ERIC Educational Resources Information Center

    Nelken, Ira; Kline, Sam

    In recent years the costs of school vandalism and the incidence of vandalism in the public schools have been rising. The study concerns itself with the application of production functions, Monte Carlo techniques, and Shannon's model of information theory to determine the most efficient use of vandalism-prevention techniques in a large school ...

260. Improved representation of situational awareness within a dismounted small combat unit constructive simulation

    NASA Astrophysics Data System (ADS)

    Lee, K. David; Colony, Mike

    2011-06-01

    Modeling and simulation has been established as a cost-effective means of supporting the development of requirements, exploring doctrinal alternatives, assessing system performance, and performing design trade-off analysis. The Army's constructive simulation for the evaluation of equipment effectiveness in small combat unit operations is currently limited to representation of situation awareness without inclusion of the many uncertainties associated with real-world combat environments. The goal of this research is to provide an ability to model situation awareness and decision process uncertainties in order to improve evaluation of the impact of battlefield equipment on ground soldier and small combat unit decision processes. Our Army Probabilistic Inference and Decision Engine (Army-PRIDE) system provides this required uncertainty modeling through the application of two critical techniques that allow Bayesian network technology to be applied to real-time applications: an object-oriented Bayesian network methodology and an object-oriented inference technique. In this research, we implement decision process and situation awareness models for a reference scenario using Army-PRIDE and demonstrate its ability to model a variety of uncertainty elements, including confidence of source, information completeness, and information loss. We also demonstrate that Army-PRIDE improves the realism of the current constructive simulation's decision processes through Monte Carlo simulation.
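The core of such uncertainty modeling is Bayesian updating of a belief given an imperfect report. A toy sketch of fusing a sensor report with a confidence-of-source model; all probabilities are invented for the example:

    p_threat = 0.2                      # prior P(threat present)
    p_report_given_threat = 0.9         # source detects a real threat
    p_report_given_no_threat = 0.15     # false-alarm rate (source confidence)

    # Total probability of receiving a report, then Bayes' rule
    p_report = (p_report_given_threat * p_threat
                + p_report_given_no_threat * (1.0 - p_threat))
    posterior = p_report_given_threat * p_threat / p_report
    print(f"P(threat | report) = {posterior:.3f}")

A full Bayesian network chains many such conditional tables together; this shows only the single-edge case.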
261. Improvements to the YbF electron electric dipole moment experiment

    NASA Astrophysics Data System (ADS)

    Sauer, B. E.; Rabey, I. M.; Devlin, J. A.; Tarbutt, M. R.; Ho, C. J.; Hinds, E. A.

    2017-04-01

    The standard model of particle physics predicts that the permanent electric dipole moment (EDM) of the electron is very nearly zero. Many extensions to the standard model predict an electron EDM just below current experimental limits. We are currently working to improve the sensitivity of the Imperial College YbF experiment. We have implemented combined laser-radiofrequency pumping techniques which both increase the number of molecules that participate in the EDM experiment and increase the probability of detection. Combined, these techniques give nearly two orders of magnitude increase in experimental sensitivity. At this enhanced sensitivity, magnetic effects which were previously negligible become important. We have developed a new way to construct the electrodes for the electric field plates which minimizes the effect of magnetic Johnson noise. The new YbF experiment is expected to be comparable in sensitivity to the most sensitive measurements of the electron EDM to date. We will also discuss laser cooling techniques which promise an even larger increase in sensitivity.

262. Position and Speed Control of Brushless DC Motors Using Sensorless Techniques and Application Trends

    PubMed Central

    Gamazo-Real, José Carlos; Vázquez-Sánchez, Ernesto; Gómez-Gil, Jaime

    2010-01-01

    This paper provides a technical review of position and speed sensorless methods for controlling Brushless Direct Current (BLDC) motor drives, including the background analysis using sensors, limitations and advances. The performance and reliability of BLDC motor drivers have been improved because conventional control and sensing techniques have been refined through sensorless technology. Sensorless advances are then reviewed and recent developments in this area are introduced, with their inherent advantages and drawbacks, including the analysis of practical implementation issues and applications. The study includes a deep overview of state-of-the-art back-EMF sensing methods, including Terminal Voltage Sensing, Third Harmonic Voltage Integration, Terminal Current Sensing, Back-EMF Integration and PWM strategies. Also, the most relevant techniques based on estimation and models are briefly analysed, such as the Sliding-mode Observer, Extended Kalman Filter, Model Reference Adaptive System, Adaptive observers (Full-order and Pseudoreduced-order) and Artificial Neural Networks.
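The simplest of the back-EMF methods surveyed above is zero-crossing detection on the floating phase. A sketch on a synthetic waveform, not a real drive signal:

    import numpy as np

    t = np.linspace(0.0, 0.1, 5000)                # 100 ms of samples
    back_emf = np.sin(2 * np.pi * 50 * t)          # idealized floating-phase EMF

    # Indices where the sign of the waveform flips between samples
    sign_changes = np.diff(np.signbit(back_emf).astype(int)) != 0
    crossings = t[:-1][sign_changes]
    # Commutation is typically scheduled ~30 electrical degrees after a crossing.
    print("first zero crossings (s):", crossings[:4])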
263. Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking

    NASA Technical Reports Server (NTRS)

    Cavada, Roberto; Pecheur, Charles

    2003-01-01

    This document reports on the activities carried out during a four-week visit of Roberto Cavada at the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, in which a diagnosability problem is reduced to a symbolic model checking problem. Section 2 contains a brief explanation of the major techniques currently used in symbolic model checking, and how these techniques can be tuned to obtain good performance when using model checking tools. Diagnosability is performed on large and structured models of real plants; Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of symbolic model checkers. Section 4 reports scalability results: three test cases are briefly presented, several parameters and techniques are applied to those test cases to produce comparison tables, and several model checkers are compared. Section 5 summarizes the application of diagnosability verification to a real application, where several properties were tested and the results highlighted. Finally, Section 6 draws some conclusions and outlines future lines of research.
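One standard way such a reduction works is the "twin plant": two copies of the system are run in lockstep and must agree on observable events; diagnosability fails if an ambiguous pair (one copy faulty, one not) is reachable. A toy sketch with an invented automaton; it only flags reachable ambiguous pairs, whereas a full check would also look for cycles of such pairs:

    from collections import deque

    # transitions: state -> [(event, next_state)]; 'f' is the unobservable fault
    trans = {
        "s0": [("f", "s1"), ("a", "s2")],
        "s1": [("a", "s2")],
        "s2": [("b", "s0")],
    }
    observable = {"a", "b"}

    def step(state, seen_fault):
        """All (obs_event, next_state, fault_flag) moves, skipping the fault."""
        out = []
        for ev, nxt in trans.get(state, []):
            if ev in observable:
                out.append((ev, nxt, seen_fault))
            else:  # silently take the unobservable fault transition first
                for ev2, nxt2 in trans.get(nxt, []):
                    if ev2 in observable:
                        out.append((ev2, nxt2, True))
        return out

    start = ("s0", False, "s0", False)
    frontier, seen, ambiguous = deque([start]), {start}, []
    while frontier:
        s1, f1, s2, f2 = frontier.popleft()
        if f1 != f2:
            ambiguous.append((s1, f1, s2, f2))
        for ev1, n1, nf1 in step(s1, f1):
            for ev2, n2, nf2 in step(s2, f2):
                if ev1 == ev2:          # copies must agree on observations
                    pair = (n1, nf1, n2, nf2)
                    if pair not in seen:
                        seen.add(pair)
                        frontier.append(pair)
    print("ambiguous pairs reached:", ambiguous[:3])

Symbolic model checkers perform this reachability search on sets of states encoded as BDDs rather than by explicit enumeration as above.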
264. The Use of Mathematical Modelling for Improving the Tissue Engineering of Organs and Stem Cell Therapy

    PubMed

    Lemon, Greg; Sjoqvist, Sebastian; Lim, Mei Ling; Feliu, Neus; Firsova, Alexandra B.; Amin, Risul; Gustafsson, Ylva; Stuewer, Annika; Gubareva, Elena; Haag, Johannes; Jungebluth, Philipp; Macchiarini, Paolo

    2016-01-01

    Regenerative medicine is a multidisciplinary field where continued progress relies on the incorporation of a diverse set of technologies from a wide range of disciplines within medicine, science and engineering. This review describes how one such technique, mathematical modelling, can be utilised to improve the tissue engineering of organs and stem cell therapy. Several case studies, taken from research carried out by our group, ACTREM, demonstrate the utility of mechanistic mathematical models in aiding the design and optimisation of protocols in regenerative medicine.

265. Predicting cyanobacterial abundance, microcystin, and geosmin in a eutrophic drinking-water reservoir using a 14-year dataset

    USGS Publications Warehouse

    Harris, Ted D.; Graham, Jennifer L.

    2017-01-01

    Cyanobacterial blooms degrade water quality in drinking-water supply reservoirs by producing toxic and taste-and-odor-causing secondary metabolites, which ultimately cause public health concerns and lead to increased treatment costs for water utilities. There have been numerous attempts to create models that predict cyanobacteria and their secondary metabolites, most using linear models; however, linear models are limited by assumptions about the data and have had limited success as predictive tools. Thus, lake and reservoir managers need improved modeling techniques that can accurately predict the large bloom events that have the highest impact on recreational activities and drinking-water treatment processes. In this study, we compared 12 unique linear and nonlinear regression modeling techniques for predicting cyanobacterial abundance and the cyanobacterial secondary metabolites microcystin and geosmin, using 14 years of physiochemical water quality data collected from Cheney Reservoir, Kansas. Support vector machine (SVM), random forest (RF), boosted tree (BT), and Cubist modeling techniques were the most predictive of the compared approaches. SVM, RF, and BT techniques successfully predicted cyanobacterial abundance, microcystin, and geosmin concentrations below 60,000 cells/mL, 2.5 µg/L, and 20 ng/L, respectively. Only Cubist modeling predicted maximum concentrations of cyanobacteria and geosmin; no modeling technique was able to predict maximum microcystin concentrations. Because maximum concentrations are a primary concern for lake and reservoir managers, Cubist modeling may help predict the largest and most noxious concentrations of cyanobacteria and their secondary metabolites.
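A hedged sketch of the nonlinear-regression approach compared above, using one of the named techniques (a random forest); the predictors and data are synthetic stand-ins for the study's physiochemical variables:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 3))        # e.g. temperature, nutrients, turbidity
    y = 10_000 * X[:, 0] ** 2 + 5_000 * X[:, 1] + rng.normal(0, 500, 500)  # cells/mL

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))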
266. The development of learning materials based on CORE model to improve students' learning outcomes in topic of Chemical Bonding

    NASA Astrophysics Data System (ADS)

    Avianti, R.; Suyatno; Sugiarto, B.

    2018-04-01

    This study aims to create appropriate learning materials based on the CORE (Connecting, Organizing, Reflecting, Extending) model to improve students' learning achievement in the topic of chemical bonding. The study used the 4-D model as its research design and a one-group pretest-posttest design for the treatment of the materials. The subject of the study was a set of teaching materials based on the CORE model, trialled with 30 students of a grade-10 science class. Data collection involved validation, observation, tests, and questionnaires. The findings were that (1) all the contents were valid, and (2) the practicality and effectiveness of all the contents were good. The conclusion of this research was that the CORE model is appropriate for improving students' learning outcomes when studying chemical bonding.

267. Do Bone Graft and Cracking of the Sclerotic Cavity Improve Fixation of Titanium and Hydroxyapatite-coated Revision Implants in an Animal Model?

    PubMed

    Elmengaard, Brian; Baas, Joergen; Jakobsen, Thomas; Kold, Soren; Jensen, Thomas B.; Bechtold, Joan E.; Soballe, Kjeld

    2017-02-01

    We previously introduced a manual surgical technique that makes small perforations (cracks) through the sclerotic bone shell that typically forms during the process of aseptic loosening (the "crack" revision technique). Perforating just the shell (without violating the proximal cortex) can maintain overall bone continuity while allowing marrow and vascular elements to access the implant surface. Because many revisions require bone graft to fill defects, we wanted to determine whether bone graft could further increase implant fixation beyond what we have experimentally shown with the crack technique alone. Also, because both titanium (Ti6Al4V) and hydroxyapatite (HA) implant surfaces are used in revisions, we wanted to determine their relative effectiveness in this model. We hypothesized that (1) allografted plasma-sprayed Ti6Al4V implants and (2) allografted plasma-sprayed HA-coated implants inserted with a crack revision technique each have better fixation compared with a noncrack revision technique. Under approval from our Institutional Animal Care and Use Committee, a female canine animal model was used to evaluate the uncemented revision technique (crack, noncrack) using paired contralateral implants, while implant surface (Ti6Al4V, HA) was qualitatively compared between the two (unpaired) series. All groups received bone allograft tightly packed around the implant. This revision model includes a cylindrical implant pistoning 500 μm in a 0.75-mm gap, with polyethylene particles, for 8 weeks; this engenders a bone and tissue response representative of the metaphyseal cancellous region of an aseptically loosened component. At 8 weeks, the original implants were revised and followed for an additional 4 weeks. Mechanical fixation was assessed by load, stiffness, and energy to failure when loaded in axial pushout. Histomorphometry was used to determine the amount and location of bone and fibrous tissue in the grafted gap. The grafted crack revision improved mechanical shear strength, stiffness, and energy to failure (a 27- to 69-fold increase for Ti6Al4V and a twofold increase for HA). The histomorphometric analysis demonstrated primarily fibrous membrane ongrowth at the implant surface and in the gap for the allografted Ti6Al4V noncrack revisions. For allografted HA noncrack revisions, bone ongrowth at the implant surface was observed, but fibrous tissue was also present in the inner gap.
    Although both Ti6Al4V and HA surfaces showed improved fixation with grafted crack revision, and Ti6Al4V achieved the highest percentage gain, HA demonstrated the strongest overall fixation. The results of this study suggest that novel osteoconductive or osteoinductive coatings, bone graft substitutes, or tissue-engineered constructs may further improve bone-implant fixation with the crack revision technique, but they require evaluation in a rigorous model such as the one presented here. This experimental study provides data on which to base clinical trials aimed at improving fixation of revision implants. Given the multifactorial nature of complex human revisions, such a protocoled clinical study is required to determine the clinical applicability of this approach.

268. Micro-computed tomography in murine models of cerebral cavernous malformations as a paradigm for brain disease

    PubMed

    Girard, Romuald; Zeineddine, Hussein A.; Orsbon, Courtney; Tan, Huan; Moore, Thomas; Hobson, Nick; Shenkar, Robert; Lightle, Rhonda; Shi, Changbin; Fam, Maged D.; Cao, Ying; Shen, Le; Neander, April I.; Rorrer, Autumn; Gallione, Carol; Tang, Alan T.; Kahn, Mark L.; Marchuk, Douglas A.; Luo, Zhe-Xi; Awad, Issam A.

    2016-09-15

    Cerebral cavernous malformations (CCMs) are hemorrhagic brain lesions for which murine models have allowed major mechanistic discoveries, ushering in genetic manipulations and preclinical assessment of therapies. Histology for lesion counting and morphometry is essential yet tedious and time consuming. We herein describe the application and validation of X-ray micro-computed tomography (micro-CT), a non-destructive technique allowing three-dimensional CCM lesion counting and volumetric measurements, in transgenic murine brains. We describe a new contrast-soaking technique not previously applied to murine models of CCM disease. A volumetric segmentation and image-processing paradigm allowed histologic correlations and quantitative validations not previously reported with the micro-CT technique in brain vascular disease. Twenty-two hyper-dense areas on micro-CT images, identified as CCM lesions, were matched by histology. The inter-rater reliability analysis showed strong consistency in CCM lesion identification and staging (K = 0.89, p < 0.0001) between the two techniques. Micro-CT revealed 29% greater CCM lesion detection efficiency and an 80% improvement in time efficiency. Serial integrated lesional area by histology showed a strong positive correlation with micro-CT estimated volume (r^2 = 0.84, p < 0.0001). Micro-CT allows high-throughput assessment of lesion count and volume in preclinical murine models of CCM. This approach complements histology with improved accuracy and efficiency, and can be applied to lesion burden assessment in other brain diseases.
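The inter-rater statistic quoted above (K = 0.89) is Cohen's kappa, which corrects raw agreement for chance. A small sketch with invented stagings, not the study's data:

    def cohens_kappa(r1, r2):
        cats = sorted(set(r1) | set(r2))
        n = len(r1)
        po = sum(a == b for a, b in zip(r1, r2)) / n                   # observed
        pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
        return (po - pe) / (1 - pe)

    micro_ct  = ["stage1", "stage2", "stage2", "stage1", "stage2", "stage1"]
    histology = ["stage1", "stage2", "stage2", "stage1", "stage1", "stage1"]
    print(round(cohens_kappa(micro_ct, histology), 3))   # 0.667 for this toy data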
269. Dictionary-based image reconstruction for superresolution in integrated circuit imaging

    PubMed

    Cilingiroglu, T. Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W. Clem; Konrad, Janusz; Goldberg, Bennett B.; Ünlü, M. Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher-order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
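A minimal sketch of sparse coding against an overcomplete dictionary via ISTA (iterative shrinkage-thresholding); the dictionary and signal are random stand-ins, not the paper's learned dictionary or its physics-based forward model:

    import numpy as np

    rng = np.random.default_rng(1)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    x_true = np.zeros(256)
    x_true[[10, 80, 200]] = [1.0, -0.5, 0.8]   # sparse ground truth
    y = D @ x_true + 0.01 * rng.normal(size=64)

    lam = 0.05
    L = np.linalg.norm(D, 2) ** 2              # step size from Lipschitz constant
    x = np.zeros(256)
    for _ in range(200):
        x = x - (D.T @ (D @ x - y)) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)    # soft threshold
    print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])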
270. Enhancing understanding and improving prediction of severe weather through spatiotemporal relational learning

    PubMed

    McGovern, Amy; Gagne, David J.; Williams, John K.; Brown, Rodger A.; Basara, Jeffrey B.

    Severe weather, including tornadoes, thunderstorms, wind, and hail, annually causes significant loss of life and property. We are developing spatiotemporal machine learning techniques that will enable meteorologists to improve the prediction of these events by improving their understanding of the fundamental causes of the phenomena and by building skillful empirical predictive models. In this paper, we present significant enhancements of our Spatiotemporal Relational Probability Trees that enable autonomous discovery of spatiotemporal relationships as well as learning with arbitrary shapes. We focus our evaluation on two real-world case studies using our technique: predicting tornadoes in Oklahoma and predicting aircraft turbulence in the United States. We also discuss how to evaluate success for a machine learning algorithm in the severe weather domain, which will enable new methods such as ours to transfer from research to operations, present a set of lessons learned for embedded machine learning applications, and discuss how to field our technique.

271. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, the quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that accounts for the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising by up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Moreover, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
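A sketch of the variance-stabilizing step in such a pipeline: the Anscombe transform maps approximately Poisson data to roughly unit-variance Gaussian, so an off-the-shelf Gaussian denoiser can be applied and then inverted. The "denoiser" here is a plain Gaussian blur standing in for a state-of-the-art method, and the inverse is the simple algebraic one rather than the exact unbiased inverse:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    clean = 50 + 30 * np.sin(np.linspace(0, 3 * np.pi, 256))[None, :] * np.ones((64, 1))
    noisy = rng.poisson(clean).astype(float)          # signal-dependent quantum noise

    stabilized = 2.0 * np.sqrt(noisy + 3.0 / 8.0)     # forward Anscombe transform
    denoised = gaussian_filter(stabilized, sigma=1.5) # any Gaussian-noise denoiser
    estimate = (denoised / 2.0) ** 2 - 3.0 / 8.0      # simple algebraic inverse

    rmse = lambda a: np.sqrt(np.mean((a - clean) ** 2))
    print("RMSE noisy:", round(rmse(noisy), 2), " RMSE denoised:", round(rmse(estimate), 2))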
272. Comparison of De Novo Network Reverse Engineering Methods with Applications to Ecotoxicology

    EPA Science Inventory

    The DREAM competitions for network modeling comparisons have made several points clear: 1) incorporating knowledge beyond gene expression data may improve modeling (e.g., data from knock-out organisms), 2) most techniques do not perform better than random, and 3) more complex met...

273. Detecting isotopic ratio outliers

    NASA Astrophysics Data System (ADS)

    Bayne, C. K.; Smith, D. H.

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iteratively reweighted Poisson regression to estimate model parameters and calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques for establishing error limits and identifying outliers.
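A sketch of iteratively reweighted least squares (IRLS) for a Poisson regression on pulse counts, the kind of model-based estimate described above; the log-linear model and data are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 100)
    X = np.column_stack([np.ones_like(t), t])    # design: intercept + drift term
    beta_true = np.array([4.0, -0.5])
    y = rng.poisson(np.exp(X @ beta_true))       # observed pulse counts

    beta = np.zeros(2)
    for _ in range(25):                          # IRLS / Fisher scoring
        mu = np.exp(X @ beta)                    # Poisson: mean equals variance
        W = mu                                   # working weights
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    print("estimated coefficients:", beta.round(3))

Because the variance equals the mean, the weights shift automatically toward high-count observations, which is what gives the model-based ratio estimates their improved error limits.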
274. Three-Dimensional Printing: An Aid to Epidural Access for Neuromodulation

    PubMed

    Taverner, Murray G.; Monagle, John P.

    2017-08-01

    This case report details the use of three-dimensional (3D) printing as an aid to neuromodulation. A patient is described in whom previous attempts at spinal neuromodulation had failed due to lack of epidural or intrathecal access; the use of a 3D-printed model allowed for improved planning and, ultimately, success. Successful spinal cord stimulation was achieved with a plan developed using a 3D model of the patient's spine. Neuromodulation techniques can provide optimal analgesia for individual patients, but at times they fail due to lack of access to the site of intervention, in this case epidural access. 3D printing may provide additional information to improve the likelihood of access when anatomy is distorted and standard approaches prove difficult.

275. Next generation initiation techniques

    NASA Technical Reports Server (NTRS)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those presently used routinely in operational forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. These techniques remain the subject of continued research, and their improvement will parallel the development of the next-generation techniques described by the other speakers. Next-generation assimilation techniques are those under development but not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another next-generation category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' technique. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Among the next-generation techniques, variational approaches are being actively developed; they seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. A third kind of next-generation technique involves strategies to initialize convective-scale (non-hydrostatic) models.
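The variational cost function mentioned above is, in its simplest (3D-Var) form, J(x) = (x - xb)^T B^-1 (x - xb) + (Hx - y)^T R^-1 (Hx - y), balancing a background state against observations. Because J is quadratic, the minimizer solves a linear system; a toy sketch with invented two-component state and one observation:

    import numpy as np

    xb = np.array([1.0, 2.0])                 # background (prior model state)
    B = np.diag([0.5, 0.5])                   # background-error covariance
    H = np.array([[1.0, 0.0]])                # observe only the first component
    y = np.array([1.8])
    R = np.array([[0.1]])                     # observation-error covariance

    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    # Setting the gradient of J to zero gives the normal equations below
    A = Binv + H.T @ Rinv @ H
    b = Binv @ xb + H.T @ Rinv @ y
    xa = np.linalg.solve(A, b)                # analysis state
    print("analysis:", xa.round(3))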
276. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the electrical conductivity structure of the Earth's interior. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, the conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite-difference inversion package, we have encountered problems inverting sea-floor stations in particular, owing to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals; this is partly due to the ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries: electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric-field estimates are each given weights determined by the technique, a kind of weighted average, and we are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which are used to generate a new forward model during each iteration of the inversion.
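A sketch of the tri-linear weighting described above: a field sampled at the eight corners of a grid cell is interpolated with distance-based weights. The conductivity-ratio rescaling of the weights mentioned in the abstract is omitted here:

    import numpy as np

    def trilinear(c, u, v, w):
        """c[i, j, k] holds corner values; (u, v, w) in [0, 1]^3 within the cell."""
        return (c[0,0,0]*(1-u)*(1-v)*(1-w) + c[1,0,0]*u*(1-v)*(1-w)
              + c[0,1,0]*(1-u)*v*(1-w)     + c[0,0,1]*(1-u)*(1-v)*w
              + c[1,1,0]*u*v*(1-w)         + c[1,0,1]*u*(1-v)*w
              + c[0,1,1]*(1-u)*v*w         + c[1,1,1]*u*v*w)

    E = np.arange(8, dtype=float).reshape(2, 2, 2)   # toy electric-field samples
    print(trilinear(E, 0.5, 0.5, 0.5))               # cell center -> 3.5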
277. Active Learning to Understand Infectious Disease Models and Improve Policy Making

    PubMed Central

    Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-01-01

    Modeling plays a major role in policy making, especially for infectious disease interventions, but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach, based on machine learning techniques, that combines iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response-surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate, reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs, and to identify the most influential variables. The insight provided is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulators to improve rapid policy making in various settings.
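A hedged sketch of the iterative surrogate-modeling loop: fit a cheap emulator to expensive model runs, then spend the next run where the emulator is least certain. The "simulator" is a stand-in function, and ensemble spread from a random forest replaces symbolic regression as the uncertainty signal:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def simulator(x):                      # expensive model run (stand-in)
        return np.sin(6 * x) + 0.1 * x

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, 5)               # small initial design
    y = simulator(X)
    grid = np.linspace(0, 1, 201)

    for _ in range(10):                    # active-learning iterations
        rf = RandomForestRegressor(n_estimators=100, random_state=0)
        rf.fit(X[:, None], y)
        preds = np.stack([t.predict(grid[:, None]) for t in rf.estimators_])
        x_next = grid[np.argmax(preds.std(axis=0))]   # point of most disagreement
        X = np.append(X, x_next)
        y = np.append(y, simulator(x_next))
    print("sampled points:", np.sort(X).round(2))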
278. Modelling the effect of structural QSAR parameters on skin penetration using genetic programming

    NASA Astrophysics Data System (ADS)

    Chung, K. K.; Do, D. Q.

    2010-09-01

    In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared with traditional statistical methods. GP, whose primary advantage is the generation of explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the data. The models produced by GP agreed with the statistical results, and the most predictive GP models were significantly improved compared with the statistical models using ANOVA. Artificial intelligence techniques have recently been applied widely to analyse QSAR data; with its capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
279. Analysis of High Spatial, Temporal, and Directional Resolution Recordings of Biological Sounds in the Southern California Bight

    DTIC Science & Technology

    2013-09-30

    ... transiting whales in the Southern California Bight, b) the use of passive underwater acoustic techniques for improved habitat assessment in biologically sensitive areas and improved ecosystem modeling, and c) the application of the physics of excitable media to numerical modeling of biological choruses. ... was on the potential impact of man-made sounds on the calling behavior of transiting humpback whales in the Southern California Bight. The main ...

280. Innovative application of virtual display technique in virtual museum

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankang

    2017-09-01

    A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3D virtual reality, using interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum that interacts effectively with the offline museum lies in making full use of 3D panorama, virtual reality and augmented reality techniques, and in innovatively exploiting dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques in the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography, builds on static images of reality. Virtual reality is a computer-simulation technique for creating and experiencing an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience directly in reality. Together these technologies make the virtual museum possible. It will not only bring better experiences and convenience to the public, but also help improve the influence and cultural functions of the real museum.

281. Using Evidence-Based Decision Trees Instead of Formulas to Identify At-Risk Readers. REL 2014-036

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov; Foorman, Barbara R.

    2014-01-01

    This study examines whether the classification and regression tree (CART) model improves the early identification of students at risk for reading comprehension difficulties compared with the more difficult to interpret logistic regression model. CART is a type of predictive modeling that relies on nonparametric techniques. It presents results in ...
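Part of CART's appeal for screening is that the fitted tree reads as plain decision rules. A minimal sketch with synthetic screening scores and risk labels, not the study's reading data:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(5)
    fluency = rng.normal(100, 15, 400)     # invented screening scores
    vocab = rng.normal(50, 10, 400)
    at_risk = ((fluency < 90) & (vocab < 48)).astype(int)

    cart = DecisionTreeClassifier(max_depth=2, random_state=0)
    cart.fit(np.column_stack([fluency, vocab]), at_risk)
    print(export_text(cart, feature_names=["fluency", "vocab"]))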
282. A systematic and critical review of model-based economic evaluations of pharmacotherapeutics in patients with bipolar disorder

    PubMed

    Mohiuddin, Syed

    2014-08-01

    Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen model-based economic evaluations of pharmacotherapeutics in BD patients were identified. Of these 14 studies, nine used Markov models, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not give a rationale for the choice of modelling technique. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of modelling techniques in future studies are essential. DES modelling appears to be a flexible and comprehensive technique for evaluating the comparability of BD treatment options because of its greater flexibility in depicting disease progression over time. However, depending on the research question, modelling techniques other than DES might also be appropriate in some cases.
283. Comparing model-based predictions of a wind turbine wake to LiDAR measurements in complex terrain

    NASA Astrophysics Data System (ADS)

    Kay, Andrew; Jones, Paddy; Boyce, Dean; Bowman, Neil

    2013-04-01

    The application of remote sensing techniques to the measurement of wind characteristics offers great potential for accurately predicting atmospheric boundary layer (ABL) flow and its interactions with wind turbines. An understanding of these interactions is important for optimizing turbine siting in wind farms and improving the power performance and lifetime of individual machines. In particular, Doppler wind Light Detection and Ranging (LiDAR) can be used to remotely measure the wind characteristics (speed, direction and turbulence intensity) approaching a rotor. This information can be used to improve turbine lifetime (advanced detection of incoming wind shear, wind veer and extreme wind conditions, such as gusts) and to optimise power production (improved yaw, pitch and speed control). LiDAR can also make detailed measurements of the disturbed wind profile in the wake, which can damage surrounding turbines and reduce efficiency. These observational techniques can help engineers better understand and model wakes to optimize turbine spacing in large wind farms, improving efficiency and reducing the cost of energy. NEL is currently undertaking research to measure the disturbed wind profile in the wake of a 950 kW wind turbine using a ZephIR Dual Mode LiDAR at its Myres Hill wind turbine test site located near Glasgow, Scotland. Myres Hill is moderately complex terrain comprising deep peat, low-lying grass and heathers, localised slopes and nearby forest, approximately 2 km away. Measurements have been obtained by vertically scanning at 10 recorded heights across and above the rotor plane to determine the wind speed, wind direction and turbulence intensity profiles. Measurement stations located at various rotor diameters downstream of the turbine were selected in an attempt to capture the development of the wake and its recovery towards free-stream conditions. Results of the measurement campaign will also highlight how the wake behaves as a result of sudden gusts or rapid changes in wind direction. NEL has carried out simulations to model the wake of the turbine using Computational Fluid Dynamics (CFD) software provided by ANSYS Inc. The model incorporates a simple actuator-disk concept to represent the turbine and its wake, typical of that used in many commercial wind farm optimization tools. The surrounding terrain, including the forestry, is modelled, allowing an investigation of the wake-terrain interactions occurring across the site. The overall aim is to compare the LiDAR measurements with simulated data to assess the quality of the model and its sensitivity to variables such as mesh size and turbulence/forestry modelling techniques. Knowledge acquired from the study will help to define techniques for combining LiDAR measurements with CFD modelling to improve predictions of wake losses in large wind farms and hence energy production. In addition, the impact of transient wind conditions on the results of predictions based on idealised, steady-state models has been examined.
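For comparison with such CFD and LiDAR studies, the simplest engineering wake description is a top-hat model of the Jensen/Park type, in which the wake expands linearly downstream. A sketch with typical textbook parameters, not the Myres Hill turbine's actual values:

    import numpy as np

    def jensen_deficit(x_downstream, rotor_diameter=52.0, ct=0.8, k=0.075):
        """Fractional velocity deficit on the wake centreline at distance x (m)."""
        r0 = rotor_diameter / 2.0
        rw = r0 + k * x_downstream             # linearly expanding wake radius
        return (1.0 - np.sqrt(1.0 - ct)) * (r0 / rw) ** 2

    for d in (2, 4, 6, 8):                     # rotor diameters downstream
        x = d * 52.0
        print(f"{d}D downstream: {jensen_deficit(x):.2%} deficit")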
284. Model-Based GN and C Simulation and Flight Software Development for Orion Missions beyond LEO

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan; Milenkovic, Zoran; Henry, Joel; Buttacoli, Michael

    2014-01-01

    For Orion missions beyond low Earth orbit (LEO), the Guidance, Navigation, and Control (GN&C) system is being developed using a model-based approach for simulation and flight software. Lessons learned from the development of GN&C algorithms and flight software for the Orion Exploration Flight Test One (EFT-1) vehicle have been applied to the development of further capabilities for Orion GN&C beyond EFT-1. Continuing the use of a Model-Based Development (MBD) approach with the Matlab®/Simulink® tool suite, the process for GN&C development and analysis has been substantially improved. Furthermore, a model-based simulation environment in Simulink, rather than an external C-based simulation, greatly eases the development of flight algorithms. The benefits of employing lessons learned from EFT-1 are described, as is the approach for implementing additional MBD techniques. Also detailed are the key enablers for improvements to the MBD process, including enhanced configuration management techniques for model-based software systems, automated code and artifact generation, and automated testing and integration.

285. Mandibular reconstruction using plates prebent to fit rapid prototyping 3-dimensional printing models ameliorates contour deformity

    PubMed

    Azuma, Masaki; Yanagawa, Toru; Ishibashi-Kanno, Naomi; Uchida, Fumihiko; Ito, Takaaki; Yamagata, Kenji; Hasegawa, Shogo; Sasaki, Kaoru; Adachi, Koji; Tabuchi, Katsuhiko; Sekido, Mitsuru; Bukawa, Hiroki

    2014-10-23

    Recently, medical rapid prototyping (MRP) models, fabricated with computer-aided design and computer-aided manufacturing (CAD/CAM) techniques, have been applied to reconstructive surgery in the treatment of head and neck cancers. Here, we tested the use of preoperatively manufactured reconstruction plates, produced using MRP models, and evaluated the clinical efficacy and esthetic outcome of these products in mandibular reconstruction. A series of 28 patients with malignant oral tumors underwent unilateral segmental resection of the mandible and simultaneous mandibular reconstruction. Twelve patients were treated with prebent reconstruction plates that were molded to MRP mandibular models designed with CAD/CAM techniques and fabricated on a combined powder-bed and inkjet-head three-dimensional printer. The remaining 16 patients were treated using conventional reconstruction methods. The surgical and esthetic outcomes of the two groups were compared by imaging analysis using post-operative panoramic tomography. Mandibular symmetry in patients receiving the MRP-model-based prebent plates was significantly better than in patients receiving conventional reconstructive surgery. Thus, patients with head and neck cancer undergoing reconstructive surgery with a prebent reconstruction plate fabricated according to an MRP mandibular model showed improved mandibular contour, with the potential for improved quality of life.
    In a pilot set-up on a dataset of 496 (283 M; 213 F) cephalometric radiographs, the techniques of Baccetti et al. (2005) (BA), Seedat et al. (2005) (SE), Caldas et al. (2007) and Rai et al. (2008) (RA) were verified. In the main study, data from 460 (208 F, 224 M) individuals in an age range between 3 and 26 years, for whom an orthopantomogram and a cephalogram were taken on the same day, were collected. On the orthopantomograms, the left third molar development was registered using the scoring system described by Gleiser and Hunt (1955) and modified by Köhler (1994) (GH). On the cephalograms, cervical vertebrae development was registered according to the BA and SE techniques. A regression model, with age as response and the GH scores as explanatory variable, was fitted to the data. Next, information from BA, SE and BA + SE was, respectively, added to this model. From all obtained models, the determination coefficients and the root mean squared errors were calculated. Inclusion of information from cephalograms based on the BA, as well as the SE, technique improved the amount of explained variance in age acquired from panoramic radiographs using the GH technique by 48%. Inclusion of cephalometric BA + SE information marginally improved the previous result (+1%). The RMSE decreased by 1.93, 1.85 and 2.03 years on adding, respectively, BA, SE and BA + SE information to the GH model. The SE technique allows clinically the fastest and easiest registration of the degree of development of the cervical vertebrae. Therefore, the technique of choice for classifying cervical vertebrae development in addition to third molar development is preferably the SE technique.
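    The modeling step described here is ordinary multiple regression with age as the response, comparing explained variance and RMSE with and without the skeletal stage. The sketch below reproduces that comparison on synthetic stand-in data; the stage scores, coefficients and noise level are invented, not the study's.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 460
        gh = rng.integers(1, 11, n)    # third molar stage (GH-style score), synthetic
        cvm = rng.integers(1, 7, n)    # cervical vertebrae stage (SE-style), synthetic
        age = 3 + 1.5 * gh + 1.2 * cvm + rng.normal(0, 2, n)  # invented relation

        def fit(cols, y):
            """Least-squares fit with intercept; returns R^2 and RMSE."""
            X = np.column_stack([np.ones(len(y))] + cols)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid.var() / y.var(), np.sqrt(np.mean(resid ** 2))

        print("GH alone:       R2=%.2f RMSE=%.2f" % fit([gh], age))
        print("GH + vertebrae: R2=%.2f RMSE=%.2f" % fit([gh, cvm], age))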
  289. Bayesian component separation: The Planck experience

    NASA Astrophysics Data System (ADS)

    Wehus, Ingunn Kathrine; Eriksen, Hans Kristian

    2018-05-01

    Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature and their overall sensitivity improves.

  290. A new data assimilation engine for physics-based thermospheric density models

    NASA Astrophysics Data System (ADS)

    Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.

    2017-12-01

    The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.

  291. On the assimilation of absolute geodetic dynamic topography in a global ocean model: impact on the deep ocean state

    NASA Astrophysics Data System (ADS)

    Androsov, Alexey; Nerger, Lars; Schnur, Reiner; Schröter, Jens; Albertella, Alberta; Rummel, Reiner; Savcenko, Roman; Bosch, Wolfgang; Skachko, Sergey; Danilov, Sergey

    2018-05-01

    General ocean circulation models are not perfect. Forced with observed atmospheric fluxes, they gradually drift away from measured distributions of temperature and salinity. We suggest data assimilation of absolute dynamical ocean topography (DOT) observed from space geodetic missions as an option to reduce these differences. Sea surface information of DOT is transferred into the deep ocean by defining the analysed ocean state as a weighted average of an ensemble of fully consistent model solutions using an error-subspace ensemble Kalman filter technique. Success of the technique is demonstrated by assimilation into a global configuration of the ocean circulation model FESOM over 1 year. The dynamic ocean topography data are obtained from a combination of multi-satellite altimetry and geoid measurements.
    The assimilation result is assessed using independent temperature and salinity analyses derived from profiling buoys of the Argo float data set. The largest impact of the assimilation occurs at the first few analysis steps, where both the model ocean topography and the steric height (i.e. temperature and salinity) are improved. The continued data assimilation over 1 year further improves the model state gradually. Deep ocean fields quickly adjust in a sustained manner: a model forecast initialized from the model state estimated by the data assimilation after only 1 month shows that improvements induced by the data assimilation remain in the model state for a long time. Even after 11 months, the modelled ocean topography and temperature fields show smaller errors than the model forecast without any data assimilation.

  292. Fuzzy neural network technique for system state forecasting

    PubMed

    Li, Dezhi; Wang, Wilson; Ismail, Fathy

    2013-10-01

    In many system state forecasting applications, the prediction is performed based on multiple datasets, each corresponding to a distinct system condition. The traditional methods dealing with multiple datasets (e.g., vector autoregressive moving average models and neural networks) have some shortcomings, such as limited modeling capability and opaque reasoning operations. To tackle these problems, a novel fuzzy neural network (FNN) is proposed in this paper to effectively extract information from multiple datasets, so as to improve forecasting accuracy. The proposed predictor consists of both autoregressive (AR) nodes modeling and nonlinear nodes modeling; AR models/nodes are used to capture the linear correlation of the datasets, and the nonlinear correlation of the datasets is modeled with nonlinear neuron nodes. A novel particle swarm technique [i.e., Laplace particle swarm (LPS) method] is proposed to facilitate parameter estimation of the predictor and improve modeling accuracy. The effectiveness of the developed FNN predictor and the associated LPS method is verified by a series of tests related to Mackey-Glass data forecast, exchange rate data prediction, and gear system prognosis. Test results show that the developed FNN predictor and the LPS method can capture the dynamics of multiple datasets effectively and track system characteristics accurately.
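    The predictor's hybrid structure (linear AR nodes plus nonlinear neuron nodes) can be sketched as follows. This is a simplified stand-in: the hidden layer is random and fitted to the AR residual by least squares, whereas the paper estimates parameters with its Laplace particle swarm method, and the toy series below is invented.

        import numpy as np

        def fit_hybrid(series, p=3, width=16, seed=0):
            """One-step forecaster combining a linear AR(p) node with a
            random tanh hidden layer fitted to the AR residual (an
            extreme-learning stand-in for the paper's neuron nodes)."""
            X = np.array([series[i:i + p] for i in range(len(series) - p)])
            y = np.asarray(series[p:], dtype=float)
            A = np.c_[np.ones(len(y)), X]
            ar, *_ = np.linalg.lstsq(A, y, rcond=None)          # linear AR node
            W = np.random.default_rng(seed).normal(0.0, 1.0, (p, width))
            H = np.tanh(X @ W)                                  # random nonlinear nodes
            v, *_ = np.linalg.lstsq(H, y - A @ ar, rcond=None)  # output weights
            return lambda lags: float(np.r_[1, lags] @ ar
                                      + np.tanh(np.asarray(lags) @ W) @ v)

        # Toy quasi-periodic signal standing in for Mackey-Glass data
        t = np.arange(500)
        s = np.sin(0.07 * t) + 0.3 * np.sin(0.23 * t)
        forecast = fit_hybrid(s)
        print(forecast(s[-3:]))  # prediction of the next sample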
  293. Quality improvement prototype: Johnson Space Center, National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Johnson Space Center was recognized by the Office of Management and Budget as a model for its high standards of quality. Included are an executive summary of the center's activities, an organizational overview, techniques for improving quality, the status of the quality effort, and a listing of key personnel.

  294. A retrospective evaluation of traffic forecasting techniques

    DOT National Transportation Integrated Search

    2016-08-01

    Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...

  295. Module encapsulation technology

    NASA Technical Reports Server (NTRS)

    Willis, P.

    1986-01-01

    The identification and development techniques for low-cost module encapsulation materials were reviewed. Test results were displayed for a variety of materials. The improved prospects for modeling encapsulation systems for life prediction were reported.

  296. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which low normalized error (0.16) and high correlation (84%) values were achieved.
    In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  297. Machine Reading for Extraction of Bacteria and Habitat Taxonomies

    PubMed Central

    Kordjamshidi, Parisa; Massa, Wouter; Provoost, Thomas; Moens, Marie-Francine

    2015-01-01

    There is a vast amount of scientific literature available from various resources such as the internet. Automating the extraction of knowledge from these resources is very helpful for biologists to easily access this information. This paper presents a system to extract the bacteria and their habitats, as well as the relations between them. We investigate to what extent current techniques are suited for this task and test a variety of models in this regard. We detect entities in a biological text and map the habitats into a given taxonomy. Our model uses a linear chain Conditional Random Field (CRF). For the prediction of relations between the entities, a model based on logistic regression is built. Designing a system upon these techniques, we explore several improvements for both the generation and selection of good candidates. One contribution to this lies in the extended flexibility of our ontology mapper, which uses advanced boundary detection and assigns the taxonomy elements to the detected habitats. Furthermore, we discover value in the combination of several distinct candidate generation rules. Using these techniques, we show results that significantly improve upon the state of the art for the BioNLP Bacteria Biotopes task. PMID:27077141
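    The relation-prediction stage described above, a logistic regression over candidate entity pairs, can be sketched in a few lines. The context strings and labels below are invented toy data, and the CRF entity tagger that precedes this step in the paper is not reproduced here.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        # Toy candidate pairs: text between a Bacteria and a Habitat mention,
        # labelled 1 if a lives-in relation holds. Purely illustrative data.
        contexts = ["was isolated from", "is unrelated to", "colonises the",
                    "was mentioned alongside", "grows in", "does not occur in"]
        labels = [1, 0, 1, 0, 1, 0]

        vec = CountVectorizer(ngram_range=(1, 2)).fit(contexts)
        clf = LogisticRegression(max_iter=1000).fit(vec.transform(contexts), labels)
        print(clf.predict(vec.transform(["was isolated from"])))  # -> [1]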
    This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.

  299. Development of intelligent model for personalized guidance on wheelchair tilt and recline usage for people with spinal cord injury: methodology and preliminary report

    PubMed

    Fu, Jicheng; Jones, Maria; Jan, Yih-Kuen

    2014-01-01

    Wheelchair tilt and recline functions are two of the most desirable features for relieving seating pressure to decrease the risk of pressure ulcers. Effective guidance on wheelchair tilt and recline usage is therefore critical to pressure ulcer prevention. The aim of this study was to demonstrate the feasibility of using machine learning techniques to construct an intelligent model to provide personalized guidance to individuals with spinal cord injury (SCI). The motivation stems from the clinical evidence that the requirements of individuals vary greatly and that no universal guidance on tilt and recline usage could possibly satisfy all individuals with SCI. We explored all aspects involved in constructing the intelligent model and proposed approaches tailored to suit the characteristics of this preliminary study, such as the way of modeling research participants, using machine learning techniques to construct the intelligent model, and evaluating the performance of the intelligent model. We further improved the intelligent model's prediction accuracy by developing a two-phase feature selection algorithm to identify important attributes. Experimental results demonstrated that our approaches held promise: they could effectively construct the intelligent model, evaluate its performance, and refine the participant model so that the intelligent model's prediction accuracy was significantly improved.

  300. Reduced-order modeling of piezoelectric energy harvesters with nonlinear circuits under complex conditions

    NASA Astrophysics Data System (ADS)

    Xiang, Hong-Jun; Zhang, Zhi-Wei; Shi, Zhi-Fei; Li, Hong

    2018-04-01

    A fully coupled modeling approach is developed for piezoelectric energy harvesters in this work, based on the use of available robust finite element packages and efficient reduced-order modeling techniques. First, the harvester is modeled using finite element packages. The dynamic equilibrium equations of harvesters are rebuilt by extracting system matrices from the finite element model using built-in commands, without any additional tools. A Krylov subspace-based scheme is then applied to obtain a reduced-order model that improves simulation efficiency but preserves the key features of the harvester. Co-simulation of the reduced-order model with nonlinear energy harvesting circuits is achieved at the system level.
    Several examples, covering both harmonic response and transient response analysis, are conducted to validate the present approach. The proposed approach improves the simulation efficiency by several orders of magnitude. Moreover, the parameters used in the equivalent circuit model can be conveniently obtained by the proposed eigenvector-based model order reduction technique. More importantly, this work establishes a methodology for modeling piezoelectric energy harvesters with complicated mechanical geometries and nonlinear circuits; the input load may also be complex. The method can be employed by harvester designers to optimize mechanical structures or by circuit designers to develop novel energy harvesting circuits.
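    A minimal version of the Krylov subspace reduction step might look like the following, with a random matrix standing in for the system matrix that the paper extracts from a finite element package; the dimensions are illustrative.

        import numpy as np

        def arnoldi_reduce(A, B, r):
            """Project (A, B) onto the r-dimensional Krylov subspace
            span{B, AB, ..., A^(r-1)B} via Arnoldi; the reduced system
            matches leading moments of the full transfer function."""
            n = len(B)
            V = np.zeros((n, r))
            V[:, 0] = B / np.linalg.norm(B)
            for j in range(1, r):
                w = A @ V[:, j - 1]
                w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt step
                V[:, j] = w / np.linalg.norm(w)
            return V.T @ A @ V, V.T @ B, V         # reduced A, B and basis

        n, r = 200, 10
        rng = np.random.default_rng(1)
        A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))  # stand-in FE matrix
        B = rng.normal(size=n)
        Ar, Br, V = arnoldi_reduce(A, B, r)
        print(Ar.shape, Br.shape)   # (10, 10) (10,)

    The reduced matrices can then be co-simulated with a circuit model at a fraction of the full system's cost, which is the efficiency gain the abstract refers to.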
  301. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    An objective approach to determine the concurrent validity of computer-graphic models is real-time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer-assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.

  302. Additive Manufacturing Techniques for the Reconstruction of 3D Fetal Faces

    PubMed

    Speranza, Domenico; Citro, Daniela; Padula, Francesco; Motyl, Barbara; Marcolin, Federica; Calì, Michele; Martorelli, Massimo

    2017-01-01

    This paper deals with additive manufacturing techniques for the creation of 3D fetal face models starting from routine 3D ultrasound data. In particular, two distinct themes are addressed. First, a method for processing and building 3D models based on the use of medical image processing techniques is proposed. Second, the preliminary results of a questionnaire distributed to future parents consider the use of these reconstructions both from an emotional and an affective point of view. In particular, the study focuses on the enhancement of the perception of maternity or paternity and the improvement in the relationship between parents and physicians in case of fetal malformations, in particular facial or cleft lip diseases.

  303. Testing a path-analytic mediation model of how motivational enhancement physiotherapy improves physical functioning in pain patients

    PubMed

    Cheing, Gladys; Vong, Sinfia; Chan, Fong; Ditchman, Nicole; Brooks, Jessica; Chan, Chetwyn

    2014-12-01

    Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics and is an oft-cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model to examine how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well, χ²(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy, and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance.
    The use of motivational interviewing techniques to increase outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on rehabilitation outcomes of patients with chronic LBP.
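    The product-of-coefficients logic behind such a mediation path can be sketched with two simple regressions on synthetic data. The effect sizes below are invented, and for brevity the b-path omits covariate adjustment; the study itself fitted a full path-analytic model with the fit indices reported above.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 76
        mi = rng.normal(size=n)                          # treatment exposure
        alliance = 0.6 * mi + rng.normal(size=n)         # mediator: working alliance
        function = 0.5 * alliance + rng.normal(size=n)   # outcome: functioning

        def slope(x, y):
            return np.polyfit(x, y, 1)[0]

        a = slope(mi, alliance)         # path a: treatment -> mediator
        b = slope(alliance, function)   # path b: mediator -> outcome
        print("indirect effect a*b =", a * b)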
  304. Development of an Intelligent Videogrammetric Wind Tunnel Measurement System

    NASA Technical Reports Server (NTRS)

    Graves, Sharon S.; Burner, Alpheus W.

    2004-01-01

    A videogrammetric technique developed at NASA Langley Research Center has been used at five NASA facilities at the Langley and Ames Research Centers for deformation measurements on a number of sting-mounted and semispan models. These include high-speed research and transport models tested over a wide range of aerodynamic conditions including subsonic, transonic, and supersonic regimes. The technique, based on digital photogrammetry, has been used to measure model attitude, deformation, and sting bending. In addition, the technique has been used to study model injection rate effects and to calibrate and validate methods for predicting static aeroelastic deformations of wind tunnel models. An effort is currently underway to develop an intelligent videogrammetric measurement system that will be both useful and usable in large production wind tunnels while providing accurate data in a robust and timely manner. Designed to encode a higher degree of knowledge through computer vision, the system features advanced pattern recognition techniques to improve automated location and identification of targets placed on the wind tunnel model to be used for aerodynamic measurements such as attitude and deformation. This paper will describe the development and strategy of the new intelligent system that was used in a recent test at a large transonic wind tunnel.

  305. Numerical modeling techniques for flood analysis

    NASA Astrophysics Data System (ADS)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas. There is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies due to their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and the possible improvements over these models through 3D modeling, are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grid cells, were found; these can be improved through a 3D model. A 3D model was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have already been developed, but not for floodplains. Hence, it is suggested that a 3D model for floodplains be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  306. Adding Weather to Wargames

    DTIC Science & Technology

    2007-01-01

    ...Aid (IWEDA) we developed techniques that allowed significant improvement in weather effects and impacts for wargames. TAWS was run for numerous and...found that the wargame realism was increased without impacting the run time. While these techniques are applicable to wargames in general, we tested...them by incorporation into the Advanced Warfighting Simulation (AWARS) model. AWARS was modified to incorporate weather impacts upon sensor...

  307. Multi-dimensional tunnelling and complex momentum

    NASA Technical Reports Server (NTRS)

    Bowcock, Peter; Gregory, Ruth

    1991-01-01

    The problem of modeling tunneling phenomena in more than one dimension is examined. It is found that existing techniques are inadequate in a wide class of situations, due to their inability to deal with concurrent classical motion. The generalization of these methods to allow for complex momenta is shown, and improved techniques are demonstrated with a selection of illustrative examples. Possible applications are presented.

  308. Development of innovative techniques and principles that may be used as models to improve plant performance. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanna, Wayne W.; Burton, Glenn W.

    2000-06-25

    We developed fundamental methods and techniques for transferring germplasm from wild to cultivated species. Germplasm transferred included diverse cytoplasms, new genes for pest resistance, genes controlling dry matter yield and apomixis.
    Some of the germplasm has been shown to be valuable in plant breeding and has been incorporated into commercial cultivars.

  309. Enhancing Teaching Effectiveness Using Experiential Techniques: Model Development and Empirical Evaluation

    ERIC Educational Resources Information Center

    Wagner, Richard J.; And Others

    In U.S. colleges and universities, much attention has been focused on the need to improve teaching quality and to involve students in the learning process. At the same time, many faculty members are faced with growing class sizes and with time pressures due to research demands. One useful technique is to divide the class into small groups and...

  310. Improving the representation of clouds, radiation, and precipitation using spectral nudging in the Weather Research and Forecasting model

    EPA Science Inventory

    Spectral nudging – a scale-selective interior constraint technique – is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonst...
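    The scale-selective constraint itself is easy to illustrate in one dimension: filter the model-minus-driving difference in spectral space, retain only the large-scale wavenumbers, and relax the model toward them. The cutoff wavenumber and relaxation coefficient below are illustrative choices, not WRF's; this is a toy analogue, not the model's implementation.

        import numpy as np

        def spectral_nudge(field, driving, k_max=3, alpha=0.1):
            """Nudge only wavenumbers <= k_max of `field` toward `driving`,
            leaving mesoscale detail untouched (1-D analogue of spectral
            nudging in a regional climate model)."""
            diff = np.fft.rfft(driving - field)
            diff[k_max + 1:] = 0.0                    # keep large scales only
            return field + alpha * np.fft.irfft(diff, n=len(field))

        x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        model = np.sin(x) + 0.5 * np.sin(9 * x)       # drifted large scale + mesoscale
        forcing = 1.2 * np.sin(x)                     # driving (analysis) field
        print(spectral_nudge(model, forcing)[:4])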
  311. Corpus-Based Optimization of Language Models Derived from Unification Grammars

    NASA Technical Reports Server (NTRS)

    Rayner, Manny; Hockey, Beth Ann; James, Frankie; Bratt, Harry; Bratt, Elizabeth O.; Gawron, Mark; Goldwater, Sharon; Dowding, John; Bhagat, Amrita

    2000-01-01

    We describe a technique which makes it feasible to improve the performance of a language model derived from a manually constructed unification grammar, using low-quality untranscribed speech data and a minimum of human annotation. The method is evaluated on a medium-vocabulary spoken language command and control task.

  312. Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yiguang Ju; Frederick Dryer

    2009-02-07

    Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out, including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time-scale algorithm to improve computational efficiency in reacting flow simulations.

  313. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GA) are optimization techniques based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena.
    It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves representing the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
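    A minimal GA of this kind, fitting the coefficients of a stand-in surface model to noisy observations, might look like the following. It uses selection plus mutation only (no crossover, for brevity), and the model, population size and rates are all illustrative rather than taken from the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        true = np.array([1.5, -0.7, 0.3])             # stand-in surface coefficients
        x = np.linspace(-1, 1, 200)
        obs = np.polyval(true, x) + rng.normal(0, 0.1, 200)  # noisy "altimeter" data

        def fitness(pop):
            # Higher is better: negative mean squared misfit to the observations
            return -np.array([np.mean((np.polyval(p, x) - obs) ** 2) for p in pop])

        pop = rng.normal(0, 1, (50, 3))               # population of organisms
        for gen in range(200):
            f = fitness(pop)
            parents = pop[np.argsort(f)[-25:]]        # survival of the fittest
            children = parents[rng.integers(0, 25, 25)] \
                + rng.normal(0, 0.05, (25, 3))        # mutated offspring
            pop = np.vstack([parents, children])
        print(pop[np.argmax(fitness(pop))])           # best-fit coefficients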
  314. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
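    The core separable-parameter-space idea, that for fixed nonlinear rate constants the model is linear in its coefficients, can be sketched with a toy two-exponential time-activity curve. The grid search mirrors the exhaustive search described above, though the model, rates and noise level below are invented stand-ins, not the paper's compartment models.

        import numpy as np

        t = np.linspace(0.1, 60.0, 120)               # minutes, illustrative
        rng = np.random.default_rng(3)
        tac = 2.0 * np.exp(-0.05 * t) + 0.8 * np.exp(-0.4 * t) \
            + rng.normal(0, 0.02, t.size)             # synthetic time-activity curve

        def profile_rss(rates, tac):
            """For fixed nonlinear rates, the coefficients enter linearly,
            so recover them by linear least squares (the separable step)."""
            basis = np.exp(-np.outer(rates, t)).T     # one column per rate
            coef, *_ = np.linalg.lstsq(basis, tac, rcond=None)
            return np.sum((basis @ coef - tac) ** 2), coef

        # Exhaustive search over the now low-dimensional nonlinear space
        grid = np.linspace(0.01, 0.6, 60)
        best = min(((profile_rss((a, b), tac), (a, b))
                    for a in grid for b in grid if a < b),
                   key=lambda z: z[0][0])
        print("rates:", best[1], "coefficients:", best[0][1])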
  315. Assimilating satellite soil moisture into rainfall-runoff modelling: towards a systematic study

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Tarpanelli, Angelica; Brocca, Luca; Moramarco, Tommaso

    2015-04-01

    Soil moisture is the main factor in the repartition of the mass and energy fluxes between the land surface and the atmosphere, thus playing a fundamental role in the hydrological cycle. Indeed, soil moisture represents the initial condition of rainfall-runoff modelling that determines the flood response of a catchment. Different initial soil moisture conditions can discriminate between catastrophic and minor effects of a given rainfall event. Therefore, improving the estimation of initial soil moisture conditions will reduce uncertainties in early warning flood forecasting models, addressing the mitigation of flood hazard. In recent years, satellite soil moisture products have become available with fine spatial-temporal resolution and good accuracy. Therefore, a number of studies have been published in which the impact of the assimilation of satellite soil moisture data into rainfall-runoff modelling is investigated. Unfortunately, data assimilation involves a series of assumptions and choices that significantly affect the final result. Given a satellite soil moisture observation, a rainfall-runoff model and a data assimilation technique, an improvement or a deterioration of discharge predictions can be obtained depending on the choices made in the data assimilation procedure. Consequently, large discrepancies have been obtained in the studies published so far, likely due to differences in the implementation of the data assimilation technique. On this basis, a comprehensive and robust procedure for the assimilation of satellite soil moisture data into rainfall-runoff modelling is developed here and applied to six subcatchments of the Upper Tiber River Basin, for which high-quality hydrometeorological hourly observations are available for the period 1989-2013. The satellite soil moisture product used in this study is obtained from the Advanced SCATterometer (ASCAT) onboard the Metop-A satellite and has been available since 2007. The MISDc ("Modello Idrologico SemiDistribuito in continuo") continuous hydrological model is used for flood simulation. The Ensemble Kalman Filter (EnKF) is employed as the data assimilation technique for its flexibility and good performance in a number of previous applications. Different components are involved in the developed data assimilation procedure. For the correction of the bias between satellite and modelled soil moisture data, three different techniques are considered: mean-variance matching, Cumulative Density Function (CDF) matching and least-squares linear regression. For properly generating the ensembles of model states required in the application of the EnKF technique, an exhaustive search of the model error parameterization and structure is carried out, differentiated for each study catchment. A number of scores and statistics are employed for evaluating the reliability of the ensemble. Similarly, different configurations of the observation error are investigated. Results show that for four out of six catchments the assimilation of the ASCAT soil moisture product improves discharge simulation in the validation period 2010-2013, mainly during flood events. The two catchments in which the assimilation does not improve the results are located in the mountainous part of the region, where both MISDc and the satellite data perform worse. The analysis of the data assimilation choices highlights that the selection of the observation error seems to have the largest influence on discharge simulation. Finally, the bias correction approaches have a lower effect, and the selection of linear techniques is preferable. The assessment of all the components involved in the data assimilation procedure provides a clear understanding of the results, and it is advised to follow a similar procedure in this kind of study.
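    A stochastic EnKF analysis step of the kind used here can be sketched in a few lines. The state dimension, observation operator and error variances below are illustrative, not those of MISDc or ASCAT, and the bias correction that precedes the update in the paper is assumed already applied.

        import numpy as np

        def enkf_analysis(E, y, H, r_var, rng):
            """Stochastic EnKF update. E is (n_state, n_ens), y the observed
            (bias-corrected) soil moisture value, H the observation operator,
            r_var the observation-error variance."""
            n = E.shape[1]
            A = E - E.mean(axis=1, keepdims=True)        # ensemble anomalies
            HE = H @ E
            HA = HE - HE.mean(axis=1, keepdims=True)
            var_hh = (HA @ HA.T) / (n - 1) + r_var       # innovation variance
            K = (A @ HA.T) / (n - 1) / var_hh            # Kalman gain (scalar obs)
            y_pert = y + rng.normal(0.0, np.sqrt(r_var), n)  # perturbed obs
            return E + K @ (y_pert[None, :] - HE)        # updated ensemble

        rng = np.random.default_rng(0)
        E = rng.normal(0.3, 0.05, (4, 32))      # 4 model states x 32 members
        H = np.array([[1.0, 0.0, 0.0, 0.0]])    # observe surface-layer moisture
        print(enkf_analysis(E, 0.35, H, 0.02 ** 2, rng).mean(axis=1))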
  316. Monte Carlo technique for very large Ising models

    NASA Astrophysics Data System (ADS)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
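    True multispin coding packs many spins into one machine word and updates them with bitwise operations; a milder, vectorized checkerboard Metropolis sweep conveys the same kinetic Ising update on a much smaller lattice. The lattice size, temperature scale and sweep count below are illustrative, not the paper's CDC Cyber 76 program.

        import numpy as np

        def metropolis_sweep(s, beta, rng):
            """One checkerboard Metropolis sweep of a 3-D Ising lattice
            (J = 1, periodic boundaries): each sublattice is updated at
            once, since its sites have no neighbours of the same colour."""
            i, j, k = np.indices(s.shape)
            for parity in (0, 1):
                nn = sum(np.roll(s, d, axis=a) for a in range(3) for d in (1, -1))
                dE = 2.0 * s * nn                        # energy cost of a flip
                accept = rng.random(s.shape) < np.exp(-beta * np.clip(dE, 0.0, None))
                flip = ((i + j + k) % 2 == parity) & accept
                s[flip] *= -1
            return s

        L, Tc = 16, 4.511            # small lattice; the paper used 600^3
        s = np.ones((L, L, L), dtype=np.int8)
        rng = np.random.default_rng(0)
        for t in range(10):
            s = metropolis_sweep(s, beta=1.0 / (1.4 * Tc), rng=rng)
            print(t, s.mean())       # magnetization decay at T = 1.4 Tc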
  317. Correcting for deformation in skin-based marker systems

    PubMed

    Alexander, E J; Andriacchi, T P

    2001-03-01

    A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.

  318. Outside-In Systems Pharmacology Combines Innovative Computational Methods With High-Throughput Whole Vertebrate Studies

    PubMed

    Schulthess, Pascal; van Wijk, Rob C; Krekels, Elke H J; Yates, James W T; Spaink, Herman P; van der Graaf, Piet H

    2018-04-25

    To advance the systems approach in pharmacology, experimental models and computational methods need to be integrated from early drug discovery onward. Here, we propose outside-in model development, a model identification technique to understand and predict the dynamics of a system without requiring prior biological and/or pharmacological knowledge. The advanced data required could be obtained by whole vertebrate, high-throughput, low-resource dose-exposure-effect experimentation with the zebrafish larva. Combinations of these innovative techniques could improve early drug discovery. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  319. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.

  320. Improved Inference in Bayesian Segmentation Using Monte Carlo Sampling: Application to Hippocampal Subfield Volumetry

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Leemput, Koen Van

    2013-01-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. PMID:23773521
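    The marginalization idea, sampling free parameters rather than fixing them at point estimates, can be conveyed with a toy Metropolis sampler over a single noise parameter. The Gaussian model below is an invented stand-in for the segmentation model's parameters; only the pattern (sample, then average inferences and report spread as an "error bar") carries over.

        import numpy as np

        def marginalised_posterior(data, n_samp=5000, step=0.05, seed=0):
            """Metropolis samples of a noise scale sigma under a flat prior,
            instead of fixing sigma at a point estimate; a toy analogue of
            marginalising a Bayesian model's free parameters."""
            rng = np.random.default_rng(seed)

            def log_post(sig):
                if sig <= 0:
                    return -np.inf
                return -len(data) * np.log(sig) - np.sum(data ** 2) / (2 * sig ** 2)

            sig, samples = 1.0, []
            for _ in range(n_samp):
                prop = sig + rng.normal(0, step)          # symmetric proposal
                if np.log(rng.random()) < log_post(prop) - log_post(sig):
                    sig = prop
                samples.append(sig)
            return np.array(samples[1000:])               # discard burn-in

        data = np.random.default_rng(1).normal(0, 0.7, 50)
        s = marginalised_posterior(data)
        print("posterior mean and error bar:", s.mean(), s.std())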
  321. Predicting Structure-Function Relations and Survival following Surgical and Bronchoscopic Lung Volume Reduction Treatment of Emphysema

    PubMed

    Mondoñedo, Jarred R.; Suki, Béla

    2017-02-01

    Lung volume reduction surgery (LVRS) and bronchoscopic lung volume reduction (bLVR) are palliative treatments aimed at reducing hyperinflation in advanced emphysema. Previous work has evaluated functional improvements and survival advantage for these techniques, although their effects on the micromechanical environment in the lung have yet to be determined. Here, we introduce a computational model to simulate a force-based destruction of elastic networks representing emphysema progression, which we use to track the response to lung volume reduction via LVRS and bLVR. We find that (1) LVRS efficacy can be predicted based on pre-surgical network structure; (2) macroscopic functional improvements following bLVR are related to microscopic changes in mechanical force heterogeneity; and (3) both techniques improve aspects of survival and quality of life influenced by lung compliance, albeit while accelerating disease progression. Our model predictions yield unique insights into the microscopic origins underlying emphysema progression before and after lung volume reduction.

  322. Predicting Structure-Function Relations and Survival following Surgical and Bronchoscopic Lung Volume Reduction Treatment of Emphysema

    PubMed Central

    Mondoñedo, Jarred R.

    2017-01-01

    Lung volume reduction surgery (LVRS) and bronchoscopic lung volume reduction (bLVR) are palliative treatments aimed at reducing hyperinflation in advanced emphysema. Previous work has evaluated functional improvements and survival advantage for these techniques, although their effects on the micromechanical environment in the lung have yet to be determined. Here, we introduce a computational model to simulate a force-based destruction of elastic networks representing emphysema progression, which we use to track the response to lung volume reduction via LVRS and bLVR. We find that (1) LVRS efficacy can be predicted based on pre-surgical network structure; (2) macroscopic functional improvements following bLVR are related to microscopic changes in mechanical force heterogeneity; and (3) both techniques improve aspects of survival and quality of life influenced by lung compliance, albeit while accelerating disease progression. Our model predictions yield unique insights into the microscopic origins underlying emphysema progression before and after lung volume reduction. PMID:28182686
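    A toy version of force-based network destruction can be sketched with parallel springs under a shared load: repeatedly delete the most-loaded element and watch compliance rise, loosely mirroring emphysema progression in the model above. The spring count, stiffness range and load are illustrative, and this 1-D arrangement is far simpler than the paper's elastic networks.

        import numpy as np

        rng = np.random.default_rng(2)
        k = rng.uniform(0.5, 1.5, 200)          # stiffnesses of parallel springs
        F = 10.0                                # fixed total load on the network

        compliance = []
        for step in range(150):
            forces = F * k / k.sum()            # load shared by stiffness
            k = np.delete(k, np.argmax(forces)) # destroy the most-loaded element
            compliance.append(1.0 / k.sum())    # network softens as elements fail

        print("compliance grew by a factor of", compliance[-1] / compliance[0])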
    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library, and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
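The 2% average statistical uncertainty quoted above is the kind of figure Monte Carlo dose codes obtain with the batch method: run independent batches and take the standard error of the per-voxel batch means. A generic sketch of that estimator, not egs_brachy code:

```python
import numpy as np

# Generic sketch of the batch method for per-voxel statistical uncertainty
# in a Monte Carlo dose calculation: split histories into independent
# batches and use the standard error of the batch means. The gamma-
# distributed tallies are a stand-in for real energy-deposition scoring.

rng = np.random.default_rng(1)
n_batches, shape = 20, (10, 10, 10)

batch_dose = rng.gamma(shape=2.0, scale=0.5, size=(n_batches, *shape))

mean_dose = batch_dose.mean(axis=0)
std_err = batch_dose.std(axis=0, ddof=1) / np.sqrt(n_batches)

rel_unc = std_err / mean_dose              # relative uncertainty per voxel
print(f"average relative uncertainty: {rel_unc.mean():.3%}")
```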
323. Animal models for the study of inflammatory bowel diseases: a meta-analysis on modalities for imaging inflammatory lesions

    PubMed

    Auletta, Sveva; Bonfiglio, Rita; Wunder, Andreas; Varani, Michela; Galli, Filippo; Borri, Filippo; Scimeca, Manuel; Niessen, Heiko G; Schönberger, Tanja; Bonanno, Elena

    2018-03-01

    Inflammatory bowel diseases are lifelong disorders affecting the gastrointestinal tract, characterized by intermittent disease flares and periods of remission with a progressive and destructive nature. Unfortunately, the exact etiology is still not completely known, so a causal therapy to cure the disease is not yet available. Current treatment options mainly encompass the use of non-specific anti-inflammatory agents and immunosuppressive drugs that cause significant side effects, often with a negative impact on patients' quality of life. As the majority of patients need a long-term follow-up, it would be ideal to rely on a non-invasive technique with good compliance. Currently, the gold standard diagnostic tools for managing IBD are invasive procedures such as colonoscopy and histopathology. Nevertheless, recent advances in imaging technology continue to improve the ability of imaging techniques to non-invasively monitor disease activity and treatment response in preclinical models of IBD. Novel and emerging imaging techniques not only allow direct visualization of intestinal inflammation, but also enable molecular imaging and targeting of specific alterations of the inflamed murine mucosa. Furthermore, molecular imaging advances allow us to increase our knowledge of the critical biological pathways involved in disease progression by characterizing in vivo processes at a cellular and molecular level, enabling significant improvements in the understanding of the etiology of IBD. This review presents a critical and updated overview of the imaging advances in animal models of IBD. Our aim is to highlight the potential beneficial impact and the range of applications that imaging techniques could offer for the improvement of the clinical monitoring and management of IBD patients: diagnosis, staging, determination of therapeutic targets, monitoring therapy, evaluation of the prognosis, and personalized therapeutic approaches.
324. Dual State-Parameter Updating Scheme on a Conceptual Hydrologic Model Using Sequential Monte Carlo Filters

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among the various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
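A minimal sketch of the dual state-parameter idea: state particles are propagated and resampled (SIR) while parameter particles are kept diverse with Liu-West-style kernel smoothing. The toy storage dynamics below are an assumption for illustration, not the storage function model used in the paper.

```python
import numpy as np

# Dual state-parameter SIR particle filter with Liu-West kernel smoothing
# on the parameter, in the spirit of the DUS scheme above. The "storage"
# dynamics s_t = k*s_{t-1} + rain + noise are a toy stand-in.

rng = np.random.default_rng(2)
T, N = 100, 1000
k_true = 0.8                                   # unknown recession parameter

rain = rng.exponential(1.0, T)
s = np.zeros(T)
for t in range(1, T):
    s[t] = k_true * s[t - 1] + rain[t] + rng.normal(0, 0.1)
obs = s + rng.normal(0, 0.3, T)                # synthetic observations

states = rng.normal(0, 1, N)                   # state particles
params = rng.uniform(0.3, 1.0, N)              # parameter particles
a = 0.98                                       # Liu-West shrinkage factor

for t in range(1, T):
    # Kernel smoothing: shrink parameters toward their mean, then jitter,
    # keeping parameter diversity without inflating the variance.
    pbar, pvar = params.mean(), params.var()
    params = (a * params + (1 - a) * pbar
              + rng.normal(0, np.sqrt((1 - a ** 2) * pvar), N))

    # Propagate each state particle with its own parameter value.
    states = params * states + rain[t] + rng.normal(0, 0.1, N)

    # Weight by the observation likelihood and resample (SIR).
    w = np.exp(-0.5 * ((obs[t] - states) / 0.3) ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    states, params = states[idx], params[idx]

print(f"estimated parameter: {params.mean():.3f} (truth {k_true})")
```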
325. Localization Versus Abstraction: A Comparison of Two Search Reduction Techniques

    NASA Technical Reports Server (NTRS)

    Lansky, Amy L.

    1992-01-01

    There has been much recent work on the use of abstraction to improve planning behavior and cost. Another technique for dealing with the inherently explosive cost of planning is localization. This paper compares the relative strengths of localization and abstraction in reducing planning search cost. In particular, localization is shown to subsume abstraction. Localization techniques can model the various methods of abstraction that have been used, but also provide a much more flexible framework, with a broader range of benefits.

326. Protein homology model refinement by large-scale energy optimization

    PubMed

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
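The search problem described above can be caricatured in a few lines: perturb a partially incorrect model and run restarted local minimizations, keeping the lowest-energy result. The rugged 1D energy below is a synthetic stand-in for a molecular force field, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of refinement as "search for the lowest-energy nearby
# structure": perturb a starting model and locally minimize a rugged
# energy, keeping the best result found. The energy is a synthetic 1D
# surface with several local minima.

rng = np.random.default_rng(3)

def energy(x):
    x = x[0]
    return 0.05 * (x - 2.0) ** 2 + np.sin(3.0 * x)

x_start = np.array([0.0])                      # "partially incorrect model"
best_x, best_e = x_start, energy(x_start)
for _ in range(30):                            # restarted local searches
    x0 = best_x + rng.normal(0, 1.0, size=1)   # perturb the current best
    res = minimize(energy, x0)                 # local descent
    if res.fun < best_e:
        best_x, best_e = res.x, res.fun

print(f"refined x = {best_x[0]:.3f}, energy = {best_e:.3f}")
```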
327. Initial Skill Acquisition of Handrim Wheelchair Propulsion: A New Perspective

    PubMed

    Vegter, Riemer J K; de Groot, Sonja; Lamoth, Claudine J; Veeger, Dirkjan Hej; van der Woude, Lucas H V

    2014-01-01

    To gain insight into cyclic motor learning processes, handrim wheelchair propulsion is a suitable cyclic task, to be learned during early rehabilitation and novel to almost every individual. To propel in an energy-efficient manner, wheelchair users must learn to control bimanually applied forces onto the rims, preserving both speed and direction of locomotion. The purpose of this study was to evaluate mechanical efficiency and propulsion technique during the initial stage of motor learning. Therefore, 70 naive able-bodied men received 12 min of uninstructed wheelchair practice, consisting of three 4-min blocks separated by 2 min of rest. Practice was performed on a motor-driven treadmill at a fixed belt speed and constant power output relative to body mass. Energy consumption and the kinetics of propulsion technique were continuously measured. Participants significantly increased their mechanical efficiency and changed their propulsion technique from a high-frequency mode with a lot of negative work to a longer-slower movement pattern with fewer power losses. Furthermore, a multi-level model showed propulsion technique to relate to mechanical efficiency. Finally, improvers and non-improvers were identified. The non-improving group was already more efficient and had a better propulsion technique in the first block of practice (i.e., the fourth minute). These findings link propulsion technique to mechanical efficiency, support the importance of a correct propulsion technique for wheelchair users, and show motor learning differences.

328. Lifted worm algorithm for the Ising model

    NASA Astrophysics Data System (ADS)

    Elçi, Eren Metin; Grimm, Jens; Ding, Lijie; Nasrawi, Abrahim; Garoni, Timothy M.; Deng, Youjin

    2018-04-01

    We design an irreversible worm algorithm for the zero-field ferromagnetic Ising model by using the lifting technique. We study the dynamic critical behavior of an energy-like observable on both the complete graph and toroidal grids, and compare our findings with reversible algorithms such as the Prokof'ev-Svistunov worm algorithm. Our results show that the lifted worm algorithm improves the dynamic exponent of the energy-like observable on the complete graph and leads to a significant constant improvement on toroidal grids.
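The worm algorithm itself is involved, but the lifting construction it uses can be shown on a toy target: augment the state with a direction variable, always propose moves in that direction, and reverse direction on rejection. The resulting chain is irreversible yet keeps the target as its marginal stationary distribution; this is the generic lifting idea, not the lifted worm algorithm itself.

```python
import numpy as np

# Lifted Metropolis on a 1D discrete target: state (x, sigma) with
# direction sigma in {-1, +1}; propose x + sigma; flip sigma on
# rejection (or at a wall). Irreversible, but pi remains the marginal
# stationary distribution.

rng = np.random.default_rng(4)
L = 51
pi = np.exp(-0.5 * ((np.arange(L) - 25) / 6.0) ** 2)   # unnormalized target
pi /= pi.sum()

x, sigma = 25, 1
counts = np.zeros(L, dtype=int)
for _ in range(200000):
    y = x + sigma
    if 0 <= y < L and rng.uniform() < min(1.0, pi[y] / pi[x]):
        x = y                  # accepted: keep moving in direction sigma
    else:
        sigma = -sigma         # rejected or wall: reverse direction
    counts[x] += 1

print("empirical vs target mass at the mode:",
      counts[25] / counts.sum(), pi[25])
```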
329. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.

330. Improvement of geomagnetic core field modeling with a priori information about Gauss coefficient correlations

    NASA Astrophysics Data System (ADS)

    Schachtschneider, R.; Rother, M.; Lesur, V.

    2013-12-01

    We introduce a method that enables us to account for existing correlations between Gauss coefficients in core field modelling. The information about the correlations is obtained from a highly accurate field model based on CHAMP data, e.g. the GRIMM-3 model. We compute the covariance matrices of the geomagnetic field, the secular variation, and the acceleration up to degree 18 and use these in the regularization scheme of the core field inversion. For testing our method we followed two different approaches, applying it to two different synthetic satellite data sets. The first is a short data set with a time span of only three months. Here we test how the information about correlations helps to obtain an accurate model when only very little information is available. The second data set is a large one covering several years. In this case, besides reducing the residuals in general, we focus on the improvement of the model near the boundaries of the data set, where the acceleration is generally more difficult to handle. In both cases the obtained covariance matrices are included in the damping scheme of the regularization. That way, information from scales that could otherwise not be resolved by the data can be extracted. We show that by using this technique we are able to improve the models of the field and the secular variation for both the short-term and the long-term data sets, compared to approaches using more conventional regularization techniques.
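Including coefficient correlations in the damping amounts, in the simplest linear-least-squares reading, to replacing an identity damping matrix with an inverse prior covariance. A generic sketch with a synthetic operator and covariance, not the GRIMM-3 quantities:

```python
import numpy as np

# Regularized inversion with an a priori covariance C on the model
# coefficients: minimize ||G m - d||^2 + lam * m^T C^{-1} m. Correlated
# coefficients are damped jointly rather than independently, which is
# the idea behind using Gauss-coefficient covariances above.

rng = np.random.default_rng(5)
n_data, n_model = 200, 20

idx = np.arange(n_model)
C = 0.5 ** np.abs(np.subtract.outer(idx, idx))    # AR(1)-style covariance

G = rng.normal(size=(n_data, n_model))            # forward operator
m_true = rng.multivariate_normal(np.zeros(n_model), C)
d = G @ m_true + rng.normal(0, 0.5, n_data)

lam = 1.0
m_hat = np.linalg.solve(G.T @ G + lam * np.linalg.inv(C), G.T @ d)
print("rms error:", np.sqrt(np.mean((m_hat - m_true) ** 2)))
```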
331. A simple lightning assimilation technique for improving ...

    EPA Pesticide Factsheets

    Convective rainfall is often a large source of error in retrospective modeling applications. In particular, positive rainfall biases commonly exist during summer months due to overactive convective parameterizations. In this study, lightning assimilation was applied in the Kain-Fritsch (KF) convective scheme to improve retrospective simulations using the Weather Research and Forecasting (WRF) model. The assimilation method has a straightforward approach: force KF deep convection where lightning is observed and, optionally, suppress deep convection where lightning is absent. WRF simulations were made with and without lightning assimilation over the continental United States for July 2012, July 2013, and January 2013. The simulations were evaluated against NCEP stage-IV precipitation data and MADIS near-surface meteorological observations. In general, the use of lightning assimilation considerably improves the simulation of summertime rainfall. For example, the July 2012 monthly averaged bias of 6 h accumulated rainfall is reduced from 0.54 to 0.07 mm and the spatial correlation is increased from 0.21 to 0.43 when lightning assimilation is used. Statistical measures of near-surface meteorological variables also are improved. Consistent improvements also are seen for the July 2013 case. These results suggest that this lightning assimilation technique has the potential to substantially improve simulation of warm-season rainfall in retrospective WRF applications.
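Stripped of the WRF/Kain-Fritsch machinery, the assimilation rule is a per-grid-cell decision. The schematic below encodes only that control logic; the function name and return convention are invented for illustration, not part of the actual scheme's code.

```python
import numpy as np

# Schematic of the assimilation rule: force deep convection where
# lightning is observed and (optionally) suppress it where lightning is
# absent. Real implementations act on trigger functions inside the
# convective scheme; this only encodes the decision per grid cell.

def convection_control(lightning_obs, suppress_where_absent=True):
    """lightning_obs: 2D boolean array of observed flashes per grid cell.
    Returns +1 (force), -1 (suppress), or 0 (leave the scheme alone)."""
    control = np.zeros(lightning_obs.shape, dtype=int)
    control[lightning_obs] = 1
    if suppress_where_absent:
        control[~lightning_obs] = -1
    return control

obs = np.array([[True, False], [False, True]])
print(convection_control(obs))
```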
332. Recent progress and future directions in protein-protein docking

    PubMed

    Ritchie, David W

    2008-02-01

    This article gives an overview of recent progress in protein-protein docking and identifies several directions for future research. Recent results from the CAPRI blind docking experiments show that docking algorithms are steadily improving in both reliability and accuracy. Current docking algorithms employ a range of efficient search and scoring strategies, including e.g. fast Fourier transform correlations, geometric hashing, and Monte Carlo techniques. These approaches can often produce a relatively small list of up to a few thousand orientations, amongst which a near-native binding mode is often observed. However, despite the use of improved scoring functions which typically include models of desolvation, hydrophobicity, and electrostatics, current algorithms still have difficulty in identifying the correct solution from the list of false positives, or decoys. Nonetheless, significant progress is being made through better use of bioinformatics, biochemical, and biophysical information such as e.g. sequence conservation analysis, protein interaction databases, alanine scanning, and NMR residual dipolar coupling restraints to help identify key binding residues. Promising new approaches to incorporate models of protein flexibility during docking are being developed, including the use of molecular dynamics snapshots, rotameric and off-rotamer searches, internal coordinate mechanics, and principal component analysis based techniques. Some investigators now use explicit solvent models in their docking protocols. Many of these approaches can be computationally intensive, although new silicon chip technologies such as programmable graphics processor units are beginning to offer competitive alternatives to conventional high performance computer systems. As cryo-EM techniques improve apace, docking NMR and X-ray protein structures into low resolution EM density maps is helping to bridge the resolution gap between these complementary techniques. The use of symmetry and fragment assembly constraints is also helping to make possible docking-based predictions of large multimeric protein complexes. In the near future, the closer integration of docking algorithms with protein interface prediction software, structural databases, and sequence analysis techniques should help produce better predictions of protein interaction networks and more accurate structural models of the fundamental molecular interactions within the cell.
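One of the search strategies named above, fast Fourier transform correlation, fits in a few lines: the shape-overlap score for every relative translation of two density grids is a cross-correlation, computable with three FFTs. Real docking codes use complex-valued weights to penalize core overlap; the random grids here are stand-ins for digitized protein shapes.

```python
import numpy as np

# FFT correlation trick used by grid-based rigid docking: the overlap
# score over all translations of a ligand grid relative to a receptor
# grid is a circular cross-correlation, computable in O(N log N).

rng = np.random.default_rng(6)
n = 32
receptor = rng.random((n, n, n)) < 0.1           # occupied receptor cells
ligand = rng.random((n, n, n)) < 0.1             # occupied ligand cells

R = np.fft.fftn(receptor.astype(float))
Lg = np.fft.fftn(ligand.astype(float))
score = np.fft.ifftn(R * np.conj(Lg)).real       # overlap for every shift

best = np.unravel_index(np.argmax(score), score.shape)
print("best translation (voxels):", best, "overlap:", score[best])
```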
333. Water balance models in one-month-ahead streamflow forecasting

    USGS Publications Warehouse

    Alley, William M.

    1985-01-01

    Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year, the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
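The simple autoregressive benchmark with monthly bias correction can be sketched directly. Synthetic monthly flows stand in for the New Jersey records; the AR(1) fit plus per-month bias terms follow the pattern described above.

```python
import numpy as np

# One-month-ahead AR(1) forecast with a per-month bias correction:
# fit y_t = c + phi * y_{t-1} on a training window, estimate the mean
# residual for each calendar month, and add it back at forecast time.

rng = np.random.default_rng(7)
months = np.arange(240) % 12
flow = 10 + 3 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 240)

train, test = flow[:180], flow[180:]
phi, c = np.polyfit(train[:-1], train[1:], 1)      # AR(1) coefficients

pred_train = c + phi * train[:-1]
resid = train[1:] - pred_train
bias = np.array([resid[months[1:180] == m].mean() for m in range(12)])

pred = c + phi * test[:-1] + bias[months[181:]]    # bias-corrected forecasts
rmse = np.sqrt(np.mean((test[1:] - pred) ** 2))
print(f"one-month-ahead RMSE: {rmse:.2f}")
```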
334. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    ERIC Educational Resources Information Center

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

335. Applications of Electromagnetic Levitation and Development of Mathematical Models: A Review of the Last 15 Years (2000 to 2015)

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Shi, Zhe; Li, Donghui; Zhang, Guifang; Yang, Yindong; McLean, Alexander; Chattopadhyay, Kinnor

    2016-02-01

    Electromagnetic levitation (EML) is a contact-less, high-temperature technique which has had extensive application with respect to the investigation of both thermophysical and thermochemical properties of liquid alloy systems. The varying magnetic field generates an induced current inside the metal droplet, and interactions are created which produce both the Lorentz force that provides support against gravity and the Joule heating effect that melts the levitated specimen. Since metal droplets are opaque, transport phenomena inside the droplet cannot be visualized. To address this aspect, several numerical modeling techniques have been developed. The present work reviews the applications of EML techniques as well as the contributions that have been made by the use of mathematical modeling to improve understanding of the inherent processes which are characteristic features of the levitation system.

336. Vision system and three-dimensional modeling techniques for quantification of the morphology of irregular particles

    NASA Astrophysics Data System (ADS)

    Smith, Lyndon N.; Smith, Melvyn L.

    2000-10-01

    Particulate materials undergo processing in many industries, and therefore there are significant commercial motivators for attaining improvements in the flow and packing behavior of powders. This can be achieved by modeling the effects of particle size, friction, and, most importantly, particle shape or morphology. The method presented here for simulating powders employs a random number generator to construct a model of a random particle by combining a sphere with a number of smaller spheres. The resulting 3D model particle has a nodular type of morphology, which is similar to that exhibited by the atomized powders that are used in the bulk of powder metallurgy (PM) manufacture. The irregularity of the model particles is dependent upon vision system data gathered from microscopic analysis of real powder particles. A methodology is proposed whereby randomly generated model particles of various sizes and irregularities can be combined in a random packing simulation. The proposed Monte Carlo technique would allow incorporation of the effects of gravity, wall friction, and inter-particle friction. The improvements in simulation realism that this method is expected to provide would prove useful for controlling powder production, and for predicting die fill behavior during the production of PM parts.
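The particle construction is easy to make concrete: a base sphere decorated with a random number of smaller surface spheres. Counts and radii below are illustrative assumptions, not values calibrated to the vision-system data mentioned in the abstract.

```python
import numpy as np

# Random nodular particle: one base sphere plus a random number of
# smaller spheres centered on its surface, giving the nodular morphology
# described above. Parameters are illustrative defaults.

rng = np.random.default_rng(8)

def nodular_particle(base_r=1.0, n_nodules=(3, 8), nodule_r=(0.2, 0.4)):
    """Return a list of (center, radius) spheres forming one particle."""
    spheres = [(np.zeros(3), base_r)]
    for _ in range(rng.integers(*n_nodules)):
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)                 # random direction on sphere
        r = rng.uniform(*nodule_r)
        spheres.append((v * base_r, r))        # nodule centered on surface
    return spheres

particle = nodular_particle()
print(f"{len(particle)} spheres; nodule radii:",
      [round(r, 2) for _, r in particle[1:]])
```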
337. Minimum Energy Routing through Interactive Techniques (MERIT) modeling

    NASA Technical Reports Server (NTRS)

    Wylie, Donald P.

    1988-01-01

    The MERIT program is designed to demonstrate the feasibility of fuel savings by airlines through improved route selection using wind observations from their own fleet. After a discussion of weather and aircraft data, manually correcting wind fields, automatic corrections to wind fields, and short-range prediction models, it is concluded that improvements in wind information are possible if a system is developed for analyzing wind observations and correcting the forecasts made by the major models. One data handling system, McIDAS, can easily collect and display wind observations and model forecasts. Changing the wind forecasts beyond the time of the most recent observations is more difficult; an Australian Mesoscale Model was tested with promising but not definitive results.
338. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces an equivalent system of differential equations to a system in a given model. Second, since its equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

339. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico

    PubMed

    Johansson, Michael A; Reich, Nicholas G; Hota, Aditi; Brownstein, John S; Santillana, Mauricio

    2016-09-26

    Dengue viruses, which infect millions of people per year worldwide, cause large epidemics that strain healthcare systems. Despite diverse efforts to develop forecasting tools including autoregressive time series, climate-driven statistical, and mechanistic biological models, little work has been done to understand the contribution of different components to improved prediction. We developed a framework to assess and compare dengue forecasts produced from different types of models and evaluated the performance of seasonal autoregressive models with and without climate variables for forecasting dengue incidence in Mexico. Climate data did not significantly improve the predictive power of seasonal autoregressive models. Short-term and seasonal autocorrelation were key to improving short-term and long-term forecasts, respectively. Seasonal autoregressive models captured a substantial amount of dengue variability, but better models are needed to improve dengue forecasting. This framework contributes to the sparse literature of infectious disease prediction model evaluation, using state-of-the-art validation techniques such as out-of-sample testing and comparison to an appropriate reference model.
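The evaluation pattern described, a seasonal autoregressive model judged out of sample against an appropriate reference, might look as follows with statsmodels; the synthetic monthly series is a stand-in for the Mexico dengue counts.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Fit a seasonal AR model on a training window, forecast out of sample,
# and compare against a seasonal-naive reference (last year's values).

rng = np.random.default_rng(9)
t = np.arange(180)
y = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 180)

train, test = y[:168], y[168:]                      # hold out final 12 months

model = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12))
fc = model.fit(disp=False).forecast(steps=12)       # out-of-sample forecast

ref = train[-12:]                                   # seasonal-naive reference
mae_model = np.mean(np.abs(test - fc))
mae_ref = np.mean(np.abs(test - ref))
print(f"model MAE {mae_model:.2f} vs reference MAE {mae_ref:.2f}")
```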
340. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

341. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors

    PubMed

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01
    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution, and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to address effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% of mean predicted probabilities) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point to develop effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
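A sketch of the winning technique: a Random Forest trained on the four static predictors, returning a presence probability per cell. The synthetic presence rule below is an assumption for illustration, not Ligurian survey data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Random Forest habitat model on static seafloor predictors: depth,
# bottom slope, distance to the 100 m contour, distance to coast.

rng = np.random.default_rng(10)
n = 2000
X = np.column_stack([
    rng.uniform(0, 500, n),       # depth (m)
    rng.uniform(0, 30, n),        # bottom slope (deg)
    rng.uniform(0, 20, n),        # distance to 100 m contour (km)
    rng.uniform(0, 15, n),        # distance to coast (km)
])
# Synthetic presence rule: shallow cells near the coast are favoured.
y = ((X[:, 0] < 150) & (X[:, 3] < 6) & (rng.uniform(size=n) < 0.8)).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
p = rf.predict_proba(X[:5])[:, 1]                 # presence probability
print("predicted presence probabilities:", np.round(p, 2))
```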
342. The use of discrete-event simulation modelling to improve radiation therapy planning processes

    PubMed

    Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven

    2009-07-01

    The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.

343. Improving LHC searches for dark photons using lepton-jet substructure

    NASA Astrophysics Data System (ADS)

    Barello, G.; Chang, Spencer; Newby, Christopher A.; Ostdiek, Bryan

    2017-03-01

    Collider signals of dark photons are an exciting probe for new gauge forces and are characterized by events with boosted lepton jets. Existing techniques are efficient in searching for muonic lepton jets but, due to substantial backgrounds, have difficulty constraining lepton jets containing only electrons. This is unfortunate since upcoming intensity frontier experiments are sensitive to dark photon masses which only allow electron decays. Analyzing a recently proposed model of kinetic mixing, with new scalar particles decaying into dark photons, we find that existing techniques for electron jets can be substantially improved. We show that using lepton-jet-substructure variables, in association with a boosted decision tree, improves background rejection, significantly increasing the LHC's reach for dark photons in this region of parameter space.
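The analysis strategy reduces to training a boosted decision tree on substructure variables and cutting on its score. The two input variables and their distributions below are invented placeholders for the paper's actual lepton-jet-substructure observables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Boosted decision tree separating hypothetical "signal" electron jets
# from background using two invented substructure-like variables.

rng = np.random.default_rng(11)
n = 5000
sig = np.column_stack([rng.normal(0.05, 0.02, n), rng.normal(0.9, 0.05, n)])
bkg = np.column_stack([rng.normal(0.15, 0.05, n), rng.normal(0.6, 0.15, n)])

X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)

# Background rejection at a fixed score cut is the usual figure of merit.
scores_bkg = bdt.predict_proba(bkg)[:, 1]
print(f"background passing a 0.5 cut: {(scores_bkg > 0.5).mean():.3%}")
```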
344. School Improvement Goal Setting: A Collaborative Model

    ERIC Educational Resources Information Center

    Snyder, Karolyn J.; And Others

    1983-01-01

    Describes the successful use of the Delphi Dialog Technique (a goal-setting process) at East High School, Anchorage, Alaska, where it was used to obtain consensus among staff members about school-growth targets. (JW)

345. New developments in exposure assessment: the impact on the practice of health risk assessment and epidemiological studies

    PubMed

    Nieuwenhuijsen, Mark; Paustenbach, Dennis; Duarte-Davidson, Raquel

    2006-12-01

    The field of exposure assessment has matured significantly over the past 10-15 years. Dozens of studies have measured the concentrations of numerous chemicals in many media to which humans are exposed. Others have catalogued the various exposure pathways and identified typical values which can be used in the exposure calculations for the general population, such as the amount of water or soil ingested per day or the percent of a chemical that can pass through the skin. In addition, studies of the duration of exposure for many tasks (e.g. showering, jogging, working in the office) have been conducted, which allow for more general descriptions of the likely range of exposures. All of this information, as well as the development of new and better models (e.g. air dispersion or groundwater models), allows for better estimates of exposure. In addition to identifying better exposure factors, and better mathematical models for predicting the aerial distribution of chemicals, the conduct of simulation studies and dose-reconstruction studies can offer extraordinary opportunities for filling in data gaps regarding historical exposures which are critical to improving the power of epidemiology studies. The use of probabilistic techniques such as Monte Carlo analysis and Bayesian statistics has revolutionized the practice of exposure assessment and greatly enhanced the quality of the risk characterization. Lastly, the field of epidemiology is about to undergo a sea change with respect to the exposure component, because each year better environmental and exposure models, statistical techniques and new biological monitoring techniques are being introduced. This paper reviews these techniques and discusses where additional research is likely to pay a significant dividend. Exposure assessment techniques are now available which can significantly improve the quality of epidemiology and health risk assessment studies and vastly improve their usefulness. As more quantitative exposure components can now be incorporated into these studies, they can be better used to identify safe levels of exposure using customary risk assessment methodologies. Examples are drawn from both environmental and occupational studies illustrating how these techniques have been used to better understand exposure to specific chemicals. Some thoughts are also presented on what lessons have been learned about conducting exposure assessment for health risk assessments and epidemiological studies.
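The Monte Carlo analysis mentioned above is commonly built around a standard average-daily-dose equation with sampled exposure factors instead of single point values. A sketch, with illustrative (not recommended) distribution choices:

```python
import numpy as np

# Probabilistic exposure assessment: sample the exposure factors and
# propagate them through the ingestion-dose equation
# ADD = C * IR * EF * ED / (BW * AT), reporting dose percentiles.

rng = np.random.default_rng(12)
n = 100_000

conc = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)   # mg/L in water
intake = rng.lognormal(mean=np.log(1.4), sigma=0.3, size=n)  # L/day ingested
ef, ed = 350.0, 30.0                                         # days/yr, years
bw = rng.normal(70.0, 12.0, size=n).clip(40, 120)            # body weight, kg
at = 70.0 * 365.0                                            # averaging time, days

add = conc * intake * ef * ed / (bw * at)                    # mg/kg-day
print(f"median {np.median(add):.2e}, "
      f"95th percentile {np.percentile(add, 95):.2e}")
```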
346. Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses

    NASA Technical Reports Server (NTRS)

    Boardman, Joseph W.

    1995-01-01

    Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100-line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
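Estimating the two noise parameters from a dark cube is essentially one line each: per-channel standard deviations and the channel-by-channel covariance of the fluctuations about the mean. The simulated cube below is smaller than the real 61,400-spectrum 'postcal' product.

```python
import numpy as np

# Per-channel noise levels and the channel-to-channel noise covariance,
# estimated from zero-signal (dark current) spectra. The simulated data
# include mild inter-channel correlation as a stand-in for real frames.

rng = np.random.default_rng(13)
n_spectra, n_channels = 10_000, 224

base = rng.normal(0, 1.0, size=(n_spectra, 1))       # shared fluctuation
dark = 0.3 * base + rng.normal(0, 1.0, size=(n_spectra, n_channels))

noise_std = dark.std(axis=0, ddof=1)          # per-channel noise level
noise_cov = np.cov(dark, rowvar=False)        # 224 x 224 covariance matrix
print(noise_std[:3], noise_cov.shape)
```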
347. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improve the skill of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al., 2015b formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skills comparable to, and sometimes better than, the established propagation-based technique.

348. Information loss and reconstruction in diffuse fluorescence tomography

    PubMed Central

    Bonfert-Taylor, Petra; Leblond, Frederic; Holt, Robert W.; Tichauer, Kenneth; Pogue, Brian W.; Taylor, Edward C.

    2012-01-01

    This paper is a theoretical exploration of spatial resolution in diffuse fluorescence tomography. It is demonstrated that, given a fixed imaging geometry, one cannot, relative to standard techniques such as Tikhonov regularization and truncated singular value decomposition, improve the spatial resolution of the optical reconstructions by increasing the node density of the mesh considered for modeling light transport. Using techniques from linear algebra, it is shown that, as one increases the number of nodes beyond the number of measurements, information is lost by the forward model. It is demonstrated that this information cannot be recovered using various common reconstruction techniques. Evidence is provided showing that this phenomenon is related to the smoothing properties of the elliptic forward model that is used in the diffusion approximation to light transport in tissue. This argues for reconstruction techniques that are sensitive to boundaries, such as L1-reconstruction and the use of priors, as well as the natural approach of building a measurement geometry that reflects the desired image resolution. PMID:22472763
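The information-loss argument is easy to reproduce with truncated SVD on an underdetermined linear system: only about as many modes as there are measurements are recoverable, regardless of mesh density. Random matrices stand in for the smoothing diffusion forward model.

```python
import numpy as np

# Truncated SVD reconstruction for an underdetermined forward model:
# with n_meas measurements and n_nodes >> n_meas unknowns, at most
# n_meas singular modes carry information, however fine the mesh.

rng = np.random.default_rng(14)
n_meas, n_nodes = 40, 400

A = rng.normal(size=(n_meas, n_nodes))        # stand-in forward model
x_true = np.zeros(n_nodes)
x_true[100:120] = 1.0                         # a localized fluorophore patch
b = A @ x_true + rng.normal(0, 0.01, n_meas)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 30                                        # keep well-conditioned modes
x_rec = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])  # truncated SVD inverse

print("residual:", np.linalg.norm(A @ x_rec - b))
```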
  353. Dynamics of the brain: Mathematical models and non-invasive experimental studies

    NASA Astrophysics Data System (ADS)

    Toronov, V.; Myllylä, T.; Kiviniemi, V.; Tuchin, V. V.

    2013-10-01

    Dynamics is an essential aspect of brain function. In this article we review theoretical models of neural and haemodynamic processes in the human brain and experimental non-invasive techniques developed to study brain functions and to measure dynamic characteristics, such as neurodynamics, neurovascular coupling, haemodynamic changes due to brain activity and autoregulation, and the cerebral metabolic rate of oxygen. We focus on emerging theoretical biophysical models and experimental functional neuroimaging results, obtained mostly by functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). We also include our current results on the effects of blood pressure variations on cerebral haemodynamics and on simultaneous measurements of fast processes in the brain by near-infrared spectroscopy and a novel functional MRI technique called magnetic resonance encephalography. Based on rapid progress in theoretical and experimental techniques, growing computational capacities, and the combined use of rapidly improving and emerging neuroimaging techniques, we anticipate great advances in the overall knowledge of the human brain during the next decade.

  354. Military Spending and Economic Well-Being in the American States: The Post-Vietnam War Era

    ERIC Educational Resources Information Center

    Borch, Casey; Wallace, Michael

    2010-01-01

    Using growth curve modeling techniques, this research investigates whether military spending improved or worsened the economic well-being of citizens within the American states during the post-Vietnam War period. We empirically test the military Keynesianism claim that military spending improves the economic conditions of citizens through its use…

  355. Training in Cerebral Aneurysm Clipping Using Self-Made 3-Dimensional Models

    PubMed

    Mashiko, Toshihiro; Kaneko, Naoki; Konno, Takehiko; Otani, Keisuke; Nagayama, Rie; Watanabe, Eiju

    Recently, there have been increasingly fewer opportunities for junior surgeons to receive on-the-job training. Therefore, we created custom-built three-dimensional (3D) surgical simulators for training in cerebral aneurysm clipping. Three patient-specific models were composed of a trimmed skull, a retractable brain, and a hollow elastic aneurysm with its parent artery. The brain models were created using 3D printers via a casting technique; the artery models were made by 3D printing and a lost-wax technique. Four residents and 2 junior neurosurgeons attended the training courses. The trainees retracted the brain, observed the parent arteries and aneurysmal neck, selected the clip(s), and clipped the neck of the aneurysm. The duration of the simulation was recorded. A senior neurosurgeon then assessed the trainee's technical skill and explained how to improve his/her performance using a video of the actual surgery. Subsequently, the trainee attempted the clipping simulation again, using the same model. After the course, the senior neurosurgeon assessed each trainee's technical skill, and the trainee critiqued the usefulness of the model and the effectiveness of the training course. Trainees succeeded in performing the simulation in line with an actual surgery, and their skills tended to improve upon completion of the training. These simulation models are easy to create, and we believe that they are very useful for training junior neurosurgeons in the surgical techniques needed for cerebral aneurysm clipping. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
  356. Methods to Improve the Maintenance of the Earth Catalog of Satellites During Severe Solar Storms

    NASA Technical Reports Server (NTRS)

    Wilkin, Paul G.; Tolson, Robert H.

    1998-01-01

    The objective of this thesis is to investigate methods to improve the ability to maintain the inventory of orbital elements of Earth satellites during periods of atmospheric disturbance brought on by severe solar activity. Existing techniques do not account for such atmospheric dynamics, resulting in tracking errors of several seconds in predicted crossing time. Two techniques are examined to reduce these tracking errors. First, density predicted from various atmospheric models is fit to the orbital decay rate for a number of satellites; an orbital decay model is then developed that could be used to reduce tracking errors by accounting for atmospheric changes. The second approach utilizes a Kalman filter to estimate the orbital decay rate of a satellite after every observation, and the new information is used to predict the next observation. Results from the first approach demonstrated the feasibility of building an orbital decay model based on predicted atmospheric density: the correlation of atmospheric density to orbital decay was as high as 0.88. However, it is clear that contemporary atmospheric models need further improvement in modeling the polar-region density perturbations brought on by solar activity. The second approach resulted in a dramatic reduction in tracking errors for certain satellites during severe solar storms; in the limited cases studied, the reduction in tracking errors ranged from 25 to 79 percent.
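    The second approach reduces to a one-dimensional Kalman filter: after each pass, update the decay-rate estimate, then use it to predict the next crossing. A minimal sketch with a random-walk state model; the noise variances and the synthetic "storm" are illustrative assumptions.

        import numpy as np

        def kalman_decay(observations, q=1e-4, r=1e-2):
            """Track a slowly varying orbital decay rate from noisy per-pass
            measurements, using a random-walk state model (illustrative values)."""
            x, P = observations[0], 1.0      # initial rate estimate and variance
            estimates = []
            for z in observations[1:]:
                P += q                       # predict: random-walk variance growth
                K = P / (P + r)              # Kalman gain
                x += K * (z - x)             # update with the new observation
                P *= (1.0 - K)
                estimates.append(x)
            return np.array(estimates)

        # Toy storm: the decay rate doubles mid-series
        obs = np.r_[np.full(20, 1.0), np.full(20, 2.0)] + 0.1 * np.random.randn(40)
        print(kalman_decay(obs)[-1])         # settles near the new rate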
  357. Assimilation of GOES-Derived Skin Temperature Tendencies into Mesoscale Models to Improve Forecasts of Near-Surface Air Temperature and Mixing Ratio

    NASA Technical Reports Server (NTRS)

    Lapenta, William M.; McNider, Richard T.; Suggs, Ron; Jedlovec, Gary; Robertson, Franklin R.

    1998-01-01

    A technique has been developed for assimilating GOES-derived skin temperature tendencies into the surface energy budget equation of a mesoscale model so that the simulated rate of temperature change closely agrees with the satellite observations. A critical assumption of the technique is that the availability of moisture (either from the soil or vegetation) is the least known term in the model's surface energy budget. Therefore, the simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. An advantage of this technique is that satellite temperature tendencies are assimilated in an energetically consistent manner that avoids the energy imbalances and surface stability problems that arise from direct assimilation of surface shelter temperatures. Because the rate of change of the satellite skin temperature is used rather than the absolute temperature, sensor calibration is not as critical. A further advantage for short-range forecasts (0-48 h) is that the technique does not require a complex land-surface formulation within the atmospheric model; as a result, the need to specify poorly known soil and vegetative characteristics is eliminated. The GOES assimilation technique has been incorporated into the PSU/NCAR MM5. Results will be presented to demonstrate the ability of the assimilation scheme to improve short-term (0-48 h) simulations of near-surface air temperature and mixing ratio during the warm season for several selected cases that exhibit a variety of atmospheric and land-surface conditions. In addition, validation of terms in the simulated surface energy budget will be presented using in situ data collected at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site as part of the Atmospheric Radiation Measurement (ARM) Program.

  358. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards time scales, an adequate model is the superposition of white FM, random-walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum-likelihood techniques for parameter estimation, and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements would contribute no significant improvement, even at the expense of added model complexity.
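    The three-component clock model is easy to simulate, which is one way to sanity-check a maximum-likelihood estimator: inject known white-FM, random-walk-FM, and drift parameters, then confirm they are recovered. The amplitudes and sampling interval below are arbitrary illustrative values.

        import numpy as np

        def simulate_clock(n, white=1e-12, rwalk=1e-14, drift=1e-15,
                           dt=60.0, seed=0):
            """Fractional-frequency series y(t): superposition of white FM,
            random-walk FM, and a linear frequency drift (per day)."""
            rng = np.random.default_rng(seed)
            white_fm = white * rng.standard_normal(n)
            rw_fm = rwalk * np.cumsum(rng.standard_normal(n))
            t = dt * np.arange(n)
            return white_fm + rw_fm + drift * t / 86400.0

        y = simulate_clock(100_000)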
  359. Systems thinking: what business modeling can do for public health

    PubMed

    Williams, Warren; Lyalin, David; Wingo, Phyllis A.

    2005-01-01

    Today's public health programs are complex business systems with multiple levels of collaborating federal, state, and local entities. The use of proven systems-engineering modeling techniques to analyze, align, and streamline public health operations is in its beginning stages. The authors review the initial business modeling efforts in immunization and cancer registries and present a case for broadly applying business modeling approaches to analyze and improve public health processes.

  360. Research on the Improved Image Dodging Algorithm Based on Mask Technique

    NASA Astrophysics Data System (ADS)

    Yao, F.; Hu, H.; Wan, Y.

    2012-08-01

    The remote sensing image dodging algorithm based on the Mask technique is a good method for removing uneven lightness within a single image. However, the algorithm has some open problems, such as how to set an appropriate filter size, for which there is no good solution. To address these problems, an improved algorithm is proposed. In this improved algorithm, the original image is divided into blocks, and the image blocks with different definitions are smoothed using low-pass filters with different cut-off frequencies to get the background image; for the image after subtraction, the regions with different lightness are processed using different linear transformation models. The improved algorithm achieves a better dodging result than the original one and makes the contrast of the whole image more consistent.
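    The heart of the Mask method fits in a few lines: estimate the low-frequency background with a low-pass filter, subtract it, and map the result back to the original brightness range. The single global Gaussian filter below stands in for the improved algorithm's block-wise, definition-dependent filter sizes.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dodge(image, sigma=51):
            """Mask dodging sketch for an 8-bit grayscale array: remove the
            low-frequency lightness component, keep the detail."""
            img = image.astype(float)
            background = gaussian_filter(img, sigma)   # low-pass background
            detail = img - background                  # uneven lightness removed
            out = detail + img.mean()                  # restore overall brightness
            return np.clip(out, 0, 255).astype(np.uint8)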
  361. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops

    USGS Publications Warehouse

    Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H. V.; Gusev, Y. M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O. N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E. F.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries; this database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops, whose specific objective was to examine the state of a priori parameter estimation techniques and how they can potentially be improved with observations from well-monitored hydrologic basins. Participants were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. The modeling groups carried out all the required experiments independently using eight different models, and the results have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design, and the main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement; significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. The paper concludes with a discussion of the lessons learned and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.

  362. Application of solar energy to air conditioning systems

    NASA Technical Reports Server (NTRS)

    Nash, J. M.; Harstad, A. J.

    1976-01-01

    The results of a survey of solar energy system applications to air conditioning are summarized. Techniques discussed are both solar powered (absorption cycle and heat engine/Rankine cycle) and solar related (heat pump). Brief descriptions of the physical implications of various air conditioning techniques, discussions of status, proposed technological improvements, methods of utilization, and simulation models are presented, along with an extensive bibliography of related literature.
  363. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity, which results from dark current, amplifier mismatch, and the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients; however, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
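    Per-pixel polynomial NUC amounts to fitting, for every pixel, a polynomial from raw response to the desired radiometric value at several calibration flux levels (at least order + 1 of them). A sketch with assumed frame shapes and calibration targets:

        import numpy as np

        def fit_nuc(cal_frames, targets, order=3):
            """cal_frames: (L, H, W) raw frames at L calibration flux levels;
            targets: (L,) desired uniform output per level.
            Returns per-pixel coefficients, shape (H, W, order+1), highest first."""
            L, H, W = cal_frames.shape
            coeffs = np.empty((H, W, order + 1))
            for i in range(H):
                for j in range(W):
                    coeffs[i, j] = np.polyfit(cal_frames[:, i, j], targets, order)
            return coeffs

        def apply_nuc(frame, coeffs):
            """Evaluate each pixel's correction polynomial (Horner's scheme)."""
            out = np.zeros(frame.shape, dtype=float)
            for p in range(coeffs.shape[-1]):
                out = out * frame + coeffs[:, :, p]
            return out

        # Toy use: 5 flux levels, an 8x8 detector
        coeffs = fit_nuc(np.random.rand(5, 8, 8) * 4000, np.linspace(0.0, 1.0, 5))
        corrected = apply_nuc(np.random.rand(8, 8) * 4000, coeffs)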
  364. Rapid analysis of adulterations in Chinese lotus root powder (LRP) by near-infrared (NIR) spectroscopy coupled with chemometric class modeling techniques

    PubMed

    Xu, Lu; Shi, Peng-Tao; Ye, Zi-Hong; Yan, Si-Min; Yu, Xiao-Ping

    2013-12-01

    This paper develops a rapid analysis method for adulteration identification of a popular traditional Chinese food, lotus root powder (LRP), by near-infrared spectroscopy and chemometrics. 85 pure LRP samples were collected from 7 main lotus-producing areas of China to include most, if not all, of the significant variations likely to be encountered in unknown authentic materials. To evaluate the model specificity, 80 adulterated LRP samples, prepared by blending pure LRP with different levels of four cheaper and commonly used starches, were measured and predicted. For the multivariate quality models, two class modeling methods were used: the traditional soft independent modeling of class analogy (SIMCA) and a recently proposed partial least squares class model (PLSCM). Different data preprocessing techniques, including smoothing, taking derivatives, and standard normal variate (SNV) transformation, were used to improve the classification performance. The results indicate that smoothing, taking second-order derivatives, and SNV can improve the class models by enhancing the signal-to-noise ratio and reducing baseline and background shifts. The most accurate and stable models were obtained with SNV spectra for both SIMCA (sensitivity 0.909 and specificity 0.938) and PLSCM (sensitivity 0.909 and specificity 0.925). Moreover, both SIMCA and PLSCM could detect LRP samples mixed with 5% (w/w) or more of other cheaper starches, including cassava, sweet potato, potato and maize starches. Although it is difficult to perform an exhaustive collection of all pure LRP samples and possible adulterations, NIR spectrometry combined with class modeling techniques provides a reliable and effective method to detect most of the current LRP adulterations in the Chinese market. Copyright © 2013 Elsevier Ltd. All rights reserved.
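    The two preprocessing steps that worked best are essentially one-liners. A sketch of SNV and a Savitzky-Golay second derivative on an assumed matrix of spectra (rows = samples, columns = wavelengths):

        import numpy as np
        from scipy.signal import savgol_filter

        def snv(spectra):
            """Standard normal variate: scale each spectrum (row) to zero mean
            and unit variance, suppressing multiplicative scatter and
            baseline offsets before class modeling."""
            mu = spectra.mean(axis=1, keepdims=True)
            sd = spectra.std(axis=1, keepdims=True)
            return (spectra - mu) / sd

        def second_derivative(spectra, window=11, poly=2):
            """Smoothed second-order derivative along the wavelength axis."""
            return savgol_filter(spectra, window, poly, deriv=2, axis=1)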
  365. Safer trocar insertion for closed laparoscopic access: ex vivo assessment of an improved Veress needle

    PubMed

    Nevler, Avinoam; Har-Zahav, Gil; Rosin, Danny; Gutman, Mordechai

    2016-02-01

    Laparoscopic surgery is a widely practiced technique in the modern surgical toolbox. The Veress needle insertion technique, while faster and easier, is associated with higher rates of iatrogenic complications (injury to internal organs, major blood vessels, etc.), morbidity and even mortality, with a reported overall risk of 0.32% during surgical interventions. In order to increase the safety and ease of the closed insertion technique, we designed and tested an improved prototype of the Veress needle. The new Veress needle includes a distal expandable portion that allows elevation of the abdominal wall and safe insertion of the first trocar over it. The needle was assessed by measurement of ease of insertion, ease of trocar advancement, associated tissue damage, device integrity, and weight-bearing capacity in an ex vivo Gallus domesticus animal model; the prototype was tested over 20 times using different traction forces. The experiment was qualitatively repeated on an ex vivo porcine model. In the G. domesticus model, the improved needle supported forces of up to 5.75 kgf. No damage or mechanical malfunction was seen at any stage of the experiment. Needle penetration, ease of trocar insertion, system anchoring, and weight-bearing capacity were rated (1-5) by four raters (mean 4.9 ± 0.31), with high inter-rater agreement (free marginal κ = 0.75). The porcine experiment revealed similar ease of use, with neither complication nor damage to the abdominal wall. We believe that the new Veress system is easy to use, requires no additional training, and is non-inferior in its capabilities compared to the traditional Veress needle, with the advantage of improving the safety of the first trocar insertion phase of the operation.

  366. Robotic cholecystectomy and resident education: the UC Davis experience

    PubMed

    Nelson, Eric C.; Gottlieb, Andrea H.; Müller, Hans-Georg; Smith, William; Ali, Mohamed R.; Vidovszky, Tamas J.

    2014-06-01

    The popularity of robotic surgery highlights the need for strategies to integrate this technique into surgical education. We present 5-year data for robotic cholecystectomy (RC) as a model for training residents. Data were collected on all RCs over 66 months. The duration of docking the robot (S2) and of performing RC (S3), and surgical outcomes, were recorded. We used a linear mixed-effects model to investigate learning curves. Thirty-eight trainees performed 160 RCs, with most performing more than four. One case was aborted due to haemodynamic instability, and two were converted to open surgery due to adhesions; there were no technical complications. The duration of S2 (mean = 6.2 ± 3.6 min) decreased considerably (p = 0.027). Trainees also demonstrated a decrease in the duration of S3 (mean = 38.4 ± 15.4 min), indicating improvement in technique (p = 0.008). RC is an effective model for teaching residents: significant and reproducible improvement can be realized with low risk of adverse outcomes. Copyright © 2013 John Wiley & Sons, Ltd.

  367. ICCD: interactive continuous collision detection between deformable models using connectivity-based culling

    PubMed

    Tang, Min; Curtis, Sean; Yoon, Sung-Eui; Manocha, Dinesh

    2009-01-01

    We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh as part of self-collision tests. Second, we introduce the concept of "procedural representative triangles" to remove all redundant elementary tests between non-adjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of "orphan sets" to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can yield one order of magnitude performance improvement over prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.
  368. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs from security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
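    In its simplest form, fuzzing mutates valid inputs and watches the target for unexpected failures. The sketch below is a generic mutation fuzzer around a hypothetical `parse` target, rather than the model-based test generation the chapter advocates:

        import random

        def mutate(data: bytes, n_flips: int = 4) -> bytes:
            """Corrupt a few random bytes of a valid seed input."""
            buf = bytearray(data)
            for _ in range(n_flips):
                buf[random.randrange(len(buf))] = random.randrange(256)
            return bytes(buf)

        def fuzz(parse, seed: bytes, iterations: int = 10_000):
            """Feed mutated inputs to `parse` (hypothetical target) and report
            any input that triggers an unexpected exception."""
            for _ in range(iterations):
                candidate = mutate(seed)
                try:
                    parse(candidate)
                except ValueError:
                    pass                     # expected rejection of bad input
                except Exception as exc:     # crash: a potential security flaw
                    print(f"crash: {exc!r} on {candidate[:16]!r}...")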
  369. A quality improvement management model for renal care

    PubMed

    Vlchek, D. L.; Day, L. M.

    1991-04-01

    The purpose of this article is to explore the potential for applying the theory and tools of quality improvement (total quality management) in the renal care setting. We believe that coupling the statistical techniques used in the Deming method of quality improvement with modern approaches to outcome and process analysis will provide the renal care community with powerful tools, not only for improved quality (i.e., reduced morbidity and mortality), but also for technology evaluation and resource allocation.

  370. MCAT to XCAT: The Evolution of 4-D Computerized Phantoms for Imaging Research: Computer models that take account of body movements promise to provide evaluation and improvement of medical imaging devices and technology

    PubMed

    Segars, W. Paul; Tsui, Benjamin M. W.

    2009-12-01

    Recent work in the development of computerized phantoms has focused on the creation of ideal "hybrid" models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper summarizes our developments, dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model of the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible: like stylized models, they can easily be manipulated to model anatomical variations and patient motion. With the vast improvement in realism, the phantoms developed in our lab can be combined with accurate models of the imaging process (SPECT, PET, CT, magnetic resonance imaging, and ultrasound) to generate simulated imaging data close to that from actual human or animal subjects. As such, they can provide vital tools to generate predictive imaging data from many different subjects under various scanning parameters from which to quantitatively evaluate and improve imaging devices and techniques. From the MCAT to XCAT, we demonstrate how NURBS and SD surface modeling have resulted in a major evolutionary advance in the development of computerized phantoms for imaging research.

  371. Using the SIOP Model to Improve Middle School Science Instruction. CREATE Brief

    ERIC Educational Resources Information Center

    Himmel, Jennifer; Short, Deborah J.; Richards, Catherine; Echevarria, Jana

    2009-01-01

    This brief provides an overview of the SIOP Model and highlights how teachers can develop content and language objectives, emphasize key vocabulary, promote interaction, and incorporate effective review and assessment techniques within the context of middle school science. It provides research-based examples and strategies in order to illustrate…

  372. Health Program Implementation through PERT: Administrative and Educational Uses

    ERIC Educational Resources Information Center

    Arnold, Mary F.; And Others

    The main advantage of the Program Evaluation and Review Technique (PERT) is the provision of a graphic model of activities with estimates of the time, resources, personnel, and facilities necessary to accomplish a sequence of interdependent activities, as in program implementation. A PERT model can also improve communication between persons and…
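    The computational core of a PERT chart is a forward pass over the activity network: an activity's earliest finish is its duration plus the latest earliest finish among its predecessors, and the overall maximum is the critical-path length. A sketch with a hypothetical set of program activities:

        def critical_path(durations, predecessors):
            """durations: {activity: time}; predecessors: {activity: [activities]}.
            Returns the critical-path length and earliest-finish times
            via a memoized forward pass."""
            finish = {}
            def ef(a):
                if a not in finish:
                    finish[a] = durations[a] + max(
                        (ef(p) for p in predecessors.get(a, [])), default=0)
                return finish[a]
            return max(ef(a) for a in durations), finish

        # Hypothetical health-program tasks
        durations = {"plan": 2, "train": 3, "procure": 4, "deliver": 1}
        preds = {"train": ["plan"], "procure": ["plan"],
                 "deliver": ["train", "procure"]}
        total, finish = critical_path(durations, preds)
        print(total, finish)   # 7; 'deliver' finishes last, via 'procure'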
  373. 77 FR 485 - Wind Plant Performance-Public Meeting on Modeling and Testing Needs for Complex Air Flow...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-05

    ... modeling needs and experimental validation techniques for complex flow phenomena in and around offshore... experimental validation. Ultimately, research in this area may lead to significant improvements in wind plant... meeting will consist of an initial plenary session in which invited speakers will survey available...

  374. Technique for ranking potential predictor layers for use in remote sensing analysis

    Treesearch

    Andrew Lister; Mike Hoppus; Rachel Riemann

    2004-01-01

    Spatial modeling using GIS-based predictor layers often requires that extraneous predictors be culled before conducting the analysis. In some cases, using extraneous predictor layers might improve model accuracy, but at the expense of increased complexity and reduced interpretability. In other cases, using extraneous layers can dilute the relationship between predictors and target...
  375. Electromagnetic Test-Facility characterization: an identification approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zicker, J. E.; Candy, J. V.

    The response of an object subjected to high-energy transient electromagnetic (EM) fields, sometimes called electromagnetic pulses (EMP), is an important issue in the survivability of electronic systems (e.g., aircraft), especially when the field has been generated by a high-altitude nuclear burst. The characterization of transient response information is a matter of national concern. In this report we discuss techniques to: (1) improve signal processing at a test facility; and (2) parameterize a particular object response. First, we discuss the application of identification-based signal processing techniques to improve signal levels at the Lawrence Livermore National Laboratory (LLNL) EM Transient Test Facility. We identify models of the test equipment and then use these models to deconvolve the input/output sequences for the object under test. A parametric model of the object is identified from these data; this model can be used to extrapolate the response to threat-level EMP. Also discussed are the development of a facility simulator (EMSIM), useful for experimental design and calibration, and a deconvolution algorithm (DECONV), useful for removing probe effects from the measured data.
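    Removing a measured probe or instrument response from recorded transients is, in the simplest case, a regularized division in the frequency domain. The sketch below is a generic Tikhonov-damped deconvolution, a stand-in rather than the report's DECONV algorithm:

        import numpy as np

        def deconvolve(measured, probe_response, eps=1e-3):
            """Estimate the true transient given the probe's impulse response,
            damping ill-conditioned frequencies where |H| is small."""
            n = len(measured)
            M = np.fft.rfft(measured, n)
            H = np.fft.rfft(probe_response, n)
            X = M * np.conj(H) / (np.abs(H) ** 2 + eps)
            return np.fft.irfft(X, n)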
  376. A spatial-temporal system for dynamic cadastral management

    PubMed

    Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie

    2006-03-01

    A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded model of Base State with Amendments (BSA), is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in standard relational database management systems (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB, and the procedures-tracing DB. The efficiency of database operation is improved by the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies where financial resources are limited.

  377. Model-driven approach to data collection and reporting for quality improvement

    PubMed Central

    Curcin, Vasa; Woodcock, Thomas; Poots, Alan J.; Majeed, Azeem; Bell, Derek

    2014-01-01

    Continuous data collection and analysis have been shown to be essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding, and thus require inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate the required electronic data collection instruments and reporting tools. To that end, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to a project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC), for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, the Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed, and the benefits and limitations of the approach are discussed. PMID:24874182
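    SPC reports of the kind described typically start from a Shewhart individuals (XmR) chart: a centerline at the mean and control limits three estimated standard deviations away, with sigma estimated from the average moving range (the 2.66 factor below is the standard XmR constant). A minimal sketch:

        import numpy as np

        def individuals_chart(x):
            """Shewhart individuals (XmR) chart limits for a run of measurements.
            Returns centerline, lower/upper control limits, and the indices of
            points signalling special-cause variation."""
            x = np.asarray(x, dtype=float)
            mr = np.abs(np.diff(x)).mean()            # average moving range
            center = x.mean()
            ucl, lcl = center + 2.66 * mr, center - 2.66 * mr
            signals = np.where((x > ucl) | (x < lcl))[0]
            return center, lcl, ucl, signals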
  378. Topsoil moisture mapping using geostatistical techniques under different Mediterranean climatic conditions

    PubMed

    Martínez-Murillo, J. F.; Hueso-González, P.; Ruiz-Sinoga, J. D.

    2017-10-01

    Soil mapping has been an important factor in the development of Soil Science and in answering many different environmental questions. Geostatistical techniques, through kriging and co-kriging, have made it possible to improve the understanding of eco-geomorphologic variables such as soil moisture. This study is focused on mapping topsoil moisture using geostatistical techniques under different Mediterranean climatic conditions (humid, dry, and semiarid) in three small watersheds, considering topography and soil properties as key factors. A Digital Elevation Model (DEM) with a resolution of 1 × 1 m was derived from a topographical survey, and soils were sampled to analyze the properties controlling topsoil moisture, which was measured over 4 years. Afterwards, topographic attributes were derived from the DEM, the soil properties were analyzed in the laboratory, and topsoil moisture was modeled for the entire watersheds applying three geostatistical techniques: i) ordinary kriging; ii) co-kriging with topographic attributes as co-variates; and iii) co-kriging with topographic attributes and gravel content as co-variates. The results indicated that topsoil moisture was mapped more accurately in the dry and semiarid watersheds when the co-kriging procedure was performed. The study is a contribution to improving the efficiency and accuracy of studies of the Mediterranean eco-geomorphologic system and soil hydrology in field conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
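    At its core, ordinary kriging solves one small linear system per prediction point, built from a variogram fitted to the samples. The sketch below assumes a spherical variogram with made-up parameters rather than one fitted to the watershed data:

        import numpy as np

        def spherical(h, nugget=0.0, sill=1.0, rng=300.0):
            """Spherical variogram model (illustrative parameters)."""
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
            return np.where(h < rng, g, sill)

        def ordinary_kriging(xy, z, xy0):
            """Predict z at point xy0 from samples (xy, z), with the Lagrange
            row enforcing the unbiasedness constraint (weights sum to 1)."""
            n = len(z)
            d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = spherical(d)
            A[-1, -1] = 0.0
            b = np.ones(n + 1)
            b[:n] = spherical(np.linalg.norm(xy - xy0, axis=1))
            w = np.linalg.solve(A, b)
            return w[:n] @ z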
  379. Additive Manufacturing Techniques for the Reconstruction of 3D Fetal Faces

    PubMed Central

    Citro, Daniela; Padula, Francesco; Motyl, Barbara; Marcolin, Federica; Calì, Michele

    2017-01-01

    This paper deals with additive manufacturing techniques for the creation of 3D fetal face models starting from routine 3D ultrasound data. In particular, two distinct themes are addressed. First, a method for processing and building 3D models based on the use of medical image processing techniques is proposed. Second, the preliminary results of a questionnaire distributed to future parents consider the use of these reconstructions from both an emotional and an affective point of view. In particular, the study focuses on the enhancement of the perception of maternity or paternity and the improvement in the relationship between parents and physicians in cases of fetal malformations, in particular facial or cleft lip diseases. PMID:29410600

  380. Emerging models for mobilizing family support for chronic disease management: a structured review

    PubMed

    Rosland, Ann-Marie; Piette, John D.

    2010-03-01

    We identify recent models for programmes aiming to increase effective family support for chronic illness management and self-care among adult patients without significant physical or cognitive disabilities, and then summarize evidence regarding the efficacy of each model identified. A structured review was conducted of studies published in medical and psychology databases from 1990 to the present, along with reference review, general Web searches, and conversations with family intervention experts; the review was limited to studies on conditions that require ongoing self-management, such as diabetes, chronic heart disease and rheumatologic disease. Programmes with three separate foci were identified: (1) programmes that guide family members in setting goals for supporting patient self-care behaviours have led to improved implementation of family support roles, but have mixed success improving patient outcomes; (2) programmes that train family in supportive communication techniques, such as prompting patient coping techniques or use of autonomy-supportive statements, have successfully improved patient symptom management and health behaviours; and (3) programmes that give families tools and infrastructure to assist in monitoring clinical symptoms and medications are being conducted, with no evidence to date on their impact on patient outcomes. The next generation of programmes to improve family support for chronic disease management incorporates a variety of strategies. Future research can define optimal clinical situations for family support programmes, the most effective combinations of support strategies, and how best to integrate family support programmes into comprehensive models of chronic disease care.
  381. Towards improved hydrologic predictions using data assimilation techniques for water resource management at the continental scale

    NASA Astrophysics Data System (ADS)

    Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan

    2017-04-01

    More accurate and reliable hydrologic simulations are important for many applications, such as water resource management, projections of future water availability, and predictions of extreme events. However, simulation of the spatial and temporal variations in critical water budget components such as precipitation, snow, evaporation and runoff is highly uncertain, due to errors in e.g. model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques to improve the predictability of continental-scale water fluxes, using in-situ measurements along with remotely sensed information to improve hydrologic predictions for water resource systems. The Community Land Model, version 3.5 (CLM), integrated with the Parallel Data Assimilation Framework (PDAF), was implemented at a spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with the high-resolution reanalysis COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ) and with ERA-Interim datasets for the period 1994-2014. A series of data assimilation experiments was conducted to assess the efficiency of assimilating various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements, into CLM-PDAF at regional to continental scales. This setup not only allows uncertainties to be quantified, but also improves streamflow predictions by simultaneously updating model states and parameters with observational information. The results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.
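    The joint state-and-parameter update in such a system is typically an ensemble Kalman filter analysis step. A minimal stochastic-EnKF sketch with a linear observation operator and made-up dimensions (the actual CLM-PDAF machinery adds localization, inflation, and far more):

        import numpy as np

        def enkf_update(X, H, y, R, rng=np.random.default_rng(0)):
            """Stochastic EnKF analysis step.
            X: (n_state, n_ens) ensemble of augmented state+parameter vectors
            H: (n_obs, n_state) observation operator; y: (n_obs,) observations
            R: (n_obs, n_obs) observation-error covariance."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                      # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            # Perturbed observations, one realization per ensemble member
            Y = y[:, None] + np.linalg.cholesky(R) @ \
                rng.standard_normal((len(y), n_ens))
            return X + K @ (Y - H @ X)                     # updated ensemble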
  382. Emerging Models for Mobilizing Family Support for Chronic Disease Management: A Structured Review

    PubMed Central

    Rosland, Ann-Marie; Piette, John D.

    2015-01-01

    Objectives: We identify recent models for programs aiming to increase effective family support for chronic illness management and self-care among adult patients without significant physical or cognitive disabilities. We then summarize evidence regarding the efficacy for each model identified. Methods: Structured review of studies published in medical and psychology databases from 1990 to the present, reference review, general Web searches, and conversations with family intervention experts. Review was limited to studies on conditions that require ongoing self-management, such as diabetes, chronic heart disease, and rheumatologic disease. Results: Programs with three separate foci were identified: 1) Programs that guide family members in setting goals for supporting patient self-care behaviors have led to improved implementation of family support roles, but have mixed success improving patient outcomes. 2) Programs that train family in supportive communication techniques, such as prompting patient coping techniques or use of autonomy supportive statements, have successfully improved patient symptom management and health behaviors. 3) Programs that give families tools and infrastructure to assist in monitoring clinical symptoms and medications are being conducted, with no evidence to date on their impact on patient outcomes. Discussion: The next generation of programs to improve family support for chronic disease management incorporate a variety of strategies. Future research can define optimal clinical situations for family support programs, the most effective combinations of support strategies, and how best to integrate family support programs into comprehensive models of chronic disease care. PMID:20308347

  383. Automatic control of the NMB level in general anaesthesia with a switching total system mass control strategy

    PubMed

    Teixeira, Miguel; Mendonça, Teresa; Rocha, Paula; Rabiço, Rui

    2014-12-01

    This paper presents a model-based switching control strategy to drive the neuromuscular blockade (NMB) level of patients undergoing general anesthesia to a predefined reference. A single-input single-output Wiener system with only two parameters is used to model the effect of two different muscle relaxants, atracurium and rocuronium, and a switching controller is designed based on a bank of total-system-mass control laws. Each such law is tuned for an individual model from a bank chosen to represent the behavior of the whole population, and the control law applied at each instant corresponds to the model whose NMB response is closest to the patient's response. Moreover, a scheme to improve the reference tracking quality based on the analysis of the patient's response, as well as a comparison between the switching strategy and the Extended Kalman Filter (EKF) technique, are presented. The results are illustrated by means of several simulations, in which switching is shown to provide good results, both in theory and in practice, with the desired reference tracking. The reference tracking improvement scheme produces better tracking, and the switching approach outperformed the EKF. Based on these results, the switching control strategy with a bank of total-system-mass control laws proved robust enough to be used as an automatic control system for the NMB level.
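    The switching logic itself is compact: simulate every model in the bank against the doses actually given, score each by how closely it reproduces the measured NMB response, and apply the control law tuned for the winner. A schematic sketch; the `simulate` method and control-law signature are hypothetical stand-ins for the paper's bank:

        def pick_model(models, doses, measured):
            """Score each candidate Wiener model by the squared error between
            its simulated NMB response to the administered doses and the
            measured response; return the index of the closest model."""
            errors = [sum((yhat - y) ** 2
                          for yhat, y in zip(m.simulate(doses), measured))
                      for m in models]
            return min(range(len(models)), key=errors.__getitem__)

        def control_step(models, laws, doses, measured, reference):
            """Apply the control law tuned for the closest model (next dose)."""
            i = pick_model(models, doses, measured)
            return laws[i](measured[-1], reference)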
  384. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He [Comput. Methods Appl. Mech. Eng. 178, 257 (1999)] is useful for obtaining approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases it has been found that this method results in slowly convergent series. To overcome this shortcoming, we present a new, reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen diffusion in a spherical cell, and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing semi-numerical or numerical techniques. Numerical results reveal that only two or three iterations lead to highly accurate solutions, and this improved technique is a powerful tool for solving nonlinear singular boundary value problems (SBVPs).

  385. Various diffusion magnetic resonance imaging techniques for pancreatic cancer

    PubMed Central

    Tang, Meng-Yue; Zhang, Xiao-Ming; Chen, Tian-Wu; Huang, Xiao-Hua

    2015-01-01

    Pancreatic cancer is one of the most common malignant tumors and remains a treatment-refractory cancer with a poor prognosis. Currently, the diagnosis of pancreatic neoplasms depends mainly on imaging, ideally with methods conducive to detecting small lesions. Compared to other techniques, magnetic resonance imaging (MRI) has irreplaceable advantages and can provide valuable information unattainable with other noninvasive or minimally invasive imaging techniques. Advances in MR hardware and pulse sequence design have particularly improved the quality and robustness of MRI of the pancreas. Diffusion MR imaging serves as one of the common functional MRI techniques and is the only technique that can be used to reflect the diffusion movement of water molecules in vivo. It is generally known that diffusion properties depend on the characterization of the intrinsic features of tissue microdynamics and microstructure. With the improvement of diffusion models, diffusion MR imaging techniques are increasingly varied, from the simplest and most commonly used technique to the more complex. In this review, the various diffusion MRI techniques for pancreatic cancer are discussed, including conventional diffusion-weighted imaging (DWI), multi-b DWI based on intra-voxel incoherent motion theory, diffusion tensor imaging, and diffusion kurtosis imaging. The principles, main parameters, advantages and limitations of these techniques, as well as future directions for pancreatic diffusion imaging, are also discussed. PMID:26753059
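    For the simplest of these models, conventional DWI with a mono-exponential decay S(b) = S0 exp(-b ADC), the apparent diffusion coefficient follows from a log-linear least-squares fit across the acquired b-values. A sketch with made-up signal values:

        import numpy as np

        def fit_adc(b_values, signals):
            """Voxel-wise ADC from a mono-exponential fit:
            ln S = ln S0 - b * ADC.
            signals: (n_b, ...) DWI magnitudes at each b-value (s/mm^2)."""
            b = np.asarray(b_values, dtype=float)
            logS = np.log(np.asarray(signals, dtype=float))
            # Least-squares slope of ln S against b, computed per voxel
            slope = np.polyfit(b, logS.reshape(len(b), -1), 1)[0]
            return -slope.reshape(np.asarray(signals).shape[1:])

        adc = fit_adc([0, 200, 500, 800],
                      np.array([[1.00], [0.72], [0.45], [0.27]]))
        print(adc)   # ~1.6e-3 mm^2/s for this toy voxel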
  386. Gravitational Wave Detection of Compact Binaries Through Multivariate Analysis

    NASA Astrophysics Data System (ADS)

    Atallah, Dany Victor; Dorrington, Iain; Sutton, Patrick

    2017-01-01

    The first detection of gravitational waves (GW), GW150914, as produced by a binary black hole merger, has ushered in the era of GW astronomy. The detection technique used to find GW150914 considered only a fraction of the information available describing the candidate event: mainly the detector signal-to-noise ratios and chi-squared values. In hopes of greatly increasing detection rates, we want to take advantage of all the information available about candidate events. We employ a technique called multivariate analysis (MVA) to improve LIGO sensitivity to GW signals. MVA techniques are efficient ways to scan high-dimensional data spaces for signal/noise classification. Our goal is to use MVA to classify compact-object binary coalescence (CBC) events composed of any combination of black holes and neutron stars. CBC waveforms are modeled through numerical relativity. Templates of the modeled waveforms are used to search for CBCs and quantify candidate events. Different MVA pipelines are under investigation to look for CBC signals and un-modelled signals, with promising results. One such MVA pipeline used for the un-modelled search can theoretically analyze far more data than the MVA pipelines currently explored for CBCs, potentially making a more powerful classifier. In principle, this extra information could improve the sensitivity to GW signals. We will present the results from our efforts to adapt an MVA pipeline used in the un-modelled search to classify candidate events from the CBC search.
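    As an illustration of the MVA idea, the sketch below trains a random forest to separate signal from noise candidates using two summary statistics per event. The features and their distributions are synthetic placeholders, not outputs of any real search pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Synthetic candidate events: columns are (SNR, chi^2)-like statistics.
        rng = np.random.default_rng(42)
        n = 2000
        noise = np.column_stack([rng.normal(5, 2, n), rng.normal(2.0, 0.8, n)])
        signal = np.column_stack([rng.normal(9, 2, n), rng.normal(1.0, 0.4, n)])
        X = np.vstack([noise, signal])
        y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = noise, 1 = signal

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

    The appeal of MVA over fixed thresholds is visible even in this toy: the classifier exploits the joint distribution of the statistics rather than cutting on each one separately.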
  387. The Animism Controversy Revisited: A Probability Analysis

    ERIC Educational Resources Information Center

    Smeets, Paul M.

    1973-01-01

    Considers methodological issues surrounding the Piaget-Huang controversy. A probability model, based on the difference between the expected and observed animistic and deanimistic responses, is applied as an improved technique for the assessment of animism. (DP)

  388. Characterization of Orbital Debris via Hyper-Velocity Ground-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather

    2016-01-01

    The purpose of the DebriSat project is to replicate a hyper-velocity fragmentation event using modern-day spacecraft materials and construction techniques to improve the existing DoD and NASA breakup models.

  389. A New Computational Framework for Atmospheric and Surface Remote Sensing

    NASA Technical Reports Server (NTRS)

    Timucin, Dogan A.

    2004-01-01

    A Bayesian data-analysis framework is described for atmospheric and surface retrievals from remotely sensed hyper-spectral data. Some computational techniques are highlighted for improved accuracy in the forward physics model.

  390. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
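    The comparison is easy to sketch. The snippet below, a toy stand-in for the paper's Monte Carlo design, builds one small sample with two nearly collinear explanatory variables and fits OLS against single-component PCR and PLS; all numbers are synthetic.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.pipeline import make_pipeline

        # Small sample with two highly correlated regressors (synthetic).
        rng = np.random.default_rng(0)
        n = 20
        x1 = rng.normal(size=n)
        x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # near-collinear with x1
        X = np.column_stack([x1, x2])
        y = 1.0 * x1 + 1.0 * x2 + 0.5 * rng.normal(size=n)

        ols = LinearRegression().fit(X, y)
        pcr = make_pipeline(PCA(n_components=1), LinearRegression()).fit(X, y)
        pls = PLSRegression(n_components=1).fit(X, y)
        print("OLS coefficients:", ols.coef_)        # unstable under collinearity
        print("PCR R^2:", pcr.score(X, y))
        print("PLS R^2:", pls.score(X, y))

    Rerunning with different seeds shows the instability the abstract describes: the OLS coefficients swing wildly between runs while the biased estimators' predictions barely change.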
  391. Supersonic reacting internal flowfields

    NASA Astrophysics Data System (ADS)

    Drummond, J. P.

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  392. Supersonic reacting internal flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1989-01-01

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  393. Spreadsheet WATERSHED modeling for nonpoint-source pollution management in a Wisconsin basin

    USGS Publications Warehouse

    Walker, J.F.; Pickard, S.A.; Sonzogni, W.C.

    1989-01-01

    Although several sophisticated nonpoint pollution models exist, few are available that are easy to use, cover a variety of conditions, and integrate a wide range of information to allow managers and planners to assess different control strategies. Here, a straightforward pollutant input accounting approach is presented in the form of an existing model (WATERSHED) that has been adapted to run on modern electronic spreadsheets. As an application, WATERSHED is used to assess options to improve the quality of highly eutrophic Delavan Lake in Wisconsin. WATERSHED is flexible in that several techniques, such as the Universal Soil Loss Equation or unit-area loadings, can be used to estimate nonpoint-source inputs. Once the model parameters are determined (and calibrated, if possible), the spreadsheet features can be used to conduct a sensitivity analysis of management options. In the case of Delavan Lake, it was concluded that, although some nonpoint controls were cost-effective, the overall reduction in phosphorus would be insufficient to measurably improve water quality.
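    The spreadsheet accounting described in the abstract amounts to multiplying land-use areas by export coefficients, summing, and then perturbing the coefficients to test management options. A minimal sketch with made-up land uses and phosphorus export coefficients (not Delavan Lake data):

        # Unit-area loading accounting, spreadsheet-style (illustrative numbers).
        loadings_kg_per_ha = {"row crop": 1.0, "pasture": 0.3, "urban": 1.2}
        areas_ha = {"row crop": 800.0, "pasture": 500.0, "urban": 150.0}

        total_p = sum(loadings_kg_per_ha[lu] * areas_ha[lu] for lu in areas_ha)
        print(f"annual phosphorus load: {total_p:.0f} kg")

        # A management option is a simple perturbation of one coefficient,
        # e.g. hypothetical buffer strips reducing row-crop export by 40%.
        with_controls = {**loadings_kg_per_ha, "row crop": 0.6}
        reduced = sum(with_controls[lu] * areas_ha[lu] for lu in areas_ha)
        print(f"with row-crop controls: {reduced:.0f} kg")

    The sensitivity analysis the abstract mentions is just a loop over such perturbations, which is exactly what a spreadsheet makes convenient.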
  394. Finite element model correlation of a composite UAV wing using modal frequencies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.

    2007-04-01

    The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental test piece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update-variable combinations in the current configuration, which lead the correlation away from physically probable values.
  395. Patient-specific puzzle implant preformed with 3D-printed rapid prototype model for combined orbital floor and medial wall fracture

    PubMed

    Kim, Young Chul; Min, Kyung Hyun; Choi, Jong Woo; Koh, Kyung S; Oh, Tae Suk; Jeong, Woo Shik

    2018-04-01

    The management of combined orbital floor and medial wall fractures involving the inferomedial strut is challenging due to the absence of a stable cornerstone. In this article, we propose surgical strategies using a customized 3D puzzle implant preformed with a rapid prototype (RP) skull model. A retrospective review was done of 28 patients diagnosed with combined orbital floor and medial wall fracture. Using preoperative CT scans, original and mirror-imaged RP skull models for each patient were prepared and sterilized. In all patients, porous polyethylene-coated titanium mesh was premolded onto the RP skull model in two ways: a customized 3D jigsaw puzzle technique was used in 15 patients with a comminuted inferomedial strut, whereas an individual 3D implant technique was used in each fracture for 13 patients with an intact inferomedial strut. Outcomes including enophthalmos, visual acuity, and presence of diplopia were assessed, and orbital volume was measured using OsiriX software preoperatively and postoperatively. Satisfactory results were achieved in both groups in terms of clinical improvements. Of 10 patients with preoperative diplopia, 9 improved within 6 months, except one with a persistent symptom who had undergone extraocular muscle rupture. The 18 patients who had moderate to severe enophthalmos preoperatively improved, and one remained with a mild degree. The orbital volume ratio, defined as the volumetric ratio between the affected and control orbit, decreased from 127.6% to 99.79% (p < 0.05) in the comminuted group, and that in the intact group decreased from 117.03% to 101.3% (p < 0.05). Our surgical strategies using the jigsaw puzzle and individual reconstruction techniques provide accurate restoration of combined orbital floor and medial wall fractures. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  396. Optical simulations of organic light-emitting diodes through a combination of rigorous electromagnetic solvers and Monte Carlo ray-tracing methods

    NASA Astrophysics Data System (ADS)

    Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Rob; Gregory, G. Groot

    2014-09-01

    Over the last two decades there has been extensive research done to improve the design of organic light-emitting diodes (OLEDs) so as to enhance light extraction efficiency, improve beam shaping, and allow color tuning through techniques such as the use of patterned substrates, photonic crystal (PC) gratings, back reflectors, surface texture, and phosphor down-conversion. Computational simulation has been an important tool for examining these increasingly complex designs. It has provided insights for improving OLED performance as a result of its ability to explore limitations, predict solutions, and demonstrate theoretical results. Depending upon the focus of the design and scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics based techniques, such as finite-difference time-domain (FDTD) and rigorous coupled wave analysis (RCWA), or through ray optics based techniques such as Monte Carlo ray-tracing. The former are typically used for modeling nanostructures on the OLED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor down-conversion. This paper presents the use of a mixed-level simulation approach which unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave based tools to characterize the nanostructured die and generate both a bidirectional scattering distribution function (BSDF) and a far-field angular intensity distribution. These characteristics are then incorporated into the ray-tracing simulator to obtain the overall performance. Such a mixed-level approach allows for comprehensive modeling of the optical characteristics of OLEDs and can potentially lead to more accurate performance predictions than those from individual modeling tools alone.
  397. A new cooperative MIMO scheme based on SM for energy-efficiency improvement in wireless sensor network

    PubMed

    Peng, Yuyang; Choi, Jaeho

    2014-01-01

    Improving the energy efficiency in wireless sensor networks (WSN) has attracted considerable attention. The multiple-input multiple-output (MIMO) technique has been proved to be a good candidate for improving energy efficiency, but it may not be feasible in WSN due to the size limitation of the sensor node. As a solution, the cooperative multiple-input multiple-output (CMIMO) technique overcomes this constraint and shows dramatically good performance. In this paper, a new CMIMO scheme based on the spatial modulation (SM) technique, named CMIMO-SM, is proposed for energy-efficiency improvement. We first establish the system model of CMIMO-SM. Based on this model, the transmission approach is introduced graphically. In order to evaluate the performance of the proposed scheme, a detailed analysis of its energy consumption per bit compared with conventional CMIMO is presented. Later, under the guidance of this new scheme, we extend CMIMO-SM to a multihop clustered WSN to further improve energy efficiency by finding an optimal hop length, with the traditional equidistant-hop scheme used for comparison. Results from the simulations and numerical experiments indicate that significant savings in total energy consumption can be achieved by the proposed scheme. Combining the proposed scheme with monitoring sensor nodes can provide good performance in arbitrarily deployed WSNs such as forest-fire detection systems.
  398. The Taguchi Method Application to Improve the Quality of a Sustainable Process

    NASA Astrophysics Data System (ADS)

    Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.

    2018-06-01

    The Taguchi method has long been used to improve the quality of the processes and products it is applied to. This research addresses an unusual situation, namely the modeling of technical parameters in a process intended to be sustainable, improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of the interaction between the principles of agricultural sustainability and the application of the Taguchi method. The experimental method used in this practical study combines engineering techniques with experimental statistical modeling to achieve rapid improvement of quality costs, in effect seeking optimization of existing processes and of the main technical parameters. The paper is a technical study that promotes an experiment using the Taguchi method, considered an effective method since it allows rapid achievement of 70 to 90% of the desired optimization of the technical parameters. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered the most influential. Applying the Taguchi method allowed the simultaneous study, in the same experiment, of the influence factors considered most important in different combinations, and, at the same time, determination of each factor's contribution.
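    The core computation in a Taguchi study is small: a signal-to-noise ratio per run of an orthogonal array, then a main effect per factor. A sketch follows, assuming a larger-is-better characteristic and a made-up L4(2^3) array with two replicates per run; none of these response values come from the paper.

        import numpy as np

        # L4 orthogonal array: 3 two-level factors in 4 runs (levels 1 and 2).
        L4 = np.array([[1, 1, 1],
                       [1, 2, 2],
                       [2, 1, 2],
                       [2, 2, 1]])
        y = np.array([[31.0, 29.5], [42.0, 40.5],   # replicated responses
                      [38.0, 39.5], [45.0, 44.0]])  # (illustrative numbers)

        # Larger-is-better S/N ratio: -10 log10( mean(1 / y^2) ) per run.
        sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

        # Main effect of each factor: mean S/N at level 2 minus level 1.
        for j in range(3):
            effect = sn[L4[:, j] == 2].mean() - sn[L4[:, j] == 1].mean()
            print(f"factor {j + 1}: effect on S/N = {effect:+.2f} dB")

    Ranking the factors by the magnitude of these effects is what identifies the "most influential" parameters the abstract refers to.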
  399. Changing Landscape for Peritoneal Dialysis: Optimizing Utilization

    PubMed

    Schreiber, Martin J

    2017-03-01

    The future growth of peritoneal dialysis (PD) will be directly linked to the shift in US healthcare to a value-based payment model, due to PD's lower yearly cost, early survival advantage over in-center hemodialysis, and improved quality of life for patients treating their kidney disease in the home. Under this model, nephrology practices will need an increased focus on managing the transition from chronic kidney disease to end-stage renal disease (ESRD), providing patient education with the aim of accomplishing modality selection and access placement ahead of dialysis initiation. Physicians must expand their knowledge base in home therapies and work toward increased technique survival through implementation of specific practice initiatives that highlight PD catheter placement success, preservation of residual renal function, consideration of incremental PD, and competence in urgent-start PD. Avoidance of both early and late PD technique failures is also critical to PD program growth. Large dialysis organizations must continue to measure and improve quality metrics for PD, expand their focus beyond the sole provision of PD to holistic patient care, and initiate programs to reduce PD hospitalization rates and encourage physicians to consider the benefits of PD as an initial modality for appropriate patients. New and innovative strategies are needed to address the main reasons for PD technique failure, improve the connectivity of the patient in the home, leverage home biometric data to improve overall outcomes, and develop PD cycler devices that lower patient treatment burden and reduce both treatment fatigue and treatment-dependent complications. © 2017 Wiley Periodicals, Inc.

  400. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two-dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one-dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two-dimensional fan model, this program formulated a high-frequency F-100(3) engine simulation using row-by-row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.
  401. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-07-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
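    A minimal linear-SFA sketch of the driving-force extraction follows, assuming a synthetic two-channel signal in which one channel hides a slow driver: whiten the data, then take the projection whose time derivative has the smallest variance. This is only the linear core of SFA, not the authors' full predictive scheme.

        import numpy as np

        # Synthetic signal: channel 1 hides a slow driver, channel 2 is fast.
        rng = np.random.default_rng(3)
        t = np.linspace(0, 100, 5000)
        drive = np.sin(2 * np.pi * t / 100)           # slow hidden driving force
        x = np.column_stack([
            drive + 0.1 * rng.standard_normal(t.size),
            np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)])

        x -= x.mean(axis=0)                           # center
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        z = (x @ vt.T) / s * np.sqrt(len(x))          # whiten: cov(z) = I
        dz = np.diff(z, axis=0)                       # discrete time derivative
        w = np.linalg.svd(dz, full_matrices=False)[2][-1]  # slowest direction
        slow = z @ w                                  # extracted slow feature
        print("corr with true driver:",
              round(abs(np.corrcoef(slow, drive)[0, 1]), 3))

    The extracted slow feature is what would then enter the predictive model as an additional state variable.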
  402. Efficient Translation of LTL Formulae into Büchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL) and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL-to-Büchi-automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit-state model checker under development at the NASA Ames Research Center.

  403. Modelling low velocity impact induced damage in composite laminates

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Soutis, Constantinos

    2017-12-01

    The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential, and recent effort by the composites research community is reviewed in this work. The FE-predicted damage initiation and propagation can be validated by non-destructive techniques (NDT), which gives confidence to the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve performance of components and structures used in aircraft construction.

  404. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
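    The structure of the joint objective is easy to sketch: normalize the seismic and gravity misfits so neither dominates (the role the Pareto analysis plays in the paper) and hand the weighted sum to a simulated-annealing optimizer. The forward models and data below are toys, and scipy's dual_annealing stands in for the authors' own annealing code.

        import numpy as np
        from scipy.optimize import dual_annealing

        # Toy "truth": one layer velocity (km/s) and density (kg/m^3), assumed.
        true_v, true_rho = 2.5, 2700.0
        t_obs = 10.0 / true_v          # toy traveltime datum over a 10 km path
        g_obs = 1e-6 * true_rho        # toy gravity datum, linear in density

        def misfit(p):
            v, rho = p
            seis = (10.0 / v - t_obs) ** 2
            grav = (1e-6 * rho - g_obs) ** 2
            # Normalize each term by its datum so the two misfits are balanced.
            return seis / t_obs**2 + grav / g_obs**2

        res = dual_annealing(misfit, bounds=[(1.0, 6.0), (2000.0, 3300.0)], seed=1)
        print("recovered v, rho:", res.x)

    The real problem optimizes thousands of grid parameters against eikonal traveltimes and a gravity convolution, but the balancing of normalized misfit terms is the same idea.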
  405. Mach-Zehnder modulator modulated radio-over-fiber transmission system using dual wavelength linearization

    NASA Astrophysics Data System (ADS)

    Zhu, Ran; Hui, Ming; Shen, Dongya; Zhang, Xiupu

    2017-02-01

    In this paper, the dual wavelength linearization (DWL) technique is studied to suppress odd and even order nonlinearities simultaneously in a Mach-Zehnder modulator (MZM) modulated radio-over-fiber (RoF) transmission system. A theoretical model is given to analyze the DWL employed for the MZM. In a single-tone test, simultaneous suppression of the second order harmonic distortion (HD2) and third order harmonic distortion (HD3) is experimentally verified at different bias voltages of the MZM. The measured spurious-free dynamic ranges (SFDRs) with respect to HD2 and HD3 are improved simultaneously compared to using a single laser. The output P1dB is also improved by the DWL technique. Moreover, a WiFi signal is transmitted in the RoF system to test the linearization for broadband signals. The result shows that more than 1 dB improvement of the error vector magnitude (EVM) is obtained by the DWL technique.

  406. Improving Marine Ecosystem Models with Biochemical Tracers

    NASA Astrophysics Data System (ADS)

    Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.

    2018-01-01

    Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.
  407. Rapid Prototyping 3D Model in Treatment of Pediatric Hip Dysplasia: A Case Report

    PubMed Central

    Holt, Andrew M.; Starosolski, Zbigniew; Kan, J. Herman

    2017-01-01

    Background: Rapid prototyping is an emerging technology that integrates common medical imaging with specialized production mechanisms to create detailed anatomic replicas. 3D-printed models of musculoskeletal anatomy have already proven useful in orthopedics and their applications continue to expand. Case Description: We present the case of a 10-year-old female with Down syndrome and left acetabular dysplasia and chronic hip instability who underwent periacetabular osteotomy. A rapid prototyping 3D model was created to better understand the anatomy, counsel the family about the problem and the surgical procedure, as well as guide surgical technique. The intricate detail and size match of the model with the patient's anatomy offered unparalleled, hands-on experience with the patient's anatomy pre-operatively and improved surgical precision. Conclusions: Our experience with rapid prototyping confirmed its ability to enhance orthopedic care by improving the surgeon's ability to understand complex anatomy. Additionally, we report a new application utilizing intraoperative fluoroscopic comparison of the model and patient to ensure surgical precision and minimize the risk of complications. This technique could be used in other challenging cases. The increasing availability of rapid prototyping welcomes further use in all areas of orthopedics. PMID: 28852351
  408. Rapid Prototyping 3D Model in Treatment of Pediatric Hip Dysplasia: A Case Report

    PubMed

    Holt, Andrew M; Starosolski, Zbigniew; Kan, J Herman; Rosenfeld, Scott B

    2017-01-01

    Rapid prototyping is an emerging technology that integrates common medical imaging with specialized production mechanisms to create detailed anatomic replicas. 3D-printed models of musculoskeletal anatomy have already proven useful in orthopedics and their applications continue to expand. We present the case of a 10-year-old female with Down syndrome and left acetabular dysplasia and chronic hip instability who underwent periacetabular osteotomy. A rapid prototyping 3D model was created to better understand the anatomy, counsel the family about the problem and the surgical procedure, as well as guide surgical technique. The intricate detail and size match of the model with the patient's anatomy offered unparalleled, hands-on experience with the patient's anatomy pre-operatively and improved surgical precision. Our experience with rapid prototyping confirmed its ability to enhance orthopedic care by improving the surgeon's ability to understand complex anatomy. Additionally, we report a new application utilizing intraoperative fluoroscopic comparison of the model and patient to ensure surgical precision and minimize the risk of complications. This technique could be used in other challenging cases. The increasing availability of rapid prototyping welcomes further use in all areas of orthopedics.

  409. Modelling stream aquifer seepage in an alluvial aquifer: an improved loosing-stream package for MODFLOW

    NASA Astrophysics Data System (ADS)

    Osman, Yassin Z.; Bruen, Michael P.

    2002-07-01

    Seepage from a stream, which partially penetrates an unconfined alluvial aquifer, is studied for the case when the water table falls below the streambed level. Inadequacies are identified in current modelling approaches to this situation. A simple and improved method of incorporating such seepage into groundwater models is presented. This considers the effect on seepage flow of suction in the unsaturated part of the aquifer below a disconnected stream and allows for the variation of seepage with water table fluctuations. The suggested technique is incorporated into the saturated code MODFLOW and is tested by comparing its predictions with those of SWMS_2D, a widely used variably saturated model simulating water flow and solute transport in two-dimensional variably saturated media. Comparisons are made of both seepage flows and local mounding of the water table. The suggested technique compares very well with the results of variably saturated model simulations. Most currently used approaches are shown to underestimate the seepage and associated local water table mounding, sometimes substantially. The proposed method is simple, easy to implement and requires only a small amount of additional data about the aquifer hydraulic properties.
  410. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller

    PubMed

    Niamul Islam, Naz; Hannan, M A; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping of oscillation is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted on a linear time-invariant (LTI) model of a power system. It includes formulating the design into a multi-objective function from the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation.
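    A compact sketch of backtracking search optimization on a test function follows, loosely following Civicioglu's published formulation (historical population, random-amplitude mutation, map-based crossover, greedy selection); the eigenvalue-based controller objective from the paper is not reproduced, and this simplified variant is an assumption rather than the authors' exact implementation.

        import numpy as np

        def bsa(f, bounds, pop=30, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            dim = len(lo)
            P = rng.uniform(lo, hi, (pop, dim))      # current population
            oldP = rng.uniform(lo, hi, (pop, dim))   # historical population
            fit = np.array([f(x) for x in P])
            for _ in range(iters):
                if rng.random() < rng.random():      # historical-memory update
                    oldP = P.copy()
                oldP = oldP[rng.permutation(pop)]    # shuffle history
                F = 3.0 * rng.standard_normal()      # mutation amplitude
                mask = rng.random((pop, dim)) < rng.random()  # crossover map
                trial = np.where(mask, P + F * (oldP - P), P)
                trial = np.clip(trial, lo, hi)
                tfit = np.array([f(x) for x in trial])
                better = tfit < fit                  # greedy selection
                P[better], fit[better] = trial[better], tfit[better]
            return P[fit.argmin()], fit.min()

        sphere = lambda x: float(np.sum(x**2))
        best, val = bsa(sphere, np.array([[-5.0, 5.0]] * 4))
        print("best objective:", val)

    In the controller-tuning application, f(x) would return the eigenvalue-based damping objective evaluated on the LTI power system model for a candidate PSS/TCSC parameter vector x.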
  411. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus, to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.

  412. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model, and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
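    The Kappa statistic used to score the model is a one-line computation given the confusion matrix between predicted and ground-truth maps. A sketch with made-up counts:

        import numpy as np

        def cohens_kappa(confusion):
            """Cohen's kappa from a square confusion matrix."""
            c = np.asarray(confusion, dtype=float)
            n = c.sum()
            po = np.trace(c) / n                       # observed agreement
            pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
            return (po - pe) / (1.0 - pe)

        # Rows: ground truth (no mining, mining); columns: model prediction.
        # The counts are illustrative, not from the USGS study.
        print("kappa =", round(cohens_kappa([[820, 60], [45, 75]]), 3))

    Kappa discounts the agreement expected by chance, which matters here because "no mining" cells dominate the landscape and raw accuracy would look deceptively high.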
  413. An approximate model for cancellous bone screw fixation

    PubMed

    Brown, C. J.; Sinclair, R. A.; Day, A.; Hess, B.; Procter, P.

    2013-04-01

    This paper presents a finite element (FE) model to identify parameters that affect the performance of an improved cancellous bone screw fixation technique, and hence potentially improve fracture treatment. In cancellous bone of low apparent density, it can be difficult to achieve adequate screw fixation and hence provide stable fracture fixation that enables bone healing. Data from predictive FE models indicate that cements can have a significant potential to improve screw holding power in cancellous bone. These FE models are used to demonstrate the key parameters that determine pull-out strength in a variety of screw, bone and cement set-ups, and to compare the effectiveness of different configurations. The paper concludes that significant advantages, up to an order of magnitude, in screw pull-out strength in cancellous bone might be gained by the appropriate use of a currently approved calcium phosphate cement.

  414. Cacao Intensification in Sulawesi: A Green Prosperity Model Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, K.; Elchinger, M.; Hill, G.

    2014-09-01

    NREL conducted eight model projects for the Millennium Challenge Corporation's (MCC) Compact with Indonesia. Green Prosperity, the largest project of the Compact, seeks to address critical constraints to economic growth while supporting the Government of Indonesia's commitment to a more sustainable, less carbon-intensive future. This study evaluates techniques to improve cacao farming in Sulawesi, Indonesia, with an emphasis on Farmer Field Schools and Cocoa Development Centers to educate farmers and on train-the-trainer programs. The study estimates the economic viability of cacao farming if smallholders implement techniques to increase yield, as well as the social and environmental impacts of the project.
  415. Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model

    PubMed

    Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M

    2017-04-01

    Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device to allow for visually directed TT placement compared to the standard of care in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging, with the operator blinded to results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TT were placed, 15 using the TTVT technique and 15 using the standard of care open technique. All of the TT placed using TTVT were without complication and in optimal position. Conversely, 27% of TT placed using the standard of care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.

  416. Detection and Prediction of Hail Storms in Satellite Imagery using Deep Learning

    NASA Astrophysics Data System (ADS)

    Pullman, M.; Gurung, I.; Ramachandran, R.; Maskey, M.

    2017-12-01

    Natural hazards, such as damaging hail storms, dramatically disrupt both industry and agriculture, having significant socio-economic impacts in the United States. In 2016, hail was responsible for $3.5 billion and $23 million in damage to property and crops, respectively, making it the second costliest 2016 weather phenomenon in the United States. The destructive nature and high cost of hail storms have driven research into the development of more accurate hail-prediction algorithms in an effort to mitigate societal impacts. Recently, weather forecasting efforts have turned to deep learning neural networks, because neural networks can more effectively model the complex, nonlinear, dynamical phenomena that exist in large datasets through multiple stages of transformation and representation. In an effort to improve hail-prediction techniques, we propose a deep learning technique that leverages satellite imagery to detect and predict the occurrence of hail storms. The technique is applied to satellite imagery from 2006 to 2016 for the contiguous United States and incorporates hail reports obtained from the National Center for Environmental Information Storm Events Database for training and validation purposes. In this presentation, we describe a novel approach to predicting hail via a neural network model that creates a large labeled dataset of hail storms, the accuracy and results of the model, and its applications for improving hail forecasting.
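    A minimal sketch of the kind of network such a study might train on labeled satellite patches follows; the architecture, channel count and patch size are illustrative assumptions, not the authors' model.

        import torch
        import torch.nn as nn

        class HailCNN(nn.Module):
            """Tiny CNN scoring a multi-channel satellite patch for hail."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
                self.head = nn.Sequential(nn.Flatten(),
                                          nn.Linear(32 * 8 * 8, 1))  # hail logit

            def forward(self, x):
                return self.head(self.features(x))

        model = HailCNN()
        patch = torch.randn(8, 4, 32, 32)   # batch of 4-channel 32x32 patches
        labels = torch.randint(0, 2, (8, 1)).float()  # hail report yes/no
        loss = nn.BCEWithLogitsLoss()(model(patch), labels)
        loss.backward()                     # one illustrative training step
        print("loss:", float(loss))

    In practice the labels would come from the Storm Events Database reports co-located with the imagery, and the patches from the 2006-2016 archive the abstract describes.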
  417. New mechanistic insights in the NH3-SCR reactions at low temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggeri, Maria Pia; Selleri, Tomasso; Nova, Isabella

    2016-05-06

    The present study is focused on the investigation of the low temperature standard SCR reaction mechanism over Fe- and Cu-promoted zeolites. Different techniques are employed, including in situ DRIFTS, transient reaction analysis and chemical trapping techniques. The results present strong evidence of nitrite formation in the oxidative activation of NO and of the role of nitrites in SCR reactions. These elements lead to a deeper understanding of the standard SCR chemistry at low temperature and can potentially improve the consistency of mechanistic mathematical models. Furthermore, comprehension of the mechanism on a fundamental level can contribute to the development of improved SCR catalysts.

  418. Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres

    NASA Astrophysics Data System (ADS)

    Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.

    2016-12-01

    Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, the RT models must be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to increases in computer speed and improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, but is orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information.
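    The PCA step itself is standard linear algebra. The sketch below compresses a synthetic matrix of per-wavelength optical-property profiles to a handful of principal components, which is the precondition for running the expensive multiple-scattering model only on the reduced set; the matrix dimensions and rank are assumptions for illustration.

        import numpy as np

        # Synthetic optical-property "spectra": 3000 wavelengths x 60 layers,
        # generated from 3 underlying vertical patterns plus small noise.
        rng = np.random.default_rng(7)
        n_wavelengths, n_layers = 3000, 60
        base = rng.standard_normal((3, n_layers))
        weights = rng.standard_normal((n_wavelengths, 3))
        tau = weights @ base + 0.01 * rng.standard_normal((n_wavelengths, n_layers))

        mean = tau.mean(axis=0)
        u, s, vt = np.linalg.svd(tau - mean, full_matrices=False)
        k = 4                                   # retained principal components
        scores = u[:, :k] * s[:k]               # per-wavelength PC scores
        tau_hat = mean + scores @ vt[:k]        # low-rank reconstruction
        err = np.abs(tau_hat - tau).max()
        print(f"{k} PCs reproduce {n_wavelengths} spectra to max error {err:.1e}")

    In the fast-RT scheme, the costly scattering model is then evaluated only at the mean profile and at perturbations along each retained component, and correction factors map those few results back to every wavelength.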

419. Estimating crustal thickness and Vp/Vs ratio with joint constraints of receiver function and gravity data

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai

    2018-05-01

    The teleseismic receiver function H-κ stacking technique is popular for estimating crustal thickness and Vp/Vs ratio, but it carries large uncertainty or ambiguity when the Moho multiples in the receiver function are hard to identify. We present an improved technique that estimates the crustal thickness and Vp/Vs ratio under joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and to the heterogeneous density distribution within the crust, are associated with the crustal thickness, density, and Vp/Vs ratio. Following the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies with a standard likelihood-estimation algorithm to obtain the crustal thickness and Vp/Vs ratio, and then use these to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrate that the technique reduces the ambiguity and enhances the accuracy of the estimates. A real-data test at two stations in the NE margin of the Tibetan Plateau shows that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
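
    For reference, the conventional H-κ stack that this record improves upon can be written as a simple grid search over thickness H and Vp/Vs ratio κ, stacking receiver-function amplitudes at the predicted Ps, PpPs, and PpSs+PsPs delay times (Zhu-Kanamori style). The gravity-constraint step of the paper is not shown; the weights and the crustal P velocity default are common choices, not values from the paper.

        import numpy as np

        def hk_stack(rf, dt, p, vp=6.3,
                     H_grid=np.arange(25.0, 65.0, 0.5),
                     k_grid=np.arange(1.60, 2.00, 0.01),
                     w=(0.6, 0.3, 0.1)):
            """Grid search over crustal thickness H (km) and Vp/Vs ratio k.
            rf: receiver-function amplitude samples, dt: sample interval (s),
            p: ray parameter (s/km), vp: assumed crustal P velocity (km/s)."""
            def amp(t):  # nearest-sample amplitude at delay t
                i = int(round(t / dt))
                return rf[i] if 0 <= i < len(rf) else 0.0

            S = np.zeros((len(H_grid), len(k_grid)))
            for i, H in enumerate(H_grid):
                for j, k in enumerate(k_grid):
                    qs = np.sqrt((k / vp) ** 2 - p ** 2)   # Vs = vp / k
                    qp = np.sqrt((1.0 / vp) ** 2 - p ** 2)
                    S[i, j] = (w[0] * amp(H * (qs - qp))       # Ps
                               + w[1] * amp(H * (qs + qp))     # PpPs
                               - w[2] * amp(2.0 * H * qs))     # PpSs + PsPs
            i, j = np.unravel_index(np.argmax(S), S.shape)
            return H_grid[i], k_grid[j], S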

420. A digital strategy for manometer dynamic enhancement

    NASA Technical Reports Server (NTRS)

    Stoughton, J. W.

    1978-01-01

    The application of digital signal processing techniques to improve the nonlinear dynamic characteristics of a sonar-type mercury manometer is described. The dynamic enhancement strategy quasi-linearizes the manometer characteristics and improves the effective bandwidth in the context of a wind-tunnel pressure regulation system. Model identification data and real-time hybrid simulation data demonstrate the feasibility of the approach.

421. Use of the Job Model Concept to Guide Job Description Procedures for Army Officers

    ERIC Educational Resources Information Center

    Whitmore, Paul G.

    The objective of Work Unit SKYGUARD has been to facilitate the development of an improved Air Defense Officers Advanced Course (C-22) by the U.S. Army Air Defense School. Focus is on techniques for improving the completeness and relevance of the instructional objectives with respect to future job requirements. The job description procedures…

422. A Quality System for Education: Using Quality and Productivity Techniques To Save Our Schools

    ERIC Educational Resources Information Center

    Spanbauer, Stanley J.; Hillman, Jo

    This book provides a case study of the implementation of a quality improvement model to improve educational services at Fox Valley Technical College (FVTC), in Appleton, Wisconsin. Chapter 1 describes the early stages of the implementation of the quality processes at FVTC. Chapter 2 discusses the role of the chief administrator as mentor and…

423. A method of improving sensitivity of carbon/oxygen well logging for low porosity formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Juntao; Zhang, Feng; Zhang, Quanying

    2016-12-01

    The carbon/oxygen (C/O) spectral logging technique has been widely used to determine residual oil saturation and to evaluate water-flooded layers. To improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of measured spectra and obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. The responses of carbon/oxygen calculated by the conventional energy-window method and by the new method are compared for oil saturation under low-porosity conditions. The results show that the new method can reduce the effect of gamma rays contaminated by interactions between neutrons and other elements on the carbon/oxygen ratio, and therefore significantly improves the response sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging under low-porosity conditions.
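
    The Gaussian-plus-background peak fitting mentioned above can be sketched with a standard least-squares fit. The linear background model and the starting values are assumptions of this sketch; the 4.44 MeV carbon and 6.13 MeV oxygen gamma lines in the usage note are the usual C/O logging peaks (physics constants, not values from the paper).

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_peak(E, a, mu, sigma, b0, b1):
            """Gaussian peak on a linear background."""
            return a * np.exp(-(E - mu) ** 2 / (2.0 * sigma ** 2)) + b0 + b1 * E

        def net_peak_area(energy, counts, mu_guess, sigma_guess=0.05):
            p0 = [counts.max(), mu_guess, sigma_guess, counts.min(), 0.0]
            popt, _ = curve_fit(gaussian_peak, energy, counts, p0=p0)
            a, sigma = popt[0], popt[2]
            return a * abs(sigma) * np.sqrt(2.0 * np.pi)  # net area under the peak

        # e.g. a C/O ratio from the carbon and oxygen inelastic gamma peaks:
        # co = net_peak_area(E, spec, 4.44) / net_peak_area(E, spec, 6.13)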

425. Use of Satellite Data Assimilation to Infer Land Surface Thermal Inertia

    NASA Technical Reports Server (NTRS)

    Lapenta, William; McNider, Richard T.; Biazar, Arastoo; Suggs, Ron; Jedlovec, Gary; Dembek, Scott

    2002-01-01

    There are two important but observationally uncertain parameters in the grid-averaged surface energy budgets of mesoscale models: surface moisture availability and thermal heat capacity. A technique has been successfully developed for assimilating Geostationary Operational Environmental Satellite (GOES) skin temperature tendencies during the mid-morning time frame to improve the specification of surface moisture. In a new application of the technique, the use of satellite skin temperature tendencies in the early evening is explored to improve the specification of the surface thermal heat capacity. Together, these two satellite assimilation constraints have been shown to significantly improve the characterization of the surface energy budget of a mesoscale model on fine spatial scales. The GOES assimilation without the adjusted heat capacity was run operationally during the International H2O Project on a 12-km grid. This paper presents the results obtained when using both the moisture availability and heat capacity retrievals in concert. Preliminary results indicate that retrieved moisture availability alone improved the verification statistics of 2-meter temperature and dew point forecasts. Results from the 1.5-month study period using the bulk heat capacity will be presented at the meeting.
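
    A minimal sketch of the nudging idea behind this technique follows, assuming a simple proportional update rule; the gain, sign convention, and bounds are illustrative choices, not the paper's actual formulation.

        def nudge_parameter(param, dT_obs, dT_model, K=0.1, lo=0.0, hi=1.0):
            """One nudging update for an uncertain surface parameter
            (e.g. moisture availability in 0..1): if the model surface
            warms faster than GOES observes, raise the evaporative
            moisture availability to damp the heating.
            dT_obs, dT_model: skin-temperature tendencies (K/hr)."""
            param = param + K * (dT_model - dT_obs)
            return min(max(param, lo), hi)  # keep within physical bounds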

426. Investigation of high-speed free shear flows using improved pressure-strain correlated Reynolds stress turbulence model

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Lakshmanan, B.

    1993-01-01

    A high-speed shear layer is studied using a compressibility-corrected Reynolds stress turbulence model that employs a newly developed model for the pressure-strain correlation. The MacCormack explicit predictor-corrector method is used to solve the governing equations and the turbulence transport equations. The stiffness arising from the source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained with the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.
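
    Since the abstract names the MacCormack explicit predictor-corrector method, here is the scheme in its simplest setting, one-dimensional linear advection, rather than the full Reynolds-stress system solved in the paper.

        import numpy as np

        def maccormack_step(u, c, dx, dt):
            """One MacCormack predictor-corrector step for linear advection
            u_t + c u_x = 0 on a periodic grid."""
            # predictor: forward difference
            up = u - c * dt / dx * (np.roll(u, -1) - u)
            # corrector: backward difference on the predicted field
            return 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))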

427. New Methods in Tissue Engineering: Improved Models for Viral Infection

    PubMed

    Ramanan, Vyas; Scull, Margaret A; Sheahan, Timothy P; Rice, Charles M; Bhatia, Sangeeta N

    2014-11-01

    New insights in the study of virus and host biology in the context of viral infection are made possible by the development of model systems that faithfully recapitulate the in vivo viral life cycle. Standard tissue culture models lack critical emergent properties driven by cellular organization and in vivo-like function, whereas animal models suffer from limited susceptibility to relevant human viruses and make it difficult to perform detailed molecular manipulation and analysis. Tissue engineering techniques may enable virologists to create infection models that combine the facile manipulation and readouts of tissue culture with the virus-relevant complexity of animal models. Here, we review the state of the art in tissue engineering and describe how tissue engineering techniques may alleviate some common shortcomings of existing models of viral infection, with a particular emphasis on hepatotropic viruses. We then discuss possible future applications of tissue engineering to virology, including current challenges and potential solutions.

428. Exploring QSARs of the interaction of flavonoids with GABA (A) receptor using MLR, ANN and SVM techniques

    PubMed

    Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K

    2014-10-01

    Quantitative structure-activity relationship (QSAR) models for the binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of the GABA (A) receptor complex were calculated using two machine learning methods: artificial neural network (ANN) and support vector machine (SVM) techniques. The models obtained were compared with those from multiple linear regression (MLR) analysis. Descriptor selection and model building were performed with 10-fold cross-validation on the training data set. The SVM and MLR coefficients of determination are 0.944 and 0.879, respectively, for the training set, higher than those of the ANN models. Although the SVM model fits the training set better, the ANN model was superior to SVM and MLR in predicting the test set. A randomization test was employed to check the suitability of the models.
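
    The model comparison described here, MLR versus SVM with 10-fold cross-validation, can be sketched with scikit-learn; the kernel and regularization settings are assumptions, and the descriptor-selection step is omitted.

        from sklearn.linear_model import LinearRegression
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def compare_qsar_models(X, y, cv=10):
            """Compare MLR and SVM regressors for log Ki prediction with
            cross-validated R^2; X is the descriptor matrix and y the
            binding affinities, both assumed precomputed."""
            models = {"MLR": LinearRegression(),
                      "SVM": SVR(kernel="rbf", C=10.0)}
            return {name: cross_val_score(m, X, y, cv=cv, scoring="r2").mean()
                    for name, m in models.items()}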

429. Effect of Bypass Capacitor in Common-mode Noise Reduction Technique for Automobile PCB

    NASA Astrophysics Data System (ADS)

    Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atushi

    In this letter, we study a common-mode noise reduction technique for in-vehicle electronic equipment comprising a large-scale integrated circuit (LSI), a printed circuit board (PCB), wiring harnesses, and a ground plane. We improve the model circuit of the common-mode noise that flows to the wire harness by adding the effect of bypass capacitors located near the LSI.

430. Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data

    PubMed Central

    Luo, Gang; Frey, Lewis J.

    2017-01-01

    Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse, so it is beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation, showing that our optimized execution method of pivoting significantly outperforms the current basic execution method. Our techniques can be used to build a data extraction tool that simplifies the specification of, and improves the efficiency of, extracting data from the EAV tables in electronic medical records and clinical study data management systems. PMID:25608318
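
    The pivot operation these techniques optimize can be sketched in a few lines; the early filtering of unneeded parameters below corresponds to the first technique in the abstract, while the multi-table and multi-query optimizations are not shown. Table and column names are hypothetical.

        import pandas as pd

        def pivot_eav(eav, needed_params):
            """Pivot EAV tuples (entity, attribute, value) into one row per
            entity, filtering to the needed clinical parameters *before*
            pivoting."""
            subset = eav[eav["attribute"].isin(needed_params)]   # early filter
            wide = subset.pivot_table(index="entity", columns="attribute",
                                      values="value", aggfunc="first")
            return wide.reset_index()

        # usage:
        # eav = pd.DataFrame({"entity": [1, 1, 2],
        #                     "attribute": ["hr", "bp", "hr"],
        #                     "value": [72, 118, 64]})
        # print(pivot_eav(eav, ["hr", "bp"]))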

431. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. Surface reconstruction and sampling concepts are then integrated to derive a cover model with sufficient complexity for each participant. Finally, each participant holds a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and that the share values have higher levels of privacy and improved robustness. The technique is simple and has proven to be a feasible secret 3D model sharing scheme.
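
    The sharing step, evaluating a polynomial over a prime field with secret data in its coefficients, follows Shamir's classic construction. This sketch puts secret data only in the constant term and uses a small prime for readability; both are simplifications of the scheme in the record, which carries secret data in all coefficients.

        import random

        P = 257  # small prime for illustration; values are encoded into [0, P-1]

        def make_shares(secret_vals, k, n):
            """Shamir (k, n) threshold sharing of integer values (e.g. the
            encoded coordinates of one point of the secret geometry).
            Participant x receives shares[x]; any k participants can
            reconstruct by Lagrange interpolation at x = 0."""
            shares = {x: [] for x in range(1, n + 1)}
            for s in secret_vals:
                # one random degree-(k-1) polynomial per secret value
                coeffs = [s % P] + [random.randrange(P) for _ in range(k - 1)]
                for x in shares:
                    shares[x].append(sum(c * pow(x, i, P)
                                         for i, c in enumerate(coeffs)) % P)
            return shares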

432. Analysis of the dynamic behavior of structures using the high-rate GNSS-PPP method combined with a wavelet-neural model: Numerical simulation and experimental tests

    NASA Astrophysics Data System (ADS)

    Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.

    2018-03-01

    Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aims to improve the accuracy of the oscillation properties extracted from structural movements monitored with the high-rate (10 Hz) GNSS-PPP technique. A model combining wavelet packet transform (WPT) de-noising with neural network (NN) prediction is proposed to improve the detection of structural dynamic behavior with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were used to confirm the efficiency of the proposed model and of the monitoring technique in detecting the dynamic behavior of structures. The results reveal that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.
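
    A sketch of the de-noise-then-predict idea in this record follows; a plain discrete wavelet transform with a universal threshold stands in for the paper's wavelet packet transform, and the small feed-forward network and lag structure are assumptions.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        def denoise_and_predict(signal, lags=10, wavelet="db4", level=4):
            """Wavelet-threshold a 10 Hz GNSS-PPP displacement series, then
            train a small network to predict the next sample from lagged
            values."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))      # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                    for c in coeffs[1:]]
            clean = pywt.waverec(coeffs, wavelet)[:len(signal)]

            X = np.array([clean[i:i + lags] for i in range(len(clean) - lags)])
            y = clean[lags:]
            nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
            return clean, nn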

433. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics, but grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. Surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user-friendly software systems for surface modeling and grid generation are therefore critical for computational fluid dynamics to reach its potential. The papers presented here represent the state of the art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

434. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid-body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations in circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

435. The robust corrective action priority: an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    In spite of being an integral part of risk-based quality improvement efforts, studies improving the quality of the selection of corrective action priorities using the FMEA technique are still limited in the literature, and none considers both the robustness and the risk of competing improvement initiatives. This study proposes a theoretical model for selecting among competing corrective actions that accounts for their robustness and risk. We incorporate the principle of robust design in counting the preference score among corrective action candidates; along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

436. Improving the Document Development Process: Integrating Relational Data and Statistical Process Control

    ERIC Educational Resources Information Center

    Miller, John

    1994-01-01

    Presents an approach to document numbering, document titling, and process measurement which, when used with fundamental techniques of statistical process control, reveals meaningful process-element variation as well as nominal productivity models. (SR)

437. Simulating Drosophila Genetics with the Computer

    ERIC Educational Resources Information Center

    Small, James W., Jr.; Edwards, Kathryn L.

    1979-01-01

    Presents some techniques developed to help improve student understanding of Mendelian principles through the use of a computer simulation model of the genetic system of the fruit fly. Includes discussion and evaluation of this computer-assisted program. (MA)

438. Variational stereo imaging of oceanic waves with statistical constraints

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image-processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors, and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image-matching strategy. The weak statistical constraint is thoroughly evaluated in combination with the other elements presented, to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

439. A two-dimensional numerical simulation of a supersonic, chemically reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1988-01-01

    Research has been undertaken to achieve an improved understanding of the physical phenomena present when a supersonic flow undergoes chemical reaction. A detailed understanding of supersonic reacting flows is necessary for the successful development of the advanced propulsion systems now planned for use late in this century and beyond. To explore such flows, a study was begun to create appropriate physical models for describing supersonic combustion and to develop accurate and efficient numerical techniques for solving the governing equations that result from these models. From this work, two computer programs were written to study reacting flows. Both programs consider the multicomponent diffusion and convection of important chemical species, the finite-rate reaction of these species, and the resulting interaction of the fluid mechanics and the chemistry. The first program employed a finite difference scheme for integrating the governing equations, whereas the second used a hybrid Chebyshev pseudospectral technique for improved accuracy.

440. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms for localizing multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce computational cost. We then present a focusing-based method that does not require a symmetric array configuration. Using the focusing technique, the near-field signal model is transformed into a model with the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
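
    The far-field bearing-estimation step that becomes available after focusing can be sketched with the standard MUSIC pseudospectrum for a uniform linear array; the element spacing and scan grid below are assumptions, and the focusing transformation itself is not shown.

        import numpy as np

        def music_bearings(X, n_src, d=0.5, grid=np.linspace(-90, 90, 361)):
            """1D MUSIC pseudospectrum for a uniform linear array.
            X: (n_sensors, n_snapshots) complex snapshots,
            d: element spacing in wavelengths. Peaks of the returned
            spectrum indicate source bearings (degrees)."""
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            w, V = np.linalg.eigh(R)                 # eigenvalues ascending
            En = V[:, :X.shape[0] - n_src]           # noise subspace
            m = np.arange(X.shape[0])
            spec = []
            for th in np.radians(grid):
                a = np.exp(2j * np.pi * d * m * np.sin(th))  # steering vector
                spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return grid, np.asarray(spec)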

441. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2010-01-01

    The poster provides an overview of techniques to improve the sensitivity of the Mars Global Reference Atmospheric Model (Mars-GRAM). During the Mars Science Laboratory (MSL) site selection process, it was discovered that Mars-GRAM is less than realistic when used for sensitivity studies with TES MapYear = 0 and large optical depth values such as tau = 3. A preliminary fix has been made to Mars-GRAM by adding a density factor value determined for tau = 0.3, 1 and 3.

442. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks, and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
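
    The sparse classifier at the heart of this method, an l1-regularized logistic regression over randomness features, can be sketched with scikit-learn. The FGMM refinement stage is omitted, and the single entropy feature shown is an illustrative example of a randomness statistic, not the paper's feature set.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def entropy_feature(stream: bytes) -> float:
            """One simple randomness feature: empirical byte entropy (bits)."""
            counts = np.bincount(np.frombuffer(stream, dtype=np.uint8),
                                 minlength=256)
            p = counts[counts > 0] / len(stream)
            return float(-(p * np.log2(p)).sum())

        def train_identifier(X, y, C=1.0):
            """l1-regularized logistic regression;
            X: (n_streams, n_features), y: 1 = encrypted, 0 = plain."""
            return LogisticRegression(penalty="l1", solver="liblinear",
                                      C=C).fit(X, y)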

443. Evaluation of Advanced Signal Processing Techniques to Improve Detection and Identification of Embedded Defects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clayton, Dwight A.; Santos-Villalobos, Hector J.; Baba, Justin S.

    By the end of 1996, 109 nuclear power plants were operating in the United States, producing 22% of the Nation's electricity [1]. At present, more than two thirds of these power plants are more than 40 years old. The purpose of the U.S. Department of Energy Office of Nuclear Energy's Light Water Reactor Sustainability (LWRS) Program is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the operating lifetimes of nuclear power plants (NPPs) beyond 60 years [2]. The most important safety structures in an NPP are constructed of concrete. These structures generally do not allow for destructive evaluation, and access is limited to one side of the concrete element. Therefore, there is a need for techniques and technologies that can nondestructively assess the internal health of complex, reinforced concrete structures. Previously, we documented the challenges associated with nondestructive evaluation (NDE) of thick, reinforced concrete sections and prioritized conceptual designs of specimens that could be fabricated to represent NPP concrete structures [3]. Consequently, a 7-foot-tall, 7-foot-wide, 3-foot-4-inch-thick concrete specimen was constructed with 2.257-inch- and 1-inch-diameter rebar every 6 to 12 inches. In addition, defects were embedded in the specimen to assess the performance of existing and future NDE techniques. The defects were designed to give a mix of realistic and controlled defects for assessing the measures needed to overcome the challenges posed by more heavily reinforced concrete structures; information on the embedded defects is documented in [4]. We also documented the superiority of Frequency Banded Decomposition (FBD) Synthetic Aperture Focusing Technique (SAFT) over conventional SAFT when probing defects under deep concrete cover; improvements include an intensity corresponding to a defect that is not visible at all in regular, full-frequency-content SAFT, or improved contrast over conventional SAFT reconstructed images. This report documents our efforts on four fronts: 1) a comparative study between traditional SAFT and FBD SAFT for concrete specimens with and without alkali-silica reaction (ASR) damage; 2) improvement of our Model-Based Iterative Reconstruction (MBIR) for thick reinforced concrete [5]; 3) development of a universal framework for sharing, reconstruction, and visualization of ultrasound NDE datasets; and 4) application of machine learning techniques for automated detection of ASR inside concrete. The comparative study between FBD and traditional SAFT reconstruction images shows a clear difference between images of ASR and non-ASR specimens; in particular, the left first harmonic shows increased contrast and sensitivity to ASR damage. For MBIR, we show the superiority of model-based techniques over delay-and-sum techniques such as SAFT, with improvements including the elimination of artifacts caused by direct-arrival signals and increased contrast and signal-to-noise ratio. For the universal framework, we document a data storage format based on the HDF5 file format and propose a modular graphical user interface (GUI) for easy customization of data conversion, reconstruction, and visualization routines. Finally, two techniques for automated ASR detection are presented: the first analyzes the frequency content using a Hilbert Transform Indicator (HTI), and the second employs artificial neural network (ANN) techniques for training and classification of ultrasound data into ASR- and non-ASR-damaged classes. The ANN technique shows great potential, with classification accuracy above 95%. These approaches are extensible to the detection of additional defects and damage in thick, reinforced concrete.

444. Research on bulbous bow optimization based on the improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang

    2017-08-01

    To reduce the total resistance of a hull, an optimization framework for the bulbous bow is presented. The total resistance in calm water is taken as the objective function, the overset mesh technique is used for mesh generation, and a RANS method is used to calculate the total resistance of the hull. To improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique is introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm is proposed for the optimization model. After a series of optimization analyses, the optimal hull form is found. It can be concluded that the simulation-based design framework built in this paper is a promising method for bulbous bow optimization.
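
    A baseline particle swarm optimizer for such a resistance objective looks as follows; the paper's IPSO modifies the global-search behavior in ways not reproduced here, and the coefficients are textbook defaults rather than the authors' settings.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, iters=200,
                         w=0.72, c1=1.49, c2=1.49,
                         rng=np.random.default_rng(0)):
            """Minimize f over box constraints bounds: (dim, 2) array.
            f would wrap the RANS resistance evaluation in the paper's
            setting; here it is any callable on a design vector."""
            bounds = np.asarray(bounds, dtype=float)
            dim = len(bounds)
            x = rng.uniform(bounds[:, 0], bounds[:, 1], (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                # inertia + cognitive + social velocity update
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, bounds[:, 0], bounds[:, 1])
                fx = np.array([f(p) for p in x])
                better = fx < pval
                pbest[better], pval[better] = x[better], fx[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()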

445. A new improved study of cyanotoxins presence from experimental cyanobacteria concentrations in the Trasona reservoir (Northern Spain) using the MARS technique

    PubMed

    García Nieto, P J; Alonso Fernández, J R; Sánchez Lasheras, F; de Cos Juez, F J; Díaz Muñiz, C

    2012-07-15

    Cyanotoxins, a kind of poisonous substance produced by cyanobacteria, are responsible for health risks in drinking and recreational waters. The aim of this study is to improve our previous, successful work on predicting cyanotoxins from experimental cyanobacteria concentrations in the Trasona reservoir (Asturias, Northern Spain) using the multivariate adaptive regression splines (MARS) technique at a local scale. The improvement consists of using not only biological variables but also physical-chemical ones. As a result, the coefficient of determination improved from 0.84 to 0.94, that is, more accurate predictions and a better approximation to the real problem were obtained. Finally, the agreement of the MARS model with experimental data confirmed its good performance. Copyright © 2012 Elsevier B.V. All rights reserved.

446. An Indirect Data Assimilation Scheme for Deep Soil Temperature in the Pleim-Xiu Land Surface Model

    EPA Science Inventory

    The Pleim-Xiu land surface model (PX LSM) has been improved by the addition of a second indirect data assimilation scheme. The first, which was described previously, is a technique where soil moisture is nudged according to the biases in 2-m air temperature and relative humidity be…

447. The Impact of Video Modeling on Improving Social Skills in Children with Autism

    ERIC Educational Resources Information Center

    Alzyoudi, Mohammed; Sartawi, AbedAlziz; Almuhiri, Osha

    2014-01-01

    Children with autism often show a lack of the interactive social skills that would allow them to engage with others successfully. They therefore frequently need training to aid them in successful social interaction. Video modeling is a widely used instructional technique that has been applied to teach children with developmental disabilities such…

449. Improving Teacher Attitude and Morale through Maintaining Teacher Effectiveness: An Indiana Staff Development Model

    ERIC Educational Resources Information Center

    Gilman, David A.; And Others

    The purpose of this study was to determine the effects of Maintaining Teaching Effectiveness, a staff development model, upon public school educators' attitudes toward various professional and personal factors. The techniques used for the project included a collegial support network and peer coaching. A total of 24 educators participated from…

450. Off-line, built-in test techniques for VLSI circuits

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Sievers, M. W.

    1982-01-01

    It is shown that the use of redundant on-chip circuitry improves the testability of an entire VLSI circuit. In the study described here, five techniques applied to a two-bit ripple-carry adder are compared: self-oscillation, self-comparison, partition, scan path, and built-in logic block observer. It is noted that both classical stuck-at faults and nonclassical faults occur in IC structures, such as bridging faults (shorts), stuck-on-x faults where x may be 0, 1, or vary between the two, and parasitic flip-flop faults. To simplify the analysis of the testing techniques, however, a stuck-at fault model is assumed.

451. Addressing Dynamic Issues of Program Model Checking

    NASA Technical Reports Server (NTRS)

    Lerda, Flavio; Visser, Willem

    2001-01-01

    Model checking real programs has recently become an active research area. Programs, however, exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both issues within the context of the Java PathFinder (JPF) model checker. First, we show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next, we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved and how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.

452. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, C.D.; Allison, M.L.

    The Bluebell field is productive from the Tertiary lower Green River and Wasatch Formations of the Uinta Basin, Utah. The productive interval consists of thousands of feet of interbedded fractured clastic and carbonate beds deposited in a fluvial-dominated lacustrine environment. Wells in the Bluebell field are typically completed by perforating 40 or more beds over 1,000 to 3,000 vertical feet (300-900 m) and then stimulating the entire interval. This completion technique is believed to leave many potentially productive beds damaged and/or untreated, while allowing water-bearing and low-pressure (thief) zones to communicate with the wellbore. Geologic and engineering characterization has been used to define improved completion techniques. A two-year characterization study involved detailed examination of outcrops, cores, well logs, surface and subsurface fractures, produced oil-field waters, and engineering parameters of the two demonstration wells, as well as analysis of past completion techniques and their effectiveness. The characterization study resulted in recommendations for improved completion techniques and a field-demonstration program to test those techniques; the results of the study and the proposed demonstration program are discussed in the second annual technical progress report. The operator of the wells was unable to begin the field demonstration this project year (October 1, 1995 to September 20, 1996). Correlation and thickness mapping of individual beds in the Wasatch Formation was completed and resulted in a series of maps of the individual beds. These data were used in constructing the reservoir models. Non-fractured and fractured geostatistical models and reservoir simulations were generated for a 20-square-mile (51.8-km²) portion of the Bluebell field. The modeling provides insights into the effects of fracture porosity and permeability in the Green River and Wasatch reservoirs.

453. Research on ionospheric tomography based on variable pixel height

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui

    2016-05-01

    A novel ionospheric tomography technique based on variable pixel height was developed for tomographic reconstruction of the ionospheric electron density distribution. The method treats the height of each pixel as an unknown variable, which is retrieved during the inversion together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In a numerical simulation, the VHCIT technique achieved superior results compared with conventional CIT models, and a careful validation of its reliability and superiority was performed. According to a statistical analysis of the average root mean square errors, the proposed model offers a 15% improvement over conventional CIT models.

454. Agent based simulations in disease modeling. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Pappalardo, Francesco; Pennisi, Marzio

    2016-07-01

    Fibrosis represents a process where excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent-based models (ABM) can make a valuable contribution to the understanding and better management of fibrotic diseases.
  456. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
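    A minimal sketch of the two-step idea, assuming scikit-learn's FastICA and a simple empirical quantile mapping as the univariate corrector; the component count and the mapping itself are illustrative choices, not the authors' exact configuration:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def qmap(x, ref):
        """Quantile-map the values of x onto the empirical distribution of ref."""
        p = (np.argsort(np.argsort(x)) + 0.5) / len(x)   # plotting positions of x
        return np.quantile(ref, p)

    def ica_bias_correct(gcm, obs, n_comp=8):
        """Two-step correction: (1) quantile-map statistically independent
        signals to fix spatial dependence, (2) conventional grid-scale mapping.
        gcm, obs: arrays of shape (n_times, n_gridcells)."""
        ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
        s_obs = ica.fit_transform(obs)          # independent signals of the obs
        s_gcm = ica.transform(gcm)              # model data in the same basis
        s_cor = np.column_stack([qmap(s_gcm[:, k], s_obs[:, k])
                                 for k in range(n_comp)])
        spatial = ica.inverse_transform(s_cor)  # back-transform to grid space
        return np.column_stack([qmap(spatial[:, k], obs[:, k])
                                for k in range(obs.shape[1])])
    ```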
  457. Exploring Techniques for Improving Retrievals of Bio-optical Properties of Coastal Waters

    DTIC Science & Technology

    2011-09-30

    A BRDF model was developed for coastal waters and validated on the data of the two LISCO instruments, including its comparison with MODIS satellite imagery ... in field conditions to validate radiative transfer modeling and assess possibilities for the separation of organic and inorganic particulate ... to retrieve water components and compared with NOMAD and field CCNY data. Simulated datasets were also used to develop a BRDF model for coastal ...

  458. Consistent Alignment of Word Embedding Models

    DTIC Science & Technology

    2017-03-02

    ... propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space leveraging carefully ... representations of linguistic entities, most often referred to as embeddings. This includes techniques that rely on matrix factorization (Levy & Goldberg) ... higher, the variation is much higher as well. As we increase the size of the neighborhood, or improve the quality of our sample by only picking the most ...
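    The excerpt does not spell out the alignment procedure; a standard way to realize such an alignment, shown here purely as an assumption rather than the report's exact method, is an orthogonal Procrustes rotation estimated over a shared vocabulary:

    ```python
    import numpy as np

    def procrustes_align(X, Y):
        """Orthogonal rotation R minimizing ||X @ R - Y||_F, computed from the
        SVD of X^T Y; rows of X and Y are embeddings of the same words from
        two different models (or runs of the same model)."""
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt

    # Usage: R = procrustes_align(emb_run1, emb_run2); aligned = emb_run1 @ R
    ```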
  459. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael

    2011-01-01

    This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results ...

  460. Evaluation of Bogus Vortex Techniques with Four-Dimensional Variational Data Assimilation

    NASA Technical Reports Server (NTRS)

    Pu, Zhao-Xia; Braun, Scott A.

    2000-01-01

    The effectiveness of techniques for creating "bogus" vortices in numerical simulations of hurricanes is examined by using the Penn State/NCAR nonhydrostatic mesoscale model (MM5) and its adjoint system. A series of four-dimensional variational data assimilation (4-D VAR) experiments is conducted to generate an initial vortex for Hurricane Georges (1998) in the Atlantic Ocean by assimilating bogus sea-level pressure and surface wind information into the mesoscale numerical model. Several different strategies are tested for improving the vortex representation. The initial vortices produced by the 4-D VAR technique are able to reproduce many of the structural features of mature hurricanes. The vortices also result in significant improvements to the hurricane forecasts in terms of both intensity and track. In particular, with assimilation of only bogus sea-level pressure information, the response in the wind field is contained largely within the divergent component, with strong convergence leading to strong upward motion near the center. Although the intensity of the initial vortex seems to be well represented, a dramatic spin down of the storm occurs within the first 6 h of the forecast. With assimilation of bogus surface wind data only, an expected dominance of the rotational component of the wind field is generated, but the minimum pressure is adjusted inadequately compared to the actual hurricane minimum pressure. Only when both the bogus surface pressure and wind information are assimilated together does the model produce a vortex that represents the actual intensity of the hurricane and results in significant improvements to forecasts of both hurricane intensity and track.

  461. Application of commercial microwave link (CML) derived precipitation data in a hydrology model

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard; Chwala, Christian; Kunstmann, Harald

    2017-04-01

    In 2016 very local and extremely intensive convective events caused severe flooding in the Alpine space. Despite the large number of monitoring stations, most of the rainfall events were not captured accurately by the existing rain gauge network. As the number of traditional precipitation monitoring sites is in general decreasing, novel rain monitoring techniques are gaining attention. One of the new techniques is the rainfall estimation from signal attenuation in commercial microwave link (CML) networks operated by cellular phone companies. In this contribution, we use CML-derived rainfall information to improve the streamflow forecast of the distributed hydrology model WaSiM-ETH in hindcasting and nowcasting modes. Our model domain covers the complex terrain of the Ammer catchment located in the German Alps. The hydrology model is operated with a spatial resolution of 100 m and with an hourly time step. We present two alternative methods of rainfall estimation from CMLs and compare the results to data from rain gauges and a weather radar. Finally, we show the impact of the rainfall data sets on the hydrology model initialization and in discharge simulations of the Ammer River for selected episodes in 2015 and 2016. We found that the densification of the observation network by the CML observations leads to a significant improvement of the model performance.
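    Rainfall retrieval from a CML rests on the near power-law relation k = a·R^b between specific attenuation and rain rate. A minimal sketch; the a and b coefficients are illustrative, since in practice they depend on link frequency and polarization (e.g., via ITU-R tables):

    ```python
    import numpy as np

    def cml_rain_rate(rsl_db, baseline_db, length_km, a=0.33, b=1.1):
        """Rain rate from commercial microwave link attenuation via k = a * R**b.
        rsl_db: received signal level; baseline_db: dry-weather level.
        a, b: illustrative power-law coefficients (frequency/polarization dependent)."""
        atten = np.clip(baseline_db - rsl_db, 0.0, None)   # rain-induced attenuation [dB]
        k = atten / length_km                              # specific attenuation [dB/km]
        return (k / a) ** (1.0 / b)                        # rain rate [mm/h]
    ```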
  462. Can we use Earth Observations to improve monthly water level forecasts?

    NASA Astrophysics Data System (ADS)

    Slater, L. J.; Villarini, G.

    2017-12-01

    Dynamical-statistical hydrologic forecasting approaches benefit from different strengths in comparison with traditional hydrologic forecasting systems: they are computationally efficient, can integrate and 'learn' from a broad selection of input data (e.g., General Circulation Model (GCM) forecasts, Earth Observation time series, teleconnection patterns), and can take advantage of recent progress in machine learning (e.g., multi-model blending, post-processing and ensembling techniques). Recent efforts to develop a dynamical-statistical ensemble approach for forecasting seasonal streamflow using both GCM forecasts and changing land cover have shown promising results over the U.S. Midwest. Here, we use climate forecasts from several GCMs of the North American Multi Model Ensemble (NMME) alongside 15-minute stage time series from the National River Flow Archive (NRFA) and land cover classes extracted from the European Space Agency's Climate Change Initiative 300 m annual Global Land Cover time series. With these data, we conduct systematic long-range probabilistic forecasting of monthly water levels in UK catchments over timescales ranging from one to twelve months ahead. We evaluate the improvement in model fit and model forecasting skill that comes from using land cover classes as predictors in the models. This work opens up new possibilities for combining Earth Observation time series with GCM forecasts to predict a variety of hazards from space using data science techniques.

  463. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    NASA Astrophysics Data System (ADS)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.
  464. Model-driven approach to data collection and reporting for quality improvement

    PubMed

    Curcin, Vasa; Woodcock, Thomas; Poots, Alan J; Majeed, Azeem; Bell, Derek

    2014-12-01

    Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding, thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, the Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
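    The reports WISH generates are grounded in Statistical Process Control. A minimal sketch of the individuals (XmR) chart limits such reports typically plot, assuming a plain NumPy series of the tracked quality measure:

    ```python
    import numpy as np

    def xmr_limits(x):
        """Individuals (XmR) control chart limits used in improvement work (SPC)."""
        mr = np.abs(np.diff(x))        # moving ranges between consecutive points
        centre = x.mean()
        ucl = centre + 2.66 * mr.mean()   # 2.66 = 3 / d2 for subgroups of size 2
        lcl = centre - 2.66 * mr.mean()
        return lcl, centre, ucl
    ```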
  465. Improved Impact of Atmospheric Infrared Sounder (AIRS) Radiance Assimilation in Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Chou, Shih-Hung; Jedlovec, Gary

    2012-01-01

    Improvements to global and regional numerical weather prediction (NWP) have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but impact on regional forecasts has been much smaller than for global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an identical analysis configuration to the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) that mimics the analysis methodology, domain, and observational datasets for the regional North American Mesoscale (NAM) model run at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC) are run to examine the impact of each type of AIRS data set. The first configuration will assimilate the AIRS radiance data along with other conventional and satellite data using techniques implemented within the operational system; the second configuration will assimilate AIRS retrieved profiles instead of AIRS radiances in the same manner. Preliminary results of this study will be presented and focus on the analysis impact of the radiances and profiles for selected cases.

  466. Assessments on GOCE-based Gravity Field Model Comparisons with Terrestrial Data Using Wavelet Decomposition and Spectral Enhancement Approaches

    NASA Astrophysics Data System (ADS)

    Erol, Serdar; Serkan Isık, Mustafa; Erol, Bihter

    2016-04-01

    Data from the recent Earth gravity field satellite missions have led to significant improvement in Global Geopotential Models in terms of both accuracy and resolution. However, the improvement in accuracy is not the same everywhere on the Earth, and therefore quantifying the level of improvement locally, using independent data, is necessary. Validation of the level-3 products from the gravity field satellite missions, independently of the estimation procedures of these products, is possible using various data sets, such as terrestrial gravity observations, astrogeodetic vertical deflections, GPS/leveling data, and the stationary sea surface topography. Quantifying the quality of the gravity field functionals derived from recent products is important for regional geoid modeling based on the fusion of satellite and terrestrial data with an optimal algorithm, besides the statistical reporting of improvement rates depending on spatial location. In the validations, the errors and systematic differences between the data and the varying spectral content of the compared signals should be considered in order to have comparable results. Accordingly, this study compares the performance of wavelet decomposition and spectral enhancement techniques in validation of the GOCE/GRACE based Earth gravity field models using GPS/leveling and terrestrial gravity data in Turkey. The terrestrial validation data are filtered using the wavelet decomposition technique, and the numerical results from varying levels of decomposition are compared with results derived using the spectral enhancement approach with the contribution of an ultra-high-resolution Earth gravity field model. The tests include the GO-DIR-R5, GO-TIM-R5, GOCO05S, EIGEN-6C4 and EGM2008 global models. The conclusions discuss the superiority and drawbacks of both concepts and report the performance of the tested gravity field models with an estimate of their contribution to modeling the geoid in Turkish territory.
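    A minimal sketch of the wavelet-decomposition filtering applied to a terrestrial validation series, assuming the PyWavelets package; the wavelet family, decomposition level, and number of retained detail levels are illustrative:

    ```python
    import numpy as np
    import pywt

    def wavelet_lowpass(signal, wavelet="db4", level=4, keep_details=1):
        """Multiresolution filtering of a validation residual series: keep the
        approximation and the coarsest `keep_details` detail levels, zero the
        finer levels that are dominated by high-frequency noise."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        for i in range(1 + keep_details, len(coeffs)):
            coeffs[i] = np.zeros_like(coeffs[i])
        return pywt.waverec(coeffs, wavelet)[: len(signal)]
    ```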
  467. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques often complicate interventions by requiring additional steps to manually define and initialize virtual models. Furthermore, overlaying virtual elements into real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics have achieved comparable performance to previous work in augmented virtuality techniques, and considerable improvement over standard of care ultrasound guidance.

  468. Enhancement of Directional Ambiguity Removal Skill in Scatterometer Data Processing Using Planetary Boundary Layer Models

    NASA Technical Reports Server (NTRS)

    Kim, Young-Joon; Pak, Kyung S.; Dunbar, R. Scott; Hsiao, S. Vincent; Callahan, Philip S.

    2000-01-01

    Planetary boundary layer (PBL) models are utilized to enhance directional ambiguity removal skill in scatterometer data processing. The ambiguity in wind direction retrieved from scatterometer measurements is removed with the aid of physical directional information obtained from PBL models. This technique is based on the observation that sea level pressure is scalar and its field is more coherent than the corresponding wind. An initial wind field obtained from the scatterometer measurements is used to derive a pressure field with a PBL model. After filtering small-scale noise in the derived pressure field, a wind field is generated with an inverted PBL model. This derived wind information is then used to remove wind vector ambiguities in the scatterometer data. It is found that the ambiguity removal skill can be improved when the new technique is used properly in conjunction with the median filter being used for scatterometer wind dealiasing at JPL. The new technique is applied to regions of cyclone systems which are important for accurate weather prediction but where the errors of ambiguity removal are often large.
  469. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by the instrumental errors in the measured concentrations and model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well known inversion techniques, called renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimates after minimizing the representativity errors.
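    A minimal sketch of the least-squares variant of such an inversion: for each candidate source location, the optimal intensity has a closed form, and the best (location, intensity) pair minimizes the residual against the measured concentrations. Here forward() is a placeholder for the dispersion/adjoint model:

    ```python
    import numpy as np

    def estimate_point_source(candidates, measured, forward):
        """Least-squares point-source retrieval over candidate locations.
        forward(loc) must return unit-intensity model concentrations at the
        receptors (placeholder for the dispersion/adjoint computation)."""
        best = (np.inf, None, None)
        for loc in candidates:
            g = forward(loc)                                 # receptor response to unit emission
            q = max(float(g @ measured) / float(g @ g), 0.0) # optimal intensity at this location
            cost = float(np.sum((measured - q * g) ** 2))
            if cost < best[0]:
                best = (cost, loc, q)
        return best[1], best[2]                              # location, intensity
    ```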
  470. Lip-reading enhancement for law enforcement

    NASA Astrophysics Data System (ADS)

    Theobald, Barry J.; Harvey, Richard; Cox, Stephen J.; Lewis, Colin; Owen, Gari P.

    2006-09-01

    Accurate lip-reading techniques would be of enormous benefit for agencies involved in counter-terrorism and other law-enforcement areas. Unfortunately, there are very few skilled lip-readers, and it is apparently a difficult skill to transmit, so the area is under-resourced. In this paper we investigate the possibility of making the lip-reading task more amenable to a wider range of operators by enhancing lip movements in video sequences using active appearance models. These are generative, parametric models commonly used to track faces in images and video sequences. The parametric nature of the model allows a face in an image to be encoded in terms of a few tens of parameters, while the generative nature allows faces to be re-synthesised using the parameters. The aim of this study is to determine if exaggerating lip-motions in video sequences by amplifying the parameters of the model improves lip-reading ability. We also present results of lip-reading tests undertaken by experienced (but non-expert) adult subjects who claim to use lip-reading in their speech recognition process. The results, which are comparisons of word error-rates on unprocessed and processed video, are mixed. We find that there appears to be the potential to improve the word error rate but, for the method to improve intelligibility, more sophisticated tracking and visual modelling are needed. Our technique can also act as an expression or visual gesture amplifier and so has applications to animation and the presentation of information via avatars or synthetic humans.
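    The amplification step itself is simple once a face has been encoded in model parameters: deviations from the neutral (mean) parameters are scaled up before re-synthesis. A sketch, with the AAM fitting and rendering assumed to be done elsewhere:

    ```python
    import numpy as np

    def exaggerate(params, gain=1.5):
        """Amplify active-appearance-model parameters about their temporal mean
        to exaggerate lip motion (gain > 1 enlarges deviations from neutral).
        params: (n_frames, n_modes) AAM parameter trajectories."""
        mean = params.mean(axis=0, keepdims=True)
        return mean + gain * (params - mean)
    ```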
  471. Knee cartilage segmentation using active shape models and local binary patterns

    NASA Astrophysics Data System (ADS)

    González, Germán; Escalante-Ramírez, Boris

    2014-05-01

    Segmentation of knee cartilage has been useful for opportune diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and LBP variants to describe the surrounding texture of the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and it is validated through the Leave-One-Out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and ASM proposed by Cootes. The ASM-LBP approaches are tested with different radii to decide which of them describes the cartilage texture better. The results show that ASM-medianLBP has better performance than ASM-LBP and ASM. Furthermore, we add a routine which improves the robustness against two principal problems: oversegmentation and initialization.

  472. Adaptive design of an X-ray magnetic circular dichroism spectroscopy experiment with Gaussian process modelling

    NASA Astrophysics Data System (ADS)

    Ueno, Tetsuro; Hino, Hideitsu; Hashimoto, Ai; Takeichi, Yasuo; Sawada, Masahiro; Ono, Kanta

    2018-01-01

    Spectroscopy is a widely used experimental technique, and enhancing its efficiency can have a strong impact on materials research. We propose an adaptive design for spectroscopy experiments that uses a machine learning technique to improve efficiency. We examined X-ray magnetic circular dichroism (XMCD) spectroscopy for the applicability of a machine learning technique to spectroscopy. An XMCD spectrum was predicted by Gaussian process modelling with learning of an experimental spectrum using a limited number of observed data points. Adaptive sampling of data points with maximum variance of the predicted spectrum successfully reduced the total data points for the evaluation of magnetic moments while providing the required accuracy. The present method reduces the time and cost for XMCD spectroscopy and has potential applicability to various spectroscopies.
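    A minimal sketch of the adaptive-sampling loop, assuming scikit-learn's Gaussian process regressor and a placeholder measure() callback for the beamline observation; the kernel choice and sampling budgets are illustrative:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def adaptive_scan(energies, measure, n_init=5, n_total=25):
        """Sample next where the GP predictive standard deviation is largest."""
        energies = np.asarray(energies, dtype=float)
        idx = list(np.linspace(0, len(energies) - 1, n_init).astype(int))
        y = [measure(energies[i]) for i in idx]
        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3),
                                      normalize_y=True)
        while len(idx) < n_total:
            gp.fit(energies[np.array(idx)][:, None], np.array(y))
            _, std = gp.predict(energies[:, None], return_std=True)
            std[np.array(idx)] = 0.0      # never re-pick already-measured points
            j = int(np.argmax(std))       # next point: maximum predictive variance
            idx.append(j)
            y.append(measure(energies[j]))
        return np.array(idx), np.array(y)
    ```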
  473. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component for these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
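    Of the two estimation techniques, harmonic analysis is the simplest to sketch: the oscillatory response is projected onto in-phase and out-of-phase components at the forcing frequency by linear least squares:

    ```python
    import numpy as np

    def harmonic_components(t, y, omega):
        """Least-squares in-phase/out-of-phase amplitudes of a response at the
        forcing frequency omega: y ~ a*cos(omega*t) + b*sin(omega*t) + c."""
        A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
        a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
        return a, b, c
    ```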
  474. Material point method modeling in oil and gas reservoirs

    DOEpatents

    Vanderheyden, William Brian; Zhang, Duan

    2016-06-28

    A computer system and method of simulating the behavior of an oil and gas reservoir including changes in the margins of frangible solids. A system of equations including state equations such as momentum, and conservation laws such as mass conservation and volume fraction continuity, are defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations, to derive fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.

  475. Improved 3-omega measurement of thermal conductivity in liquids, gases, and powders using a metal-coated optical fiber

    PubMed

    Schiffres, Scott N; Malen, Jonathan A

    2011-06-01

    A novel 3ω thermal conductivity measurement technique called metal-coated 3ω is introduced for use with liquids, gases, powders, and aerogels. This technique employs a micron-scale metal-coated glass fiber as a heater/thermometer that is suspended within the sample. Metal-coated 3ω exceeds alternate 3ω based fluid sensing techniques in a number of key metrics, enabling rapid measurements of small samples of materials with very low thermal effusivity (gases), using smaller temperature oscillations with lower parasitic conduction losses. Its advantages relative to existing fluid measurement techniques, including transient hot-wire, steady-state methods, and solid-wire 3ω, are discussed. A generalized n-layer concentric cylindrical periodic heating solution that accounts for thermal boundary resistance is presented. Improved sensitivity to boundary conductance is recognized through this model. Metal-coated 3ω was successfully validated through a benchmark study of gases and liquids spanning two orders of magnitude in thermal conductivity. © 2011 American Institute of Physics.

  476. Potential benefits of magnetic suspension and balance systems

    NASA Technical Reports Server (NTRS)

    Lawing, Pierce L.; Dress, David A.; Kilgore, Robert A.

    1987-01-01

    The potential of Magnetic Suspension and Balance Systems (MSBS) to improve conventional wind tunnel testing techniques is discussed. Topics include: elimination of model geometry distortion and support interference to improve the measurement accuracy of aerodynamic coefficients; removal of testing restrictions due to supports; improved dynamic stability data; and stores separation testing. Substantial increases in wind tunnel productivity are anticipated due to the coalescence of these improvements. Specific improvements in testing methods for missiles, helicopters, fighter aircraft, twin fuselage transports and bombers, store separation, water tunnels, and automobiles are also forecast. In a more speculative vein, new wind tunnel test techniques are envisioned as a result of applying MSBS, including free-flight computer trajectories in the test section, pilot-in-the-loop and designer-in-the-loop testing, shipboard missile launch simulation, and optimization of hybrid hypersonic configurations. Also addressed are potential applications of MSBS to such diverse technologies as medical research and practice, industrial robotics, space weaponry, and ore processing in space.
  477. Improving wave forecasting by integrating ensemble modelling and machine learning

    NASA Astrophysics Data System (ADS)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors, with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
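    A minimal sketch of one common learning-aggregation scheme, exponentially weighted averaging, in which each ensemble member's weight decays with its accumulated squared forecast error; the learning rate is illustrative:

    ```python
    import numpy as np

    def aggregate(forecasts, obs, eta=0.1):
        """Exponentially weighted aggregation of ensemble members.
        forecasts: (n_times, n_members); obs: (n_times,). Returns the blended
        forecast at each time using weights learned from past errors only."""
        n_t, n_m = forecasts.shape
        w = np.full(n_m, 1.0 / n_m)
        out = np.empty(n_t)
        for t in range(n_t):
            out[t] = w @ forecasts[t]                # blend before seeing obs[t]
            loss = (forecasts[t] - obs[t]) ** 2      # member losses at time t
            w = w * np.exp(-eta * loss)
            w /= w.sum()                             # renormalize the weights
        return out
    ```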
  478. Comparison of results of an obstacle resolving microscale model with wind tunnel data

    NASA Astrophysics Data System (ADS)

    Grawe, David; Schlünzen, K. Heinke; Pascheke, Frauke

    2013-11-01

    The microscale transport and stream model MITRAS has been improved and a new technique has been implemented to improve numerical stability for complex obstacle configurations. Results of the updated version have been compared with wind tunnel data using an evaluation method that has been established for simple obstacle configurations. MITRAS is a part of the M-SYS model system for the assessment of ambient air quality. A comparison of model results for the flow field against quality ensured wind tunnel data has been carried out for both idealised and realistic test cases. Results of the comparison show a very good agreement of the wind field for most test cases and identify areas of possible improvement of the model. The evaluated MITRAS results can be used as input data for the M-SYS microscale chemistry model MICTM. This paper describes how such a comparison can be carried out for simple as well as realistic obstacle configurations and what difficulties arise.

  479. MODIS Data Assimilation in the CROPGRO model for improving soybean yield estimations

    NASA Astrophysics Data System (ADS)

    Richetti, J.; Monsivais-Huertero, A.; Ahmad, I.; Judge, J.

    2017-12-01

    Soybean is one of the main agricultural commodities in the world. Thus, having better estimates of its agricultural production is important. Improving the soybean crop models in Brazil is crucial for better understanding of the soybean market and enhancing decision making, because Brazil is the second largest soybean producer in the world, Parana state is responsible for almost 20% of it, and by itself would be the fourth greatest soybean producer in the world. Data assimilation techniques provide a method to improve spatio-temporal continuity of crops through integration of remotely sensed observations and crop growth models. This study aims to use MODIS EVI to improve DSSAT-CROPGRO soybean yield estimations in the Parana state, southern Brazil. The method uses the Ensemble Kalman filter which assimilates MODIS Terra and Aqua combined products (MOD13Q1 and MYD13Q1) into the CROPGRO model to improve the agricultural production estimates through update of light interception data over time. Expected results will be validated with monitored commercial farms during the period of 2013-2014.
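    A minimal sketch of the ensemble Kalman filter analysis step used in such assimilation, in its stochastic (perturbed-observation) form; the observation operator mapping model state to MODIS EVI is a placeholder:

    ```python
    import numpy as np

    def enkf_update(X, y, H, r_var, seed=0):
        """Stochastic ensemble Kalman filter analysis step.
        X: (n_state, n_ens) forecast ensemble (e.g. crop model state variables)
        y: (n_obs,) observation vector (e.g. MODIS EVI)
        H: (n_obs, n_state) observation operator; r_var: obs error variance."""
        rng = np.random.default_rng(seed)
        n_obs, n_ens = len(y), X.shape[1]
        Xp = X - X.mean(axis=1, keepdims=True)
        Pf = Xp @ Xp.T / (n_ens - 1)                      # ensemble covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + r_var * np.eye(n_obs))
        Y = y[:, None] + rng.normal(0.0, np.sqrt(r_var), (n_obs, n_ens))
        return X + K @ (Y - H @ X)                        # analysis ensemble
    ```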
  480. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is a key in improving the performance.
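    Each mode-conditioned smoother runs a standard Kalman backward recursion; a minimal sketch of that single-model Rauch-Tung-Striebel pass (the IMM-specific mixing across modes is not shown):

    ```python
    import numpy as np

    def rts_smoother(xf, Pf, xp, Pp, F):
        """Rauch-Tung-Striebel backward pass for one linear-Gaussian model.
        xf, Pf: filtered means/covariances; xp, Pp: one-step predictions
        (xp[k+1] = F @ xf[k], Pp[k+1] = F @ Pf[k] @ F.T + Q); F: transition."""
        n = len(xf)
        xs, Ps = [None] * n, [None] * n
        xs[-1], Ps[-1] = xf[-1], Pf[-1]
        for k in range(n - 2, -1, -1):
            C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])     # smoother gain
            xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
            Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
        return xs, Ps
    ```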
  481. Application of Genetic Algorithm (GA) Assisted Partial Least Square (PLS) Analysis on Trilinear and Non-trilinear Fluorescence Data Sets to Quantify the Fluorophores in Multifluorophoric Mixtures: Improving Quantification Accuracy of Fluorimetric Estimations of Dilute Aqueous Mixtures

    PubMed

    Kumar, Keshav

    2018-03-01

    Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are the 2 fluorescence techniques that are commonly used for the analysis of multifluorophoric mixtures. These 2 fluorescence techniques are conceptually different and provide certain advantages over each other. Manual analysis of such large volumes of highly correlated EEMF and TSFS data towards developing a calibration model is difficult. Partial least square (PLS) analysis can analyze the large volume of EEMF and TSFS data sets by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, often the application of PLS analysis on entire data sets does not provide a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis on EEMF and TSFS data sets towards improving the precision and accuracy of the calibration model. The GA essentially combines the advantages provided by stochastic methods with those provided by deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of the GA assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbon (PAH) mixtures. In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration model developed for both EEMF and TSFS data sets. Hence, GA must be considered as a useful pre-processing technique while developing an EEMF and TSFS calibration model.
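    A toy sketch of GA-based variable selection wrapped around a PLS regression, assuming scikit-learn; the GA operators and hyperparameters are illustrative, and y is taken to be the concentration series of a single fluorophore:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        """Cross-validated R^2 of a PLS model restricted to selected variables."""
        if mask.sum() < 4:
            return -np.inf
        pls = PLSRegression(n_components=min(3, int(mask.sum())))
        return cross_val_score(pls, X[:, mask], y, cv=5, scoring="r2").mean()

    def ga_select(X, y, pop=30, gens=40, p_mut=0.02):
        """Toy GA wavelength selection before PLS calibration."""
        P = rng.random((pop, X.shape[1])) < 0.3            # random initial masks
        for _ in range(gens):
            f = np.array([fitness(m, X, y) for m in P])
            P = P[np.argsort(f)[::-1]]                     # sort by fitness
            elite = P[: pop // 2]                          # keep the best half
            mates = elite[rng.integers(0, len(elite), (pop - len(elite), 2))]
            cut = rng.integers(1, X.shape[1], pop - len(elite))
            kids = np.array([np.r_[a[:c], b[c:]] for (a, b), c in zip(mates, cut)])
            kids ^= rng.random(kids.shape) < p_mut         # mutation: flip bits
            P = np.vstack([elite, kids])
        f = np.array([fitness(m, X, y) for m in P])
        return P[int(np.argmax(f))]                        # best variable mask
    ```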
  482. Remote sensing science for the Nineties; Proceedings of IGARSS '90 - 10th Annual International Geoscience and Remote Sensing Symposium, University of Maryland, College Park, May 20-24, 1990. Vols. 1, 2, & 3

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Various papers on remote sensing (RS) for the nineties are presented. The general topics addressed include: subsurface methods, radar scattering, oceanography, microwave models, atmospheric correction, passive microwave systems, RS in tropical forests, moderate resolution land analysis, SAR geometry and SNR improvement, image analysis, inversion and signal processing for geoscience, surface scattering, rain measurements, sensor calibration, wind measurements, terrestrial ecology, agriculture, geometric registration, subsurface sediment geology, radar modulation mechanisms, radar ocean scattering, SAR calibration, airborne radar systems, water vapor retrieval, forest ecosystem dynamics, land analysis, multisensor data fusion. Also considered are: geologic RS, RS sensor optical measurements, RS of snow, temperature retrieval, vegetation structure, global change, artificial intelligence, SAR processing techniques, geologic RS field experiment, stochastic modeling, topography and Digital Elevation model, SAR ocean waves, spaceborne lidar and optical, sea ice field measurements, millimeter waves, advanced spectroscopy, spatial analysis and data compression, SAR polarimetry techniques. Also discussed are: plant canopy modeling, optical RS techniques, optical and IR oceanography, soil moisture, sea ice backscattering, lightning cloud measurements, spatial textural analysis, SAR systems and techniques, active microwave sensing, lidar and optical, radar scatterometry, RS of estuaries, vegetation modeling, RS systems, EOS/SAR Alaska, applications for developing countries, SAR speckle and texture.

  483. Moho Modeling Using FFT Technique

    NASA Astrophysics Data System (ADS)

    Chen, Wenjin; Tenzer, Robert

    2017-04-01

    To improve the numerical efficiency, the Fast Fourier Transform (FFT) technique was incorporated into Parker-Oldenburg's method for a regional gravimetric Moho recovery, which assumes a planar approximation of the Earth. In this study, we extend this definition for global applications while assuming a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is practically realized in two numerical steps. The gravimetric forward modeling is first applied, based on methods for a spherical harmonic analysis and synthesis of the global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively in order to determine the Moho depth. The application of the FFT technique to both numerical steps reduces the computation time to a fraction of that required without applying this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated using the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals a relatively good agreement between the gravimetric and seismic models, with the RMS of differences (of 4-5 km) at the level of expected uncertainties of the used input datasets, and without the presence of significant systematic bias.
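    A sketch of the planar Parker-type FFT forward step that the iterative inversion repeats. The study itself works in a spherical approximation; the signs and conventions below follow the common planar formulation and should be treated as assumptions:

    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

    def parker_forward(h, dx, dy, z0, drho, n_terms=5):
        """Parker-series FFT forward gravity of an undulating interface.
        h: interface undulation grid [m] about mean depth z0 [m];
        drho: density contrast [kg/m^3]. Returns gravity anomaly [m/s^2]."""
        ny, nx = h.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
        k = np.hypot(*np.meshgrid(kx, ky))          # radial wavenumber grid
        spec = np.zeros_like(h, dtype=complex)
        fact = 1.0
        for n in range(1, n_terms + 1):
            fact *= n                               # running n!
            spec += k ** (n - 1) / fact * np.fft.fft2(h ** n)
        spec *= -2 * np.pi * G * drho * np.exp(-k * z0)
        return np.fft.ifft2(spec).real
    ```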
  484. UAV State Estimation Modeling Techniques in AHRS

    NASA Astrophysics Data System (ADS)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One of the sensor configurations used in UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of the Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
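    As a lightweight stand-in for the EKF/feedback fusion compared above, a complementary filter shows the same structure in miniature: gyro integration blended with the accelerometer's gravity reference. A minimal sketch:

    ```python
    import numpy as np

    def complementary_filter(acc, gyr, dt, alpha=0.98):
        """Minimal roll/pitch estimator: blend integrated gyro rates with
        accelerometer-derived angles (gravity reference).
        acc: (n, 3) accelerometer [m/s^2]; gyr: (n, 3) gyro rates [rad/s]."""
        roll = pitch = 0.0
        out = np.empty((len(acc), 2))
        for k in range(len(acc)):
            ax, ay, az = acc[k]
            roll_acc = np.arctan2(ay, az)                   # gravity reference
            pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
            roll = alpha * (roll + gyr[k, 0] * dt) + (1 - alpha) * roll_acc
            pitch = alpha * (pitch + gyr[k, 1] * dt) + (1 - alpha) * pitch_acc
            out[k] = roll, pitch
        return out
    ```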
  Developments in Signature Process Control

    NASA Astrophysics Data System (ADS)

    Keller, L. B.; Dominski, Marty

    1993-01-01

    Developments in the adaptive process control technique known as Signature Process Control for Advanced Composites (SPCC) are described. This computer control method for autoclave processing of composites was used to develop an optimum cure cycle for AFR 700B polyimide and for an experimental poly-isoimide. An improved process cycle was developed for Avimid N polyimide. The potential for extending the SPCC technique to prepreg quality control, press molding, pultrusion and RTM is briefly discussed.

  On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve RANS-modeled Reynolds stresses by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework for modeling the Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating this improvement in mean-flow prediction, the proposed stability-oriented framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
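
    The training step common to such approaches can be sketched as a regression from baseline RANS mean-flow features to high-fidelity Reynolds stresses. The fragment below uses a random forest and synthetic stand-in data; the paper's actual framework, feature set and stability treatment are not reproduced here.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        # Stand-ins for real data: mean-flow features from a baseline RANS
        # solution (rows = mesh cells) and DNS/LES Reynolds stress components.
        rans_features = rng.normal(size=(5000, 10))
        dns_stress = rng.normal(size=(5000, 6))

        # Learn the feature-to-stress mapping; the conditioning issue the
        # abstract discusses arises when the predicted stress is substituted
        # back into the RANS momentum equations.
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(rans_features, dns_stress)
        predicted_stress = model.predict(rng.normal(size=(100, 10)))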
  Dynamic one-dimensional modeling of secondary settling tanks and design impacts of sizing decisions

    PubMed

    Li, Ben; Stenstrom, Michael K

    2014-03-01

    As one of the most significant components of the activated sludge process (ASP), secondary settling tanks (SSTs) can be investigated with mathematical models to optimize design and operation. This paper takes a new look at the one-dimensional (1-D) SST model by analyzing the impacts of numerical problems, especially on process robustness. An improved SST model using the Yee-Roe-Davis technique as the PDE solver is proposed and compared with the widely used Takács model to show its improvement in numerical solution quality. The improved and Takács models are coupled with a bioreactor model to reevaluate the ASP design basis and several popular control strategies for economic plausibility, contaminant removal efficiency and system robustness. The time to failure due to a rising sludge blanket during overloading, a key robustness indicator, is analyzed to demonstrate the differences caused by numerical issues in SST models. The calculated results indicate that the Takács model significantly underestimates the time to failure, thus leading to a conservative design. Copyright © 2013 Elsevier Ltd. All rights reserved.
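
    For reference, the Takács model used as the comparison baseline rests on a double-exponential settling velocity. A minimal sketch, with commonly used benchmark parameter values shown purely for illustration:

        import numpy as np

        def takacs_settling_velocity(X, v0=474.0, v0_max=250.0,
                                     rh=5.76e-4, rp=2.86e-3, X_min=10.0):
            """Takacs double-exponential settling velocity (m/d) as a
            function of solids concentration X (g/m^3). Parameter values
            are typical benchmark numbers, not those of the paper."""
            Xs = np.maximum(X - X_min, 0.0)
            v = v0 * (np.exp(-rh * Xs) - np.exp(-rp * Xs))
            return np.clip(v, 0.0, v0_max)

    The solids flux between adjacent layers is built from this velocity, and it is the discretization of that flux where the Takács and Yee-Roe-Davis schemes differ.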
  Bolometric Luminosities of Peculiar Type II-P Supernovae: Observational and Theoretical Approaches

    NASA Astrophysics Data System (ADS)

    Lusk, Jeremy Alexander

    2018-01-01

    In the three decades since the explosion of SN 1987A, only a handful of other supernovae have been detected that are also thought to originate from blue supergiant progenitors. In this study, we use the five best-observed of these supernovae (SNe 1998A, 2000cb, 2006V, 2006au, and 2009E) to examine the bolometric properties of the class through observations and theoretical models. Several techniques for taking photometric observations and inferring bolometric luminosities have been used in the literature; our newly improved Python package SuperBoL implements many of them. The challenge remains that the true bolometric luminosity of a supernova cannot be directly observed, so we turn to theoretical models to examine the validity of the different observationally based techniques. In this work, we use the NLTE generalized atmosphere code PHOENIX to produce synthetic spectra of known luminosity that match the observed supernova spectra. Synthetic photometry of these models is then used as input to SuperBoL to test the different observationally based bolometric luminosity techniques.
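
    One widely used observational technique of the kind SuperBoL implements is quasi-bolometric integration of broadband fluxes. A minimal sketch of that idea (not SuperBoL's actual interface):

        import numpy as np

        def quasi_bolometric_luminosity(wavelengths, fluxes, distance_cm):
            """Trapezoidal integration of monochromatic fluxes
            (erg s^-1 cm^-2 A^-1) over wavelength (A), scaled to a
            luminosity at the given distance (cm)."""
            f_int = np.trapz(fluxes, wavelengths)          # erg s^-1 cm^-2
            return 4.0 * np.pi * distance_cm ** 2 * f_int  # erg s^-1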
  Spindle speed variation technique in turning operations: Modeling and real implementation

    NASA Astrophysics Data System (ADS)

    Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.

    2016-11-01

    Chatter remains one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among them, techniques based on varying the rotational speed of the spindle (SSV, Spindle Speed Variation) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On one hand, SSV can generate harmful overheating of the spindle, especially at high speeds; on the other, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper studies the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Second, a procedure to realistically implement the technique in a conventional turning center was developed and tested. The balance between improved stability margins and acceptable spindle behavior is verified by energy consumption measurements. The mathematical model shows good agreement with experimental cutting tests.
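
    Sinusoidal SSV is a common variant of the technique, usually parameterized by an amplitude ratio and a frequency ratio. A minimal sketch of the commanded speed profile, with placeholder tuning values rather than the paper's:

        import numpy as np

        def ssv_profile(t, n0_rpm, rva=0.2, rvf=0.5):
            """Commanded spindle speed (rpm) for sinusoidal SSV.
            rva: amplitude ratio dN/N0; rvf: frequency ratio, giving a
            modulation frequency of rvf * N0/60 Hz. Values are placeholders."""
            f_mod = rvf * n0_rpm / 60.0      # modulation frequency, Hz
            return n0_rpm * (1.0 + rva * np.sin(2.0 * np.pi * f_mod * t))

    The tuning trade-off the abstract mentions appears here directly: larger rva and rvf disrupt chatter more effectively but demand more from the spindle drive.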
  Numerical modeling of cold room's hinged door opening and closing processes

    NASA Astrophysics Data System (ADS)

    Carneiro, R.; Gaspar, P. D.; Silva, P. D.; Domingues, L. C.

    2016-06-01

    The need to rationalize energy consumption in the agrifood industry has hastened the development of methodologies to improve the thermal and energy performance of cold rooms. This paper presents a three-dimensional (3D) transient Computational Fluid Dynamics (CFD) model of a cold room to evaluate the air infiltration rate through hinged doors. A species transport model is used to simulate the tracer gas concentration decay technique. Numerical predictions indicate that the air temperature difference between spaces affects the infiltration. For this case study, the infiltration rate increases by 0.016 m³/s per kelvin of air temperature difference. Knowing how air infiltration evolves during door opening and closing allows conclusions to be drawn about its influence on the air conditions inside the cold room, and suggests best practices and simple technical improvements that can minimize infiltration and consequently improve thermal performance and the rationalization of energy consumption.
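
    The concentration decay technique infers the air change rate from the slope of the log concentration, since C(t) = C0 exp(-N t). A minimal sketch of the post-processing step:

        import numpy as np

        def air_change_rate(t_s, concentration):
            """Air change rate N (1/s) from tracer-gas decay data:
            N is minus the slope of ln C versus time."""
            slope, _ = np.polyfit(t_s, np.log(concentration), 1)
            return -slope

        # Infiltration flow is then N times the room volume, e.g. for an
        # illustrative 45 m^3 cold room:
        # Q = air_change_rate(t, C) * 45.0    # m^3/s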
  Recent Improvements in Semi-Span Testing at the National Transonic Facility (Invited)

    NASA Technical Reports Server (NTRS)

    Gatlin, G. M.; Tomek, W. G.; Payne, F. M.; Griffiths, R. C.

    2006-01-01

    Three wind tunnel investigations of a commercial transport, high-lift, semi-span configuration have recently been conducted in the National Transonic Facility at the NASA Langley Research Center. Throughout these investigations, multiple improvements were developed in the facility's semi-span test capability. The primary purpose of the investigations was to assess Reynolds number scale effects on a modern commercial transport configuration up to full-scale flight conditions (Reynolds numbers on the order of 27 million). The tests included longitudinal aerodynamic studies at subsonic takeoff and landing conditions across a range of Reynolds numbers, from those available in conventional wind tunnels up to flight conditions. This paper discusses lessons learned and improvements incorporated into the semi-span testing process. Topics addressed include enhanced thermal stabilization and moisture reduction procedures, assessments and improvements in model sealing techniques, compensation of model reference dimensions for test temperature, significantly improved semi-span model access, and assessments of data repeatability.

  ASTM clustering for improving coal analysis by near-infrared spectroscopy

    PubMed

    Andrés, J M; Bona, M T

    2006-11-15

    Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, a set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification prior to applying calibration methods to each coal set. The results showed a considerable improvement in determination error compared with calibration over the whole sample set; for some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is necessary to assign that sample to its respective group. Thus, the ability to discriminate and classify coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA). Modelling the groups with SIMCA led to overlapping models that could not discriminate for unique classification. On the other hand, Linear Discriminant Analysis improved the classification of the samples, but not enough to be satisfactory for every group considered.
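
    The LDA classification step can be sketched with scikit-learn on synthetic stand-in spectra; the study's data and preprocessing are not reproduced here.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        # Stand-ins for real data: NIR spectra (rows = coal samples) and
        # the ASTM group label (0-5) assigned by the reference method.
        spectra = rng.normal(size=(120, 400))
        groups = rng.integers(0, 6, size=120)

        lda = LinearDiscriminantAnalysis()
        lda.fit(spectra, groups)

        # A new sample is assigned to a group first; that group's own NIR
        # calibration is then applied, as the abstract describes.
        new_group = lda.predict(rng.normal(size=(1, 400)))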
  Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE PAGES

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulating the safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the code output and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation comes from the development of code surrogate model (code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration process used here, with Markov Chain Monte Carlo (MCMC) sampling, feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor-analysis-type or pattern-recognition-type model. This "function factorization" Gaussian Process (FFGP) model overcomes limitations of standard GP emulators, improving both the accuracy and the speed of the emulator-based calibration process. Calibration of a friction-factor example using the Method of Manufactured Solutions is performed to illustrate key properties of the FFGP-based process.
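
    The generic emulator-based calibration loop can be sketched as follows, using a standard GP rather than the paper's FFGP formulation and a toy stand-in for the expensive code; the observation and its uncertainty are made up.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(2)

        def expensive_code(theta):
            # Toy stand-in for a slow safety-analysis code run.
            return np.sin(3.0 * theta) + 0.5 * theta

        # Train the emulator on a small design of code runs.
        design = rng.uniform(0.0, 2.0, size=(20, 1))
        outputs = expensive_code(design[:, 0])
        emulator = GaussianProcessRegressor().fit(design, outputs)

        y_obs, sigma = 1.2, 0.1   # illustrative observation and uncertainty

        def log_post(theta):
            # Flat prior on [0, 2] plus a Gaussian likelihood built on the
            # emulator's mean prediction.
            if not 0.0 <= theta <= 2.0:
                return -np.inf
            pred = emulator.predict(np.array([[theta]]))[0]
            return -0.5 * ((y_obs - pred) / sigma) ** 2

        # Random-walk Metropolis over the calibration parameter; the fast
        # emulator is what makes this many posterior evaluations feasible.
        theta, chain = 1.0, []
        for _ in range(5000):
            prop = theta + rng.normal(0.0, 0.1)
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            chain.append(theta)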
  Lightning Forecasts and Data Assimilation into Numerical Weather Prediction Models

    NASA Astrophysics Data System (ADS)

    MacGorman, D. R.; Mansell, E. R.; Fierro, A.; Ziegler, C.

    2012-12-01

    This presentation reviews two aspects of lightning in numerical weather prediction (NWP) models: forecasting lightning, and assimilating lightning data into NWP models to improve weather forecasts. One of the earliest routine lightning forecasts was developed for fire weather operations. This approach used a multi-parameter regression analysis of archived cloud-to-ground (CG) lightning data and archived NWP data to optimize the combination of model state variables used in forecast equations for various CG rates. Since then, understanding of how storms produce lightning has improved greatly. As the treatment of ice in the microphysics packages used by NWP models has improved, and as the horizontal resolution of models approaches convection-permitting scales (with convection-resolving scales on the horizon), it is becoming possible to use this improved understanding to predict lightning more directly. An important role for data assimilation in NWP models is to depict the location, timing, and spatial extent of thunderstorms during model spin-up, so that the effects of prior convection that can strongly influence future thunderstorm activity, such as updrafts and outflow boundaries, are included in the initial state of a model run. Radar data have traditionally been used, but systems that map lightning activity with varying degrees of coverage, detail, and detection efficiency are now routinely available over large regions and reveal information about storms that is complementary to that provided by radar. Because data from lightning mapping systems are compact, easily handled, and reliably indicate the location and timing of thunderstorms, even in regions with little or no radar coverage, several groups have investigated techniques for assimilating these data into NWP models. This application will become even more valuable with the launch of the Geostationary Lightning Mapper on the GOES-R satellite, which will extend routine coverage farther into remote regions and provides the most promising means for routine thunderstorm detection over oceans. Ongoing research continues to expand the methods used to assimilate lightning data, which began with simple techniques for assimilating CG data and are now being extended to total lightning data. Most approaches either use the lightning data to indicate where the subgrid-scale convective parameterization of a model should produce deep convection, or use them to modify a model variable related to thunderstorms, such as rainfall rate or water vapor mixing ratio. The developing methods for explicitly predicting lightning activity provide another, more direct means of assimilating total lightning data, besides providing information valuable to the general public and to many governmental and commercial enterprises. Such a direct approach could be particularly useful for ensemble techniques used to produce probabilistic thunderstorm forecasts.
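
    The early fire-weather approach amounts to a multi-parameter regression of archived CG rates on NWP state variables. A minimal sketch with synthetic stand-in data; the operational predictor set is not reproduced here.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        # Stand-ins for archived data: NWP state variables (columns, e.g.
        # CAPE, lifted index, precipitable water) and observed CG rates.
        state_vars = rng.normal(size=(1000, 3))
        cg_rate = np.maximum(
            state_vars @ np.array([5.0, -2.0, 1.5]) + rng.normal(size=1000),
            0.0)

        reg = LinearRegression().fit(state_vars, cg_rate)
        forecast_cg = reg.predict(rng.normal(size=(10, 3)))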
  Comparison of full field and anomaly initialisation for decadal climate prediction: towards an optimal consistency between the ocean and sea-ice anomaly initialisation state

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.

    2017-08-01

    Decadal prediction exploits sources of predictability from both internal variability, through initialisation of the climate model from observational estimates, and external radiative forcings. When a model is initialised with the observed state at the initial time step (Full Field Initialisation, FFI), the forecast run drifts towards the biased model climate. Distinguishing between the climate signal to be predicted and the model drift is a challenging task, because applying an a-posteriori bias correction risks removing part of the variability signal. The anomaly initialisation (AI) technique addresses the drift issue by asking the following question: if the model is allowed to start close to its own attractor (i.e. its biased world), but the phase of the simulated variability is constrained towards the contemporaneous observed phase at initialisation time, does the prediction skill improve? The relative merits of the FFI and AI techniques, applied respectively to the ocean component and to the ocean and sea-ice components simultaneously, are assessed in the EC-Earth global coupled model. For both strategies the initialised hindcasts show better skill than historical simulations for the ocean heat content and the AMOC over the first two forecast years, and for sea ice and the PDO over the first forecast year, while for the AMO the improvements are statistically significant for the first two forecast years. AI in the ocean and sea-ice components significantly improves the skill for Arctic sea surface temperature over FFI.
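
    The arithmetic separating the two initialisation strategies is compact. A minimal sketch, assuming the observed and model climatologies are precomputed on a common grid:

        import numpy as np

        def full_field_init(obs_state):
            """FFI: start the model directly from the observed state."""
            return obs_state

        def anomaly_init(obs_state, obs_clim, model_clim):
            """AI: add the observed anomaly to the model's own climatology,
            so the run starts near the model attractor while carrying the
            observed phase of the variability."""
            return model_clim + (obs_state - obs_clim)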
  The International Reference Ionosphere Today and in the Future

    NASA Technical Reports Server (NTRS)

    Bilitza, Dieter; McKinnell, Lee-Ane; Reinisch, Bodo; Fuller-Rowell, Tim

    2010-01-01

    The International Reference Ionosphere (IRI) is the internationally recognized and recommended standard for the specification of plasma parameters in Earth's ionosphere. It describes monthly averages of electron density, electron temperature, ion temperature, ion composition, and several additional parameters in the altitude range from 60 to 1,500 km. A joint working group of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI) is in charge of developing and improving the IRI model. As requested by COSPAR and URSI, IRI is an empirical model based on most of the available and reliable data sources for the ionospheric plasma. The paper describes the latest version of the model and reviews efforts towards future improvements, including the development of new global models for the F2 peak density and height, and a new approach to describing the electron density in the topside ionosphere and plasmasphere. Our emphasis is on the electron density, because it is the IRI parameter most relevant to geodetic techniques and studies. Annual IRI meetings are the main venue for the discussion of IRI activities, future improvements, and additions to the model. A new special IRI task force activity is focusing on the development of a real-time IRI (RT-IRI) by combining data assimilation techniques with the IRI model. A first RT-IRI task force meeting was held in 2009 in Colorado Springs; we review the outcome of this meeting and the plans for the future. The IRI homepage is at http://www.IRI.gsfc.nasa.gov

  Can modeling of HIV treatment processes improve outcomes? Capitalizing on an operations research approach to the global pandemic

    PubMed Central

    Xiong, Wei; Hupert, Nathaniel; Hollingsworth, Eric B; O'Brien, Megan E; Fast, Jessica; Rodriguez, William R

    2008-01-01

    Background: Mathematical modeling has been applied to a range of policy-level decisions on resource allocation for HIV care and treatment. We describe the application of classic operations research (OR) techniques to address logistical and resource-management challenges in HIV treatment scale-up activities in resource-limited countries.

    Methods: We review and categorize several of the major logistical and operational problems encountered over the last decade in the global scale-up of HIV care and antiretroviral treatment for people with AIDS. While there are unique features of HIV care and treatment that pose significant challenges to effective modeling and service improvement, we identify several analogous OR-based solutions that have been developed in the service, industrial, and health sectors.

    Results: HIV treatment scale-up includes many processes that are amenable to mathematical and simulation modeling, including forecasting future demand for services, locating and sizing facilities for maximal efficiency, and determining optimal staffing levels at clinical centers. Optimization of clinical and logistical processes through modeling may improve outcomes, but successful OR-based interventions will require contextualization of response strategies, including appreciation of both existing health care systems and limitations in local health workforces.

    Conclusion: The modeling techniques developed in the engineering field of operations research have wide potential application to the variety of logistical problems encountered in HIV treatment scale-up in resource-limited settings. Increasing the number of cross-disciplinary collaborations between engineering and public health will help speed the appropriate development and application of these tools. PMID:18680594

  Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data

    NASA Astrophysics Data System (ADS)

    Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.

    2017-12-01

    We present a super-resolution and interference-suppression technique applied to data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution by roughly a factor of 3 and the signal-to-noise ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, whose resolution is limited by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super-resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, electromagnetic interferences (EMIs) are detected and the spectrum is interpolated to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture (SAR) processing. We apply the proposed algorithm to simulated as well as real radar data, demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator, and show that the imaging of subsurface layered structures observed in radargrams is improved. This allows additional insights for the scientific community in the interpretation of SHARAD data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 317, 1715-1718. [2] K. M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).

  Role of community pharmacists in asthma - Australian research highlighting pathways for future primary care models

    PubMed

    Saini, B; Krass, I; Smith, L; Bosnic-Anticevich, S; Armour, C

    2011-01-01

    Asthma is one of the most common chronic conditions affecting the Australian population. Amongst primary healthcare professionals, pharmacists are the most accessible, which places them in an excellent position to play a role in the management of asthma. Globally, trials of many community pharmacy-based asthma care models have provided evidence that pharmacist-delivered interventions can improve clinical, humanistic and economic outcomes for asthma patients. In Australia, a decade of coordinated research efforts across various aspects of asthma care has culminated in the implementation trial of the Pharmacy Asthma Management Service (PAMS), a comprehensive disease management model. There has been research investigating asthma medication adherence through data mining and ways in which usual asthma care can be improved. Our research has focused on self-management education, inhaler technique interventions, spirometry trials, interprofessional models of care, and regional trials addressing the particular needs of rural communities. We have determined that inhaler technique education is a necessity and should be repeated if correct technique is to be maintained. We have identified the effectiveness of health promotion and health education, conducted within and outside the pharmacy, in public fora and settings such as schools, and established that this outreach role is particularly well received and increases the opportunity for people with asthma to engage in their asthma management. Our research has identified that asthma patients have needs which pharmacists delivering specialized models of care can address. Given the substantial evidence for the effectiveness of asthma care by pharmacists, the future must involve integration of this role into primary care.

  Investigation to advance prediction techniques of the low-speed aerodynamics of V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Maskew, B.; Strash, D.; Nathman, J.; Dvorak, F. A.

    1985-01-01

    A computer program, VSAERO, has been applied to a number of V/STOL configurations with a view to advancing prediction techniques for low-speed aerodynamic characteristics. The program couples a low-order panel method with surface streamline calculation and integral boundary layer procedures. The panel method, which uses piecewise-constant source and doublet panels, includes an iterative procedure for wake shape and models the boundary layer displacement effect using the source transpiration technique. Certain improvements to a basic vortex-tube jet model were installed in the code prior to evaluation. Very promising results were obtained for surface pressures near a jet issuing at 90 degrees from a flat plate. A solid-core model was used in the initial part of the jet, with a simple entrainment model; preliminary representation of the downstream separation zone significantly improved the correlation. The program accurately predicted the pressure distribution inside the inlet of the Grumman 698-411 design over a range of flight conditions. Furthermore, coupled viscous/potential-flow calculations gave very close correlation with experimentally determined operational boundaries dictated by the onset of separation inside the inlet. Experimentally observed degradation of these operational boundaries between nacelle-alone tests and tests on the full configuration was also indicated by the calculation. Applications of the program to the General Dynamics STOL fighter design were equally encouraging, with very close agreement between experiment and calculation for the effects of power on pressure distribution, lift and lift-curve slope.