Sample records for incremental analysis update

  1. Development of the Nonstationary Incremental Analysis Update Algorithm for Sequential Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho

    2017-04-01

    This study introduces an altered version of the incremental analysis update (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Analogous to the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. However, unlike the IAU, the NIAU method applies time-evolved forcing, computed with the forward operator, as corrections to the model. In terms of the accuracy of the analysis field, the NIAU solution is better than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added. This is because, in linear systems, the NIAU solution at the end of the assimilation interval equals that of an intermittent data assimilation method. To retain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed from only its dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
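
    The IAU mechanism summarized above can be made concrete with a short sketch. The following Python snippet (illustrative only, not the paper's code; the window length, time step and increment are assumed values) applies a classical IAU on the 40-variable Lorenz-96 model by dividing the analysis increment by the window length and adding it as a constant forcing at every model time step; the NIAU would instead propagate that forcing forward in time with a reduced-rank forward operator.

      import numpy as np

      def lorenz96_tendency(x, F=8.0):
          """Tendency of the 40-variable Lorenz-96 model."""
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

      def rk4_step(x, dt, forcing=0.0):
          """One RK4 step, with an optional constant IAU forcing added to the tendency."""
          f = lambda s: lorenz96_tendency(s) + forcing
          k1 = f(x)
          k2 = f(x + 0.5 * dt * k1)
          k3 = f(x + 0.5 * dt * k2)
          k4 = f(x + dt * k3)
          return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      def iau_window(x_background, increment, n_steps, dt):
          """Classical IAU: spread the analysis increment evenly over the window."""
          forcing = increment / (n_steps * dt)   # sum of forcing*dt over the window equals the increment
          x = x_background.copy()
          for _ in range(n_steps):
              x = rk4_step(x, dt, forcing)
          return x

      rng = np.random.default_rng(0)
      x_b = 8.0 + rng.standard_normal(40)          # background state
      increment = 0.5 * rng.standard_normal(40)    # analysis increment from some assimilation step
      x_iau = iau_window(x_b, increment, n_steps=20, dt=0.01)
      print(x_iau[:5])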

  2. Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms

    PubMed Central

    Bailey, Heather R.; Zacks, Jeffrey M.

    2015-01-01

    Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248

  3. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The assimilation results are validated against different sources of observations using both deterministic and probabilistic metrics. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows better re-establishment of the equilibrium model state and, on the other hand, smooths the strong gradients in dynamically active regions.
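
    The probabilistic validation above rests on the continuous ranked probability score. As a hedged illustration (not the authors' diagnostic code), the empirical CRPS of an ensemble against a scalar observation can be computed from the identity CRPS = E|X - y| - 0.5 E|X - X'|:

      import numpy as np

      def crps_ensemble(ensemble, obs):
          """Empirical CRPS of a 1-D ensemble against a scalar observation.

          Uses CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are independent
          draws from the ensemble distribution.
          """
          x = np.asarray(ensemble, dtype=float)
          term1 = np.mean(np.abs(x - obs))
          term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
          return term1 - term2

      # Stand-in ensemble, e.g. 50 forecast members of sea surface temperature (degC)
      members = np.random.default_rng(1).normal(loc=15.2, scale=0.4, size=50)
      print(crps_ensemble(members, obs=15.0))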

  4. Endogenous-cue prospective memory involving incremental updating of working memory: an fMRI study.

    PubMed

    Halahalli, Harsha N; John, John P; Lukose, Ammu; Jain, Sanjeev; Kutty, Bindu M

    2015-11-01

    Prospective memory paradigms are conventionally classified on the basis of event-, time-, or activity-based intention retrieval. In the vast majority of such paradigms, intention retrieval is provoked by some kind of external event. However, prospective memory retrieval cues that prompt intention retrieval in everyday life are commonly endogenous, i.e., linked to a specific imagined retrieval context. We describe herein a novel prospective memory paradigm wherein the endogenous cue is generated by incremental updating of working memory, and investigated the hemodynamic correlates of this task. Eighteen healthy adult volunteers underwent functional magnetic resonance imaging while they performed a prospective memory task where the delayed intention was triggered by an endogenous cue generated by incremental updating of working memory. Working memory and ongoing task control conditions were also administered. The 'endogenous-cue prospective memory condition' with incremental working memory updating was associated with maximum activations in the right rostral prefrontal cortex, and additional activations in the brain regions that constitute the bilateral fronto-parietal network, central and dorsal salience networks as well as cerebellum. In the working memory control condition, maximal activations were noted in the left dorsal anterior insula. Activation of the bilateral dorsal anterior insula, a component of the central salience network, was found to be unique to this 'endogenous-cue prospective memory task' in comparison to previously reported exogenous- and endogenous-cue prospective memory tasks without incremental working memory updating. Thus, the findings of the present study highlight the important role played by the dorsal anterior insula in incremental working memory updating that is integral to our endogenous-cue prospective memory task.

  5. FRED: a program development tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shilling, J.

    1985-09-01

    The structured, screen-based editor FRED is introduced. FRED provides incremental parsing and semantic analysis. The parsing is based on an LL(1) top-down algorithm which has been modified to provide follow-the-cursor parsing and soft templates. The languages accepted by the editor are LL(1) languages with the addition of the Unknown and preferred production non-terminal classes. The semantic analysis is based on the incremental update of attribute grammar equations. We briefly describe the interface between FRED and an automated reference librarian system that is under development.

  6. Single-pass incremental force updates for adaptively restrained molecular dynamics.

    PubMed

    Singh, Krishna Kant; Redon, Stephane

    2018-03-30

    Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in a given wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms through speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to simulate a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
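
    A rough sketch of the single-pass incremental idea follows (a toy illustration with an assumed harmonic pair potential, not the LAMMPS implementation described in the paper): pairwise force contributions are cached, and at each step only pairs involving at least one active (unrestrained) particle are recomputed, since pairs of two restrained particles have not moved relative to each other.

      import numpy as np
      from itertools import combinations

      def pair_force(ri, rj, k=1.0, r0=1.0):
          """Toy harmonic pair force acting on particle i due to particle j."""
          d = ri - rj
          r = np.linalg.norm(d)
          return -k * (r - r0) * d / r

      def incremental_forces(pos, active, cache):
          """Update total forces, recomputing only pairs touching an active particle."""
          n = len(pos)
          for i, j in combinations(range(n), 2):
              if active[i] or active[j] or (i, j) not in cache:
                  cache[(i, j)] = pair_force(pos[i], pos[j])   # recompute changed pair
              # pairs of two restrained particles keep their cached contribution
          forces = np.zeros_like(pos)
          for (i, j), f in cache.items():
              forces[i] += f
              forces[j] -= f
          return forces

      rng = np.random.default_rng(2)
      pos = rng.random((6, 3)) * 3.0
      active = np.array([True, False, False, True, False, False])       # adaptively restrained flags
      cache = {}
      f0 = incremental_forces(pos, active, cache)                        # first pass fills the cache
      pos[active] += 0.01 * rng.standard_normal((active.sum(), 3))       # only active particles move
      f1 = incremental_forces(pos, active, cache)                        # recomputes only affected pairs
      print(np.round(f1, 3))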

  7. Distributed Common Ground System Army Increment 1 (DCGS-A Inc 1)

    DTIC Science & Technology

    2016-03-01

    Acquisition Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal...updated prior to the FDD ITAB in December 2012 and provided additional COA analysis/validation referenced in the FDD ADM (December 14, 2012) and FDD ...required by 10 U.S.C. 2334(a)(6). The Army Cost Review Board developed the FDD Army Cost Position (ACP), dated October 19, 2012, through the update of

  8. Online and unsupervised face recognition for continuous video stream

    NASA Astrophysics Data System (ADS)

    Huo, Hongwen; Feng, Jufu

    2009-10-01

    In this paper, we present a novel online face recognition approach for video streams. Our method includes two stages: pre-training and online training. In the pre-training phase, our method observes interactions, collects batches of input data, and attempts to estimate their distributions (a Box-Cox transformation is adopted here to normalize rough estimates). In the online training phase, our method incrementally improves the classifier's knowledge of the face space and updates it continuously with incremental eigenspace analysis. The performance achieved by our method shows its great potential in video stream processing.
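
    The incremental eigenspace analysis mentioned above can be approximated, as a hedged sketch rather than the authors' method, with scikit-learn's IncrementalPCA, which refines the face-space basis batch by batch as new frames arrive (the image size and component count below are assumptions):

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      rng = np.random.default_rng(3)
      ipca = IncrementalPCA(n_components=16)

      # Stand-in for batches of vectorized face crops (e.g. 32x32 grayscale -> 1024-dim)
      for _ in range(10):                      # ten batches arriving from the video stream
          batch = rng.random((64, 1024))
          ipca.partial_fit(batch)              # update the eigenspace incrementally

      new_frame = rng.random((1, 1024))
      coords = ipca.transform(new_frame)       # project a new face into the current eigenspace
      print(coords.shape)                      # (1, 16)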

  9. Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation

    NASA Astrophysics Data System (ADS)

    Bedi, Amrit Singh; Rajawat, Ketan

    2018-05-01

    Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well suited to heterogeneous networks as it allows the computationally challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
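
    A hedged toy sketch of the delayed stochastic dual (sub)gradient idea follows (not the paper's algorithm or its beamforming application; the utility, budget, step size and delay are assumptions): power is allocated by ascending the dual of an average-power constraint while the gradient applied at each step may be several slots old.

      import numpy as np
      from collections import deque

      rng = np.random.default_rng(4)
      budget = 1.0            # long-term average power budget
      step = 0.05             # constant dual step size
      delay = 3               # gradients arrive 'delay' iterations late
      lam = 0.5               # dual variable (price of power)
      pending = deque()       # buffer of not-yet-applied stochastic gradients

      for t in range(200):
          h = rng.exponential(1.0)                 # random channel gain this slot
          # Primal step: maximize log(1 + h*p) - lam*p  =>  p = max(1/lam - 1/h, 0)
          p = max(1.0 / lam - 1.0 / h, 0.0)
          pending.append(p - budget)               # stochastic constraint (sub)gradient
          if len(pending) > delay:                 # apply a stale gradient (asynchronous update)
              g = pending.popleft()
              lam = max(lam + step * g, 1e-3)      # projected dual ascent

      print("final dual price:", round(lam, 3))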

  10. Data update in a land information network

    NASA Astrophysics Data System (ADS)

    Mullin, Robin C.

    1988-01-01

    The ongoing update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and of the data update process has been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating: bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks; when an update occurs, only the affected blocks need to be transferred. An incremental update approach records each feature that is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred; it can be placed in a file to warn users that, if they are working in an area containing markers, updated data are available. The four approaches have been tested using a cadastral data set.
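
    A minimal sketch of the incremental update approach described above (illustrative only, not the ARC/INFO prototype; the feature records are hypothetical): each change to the master data set is logged as an add or delete of a feature, and only this changelog is transmitted to bring a remote copy up to date.

      # Hypothetical feature records keyed by id; only the changelog crosses the network.
      master = {101: "parcel A", 102: "parcel B"}
      remote_copy = dict(master)
      changelog = []   # (operation, feature_id, payload)

      def add_feature(fid, payload):
          master[fid] = payload
          changelog.append(("add", fid, payload))

      def delete_feature(fid):
          del master[fid]
          changelog.append(("delete", fid, None))

      add_feature(103, "parcel C")
      delete_feature(101)

      # Incremental update: apply only the recorded changes to the remote copy.
      for op, fid, payload in changelog:
          if op == "add":
              remote_copy[fid] = payload
          else:
              remote_copy.pop(fid, None)

      assert remote_copy == master
      print(remote_copy)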

  11. Support vector machine incremental learning triggered by wrongly predicted samples

    NASA Astrophysics Data System (ADS)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may cause old samples to migrate between the SV set and the non-support-vector (NSV) set; at the same time, the learning model should be updated based on the SVs. However, it is not immediately clear which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does little to improve accuracy but reduces training speed. Therefore, how the new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
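
    As a hedged, simplified stand-in for the error-triggered scheme (not the authors' exact KKT-based algorithm), the sketch below keeps only the current support vectors and refits the SVM whenever a newly arriving sample is predicted wrongly:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=600, n_features=10, random_state=5)
      X_init, y_init, stream = X[:100], y[:100], zip(X[100:], y[100:])

      clf = SVC(kernel="rbf", C=1.0).fit(X_init, y_init)
      pool_X, pool_y = X_init[clf.support_], y_init[clf.support_]   # keep support vectors only
      updates = 0

      for x_new, y_new in stream:
          if clf.predict(x_new.reshape(1, -1))[0] != y_new:         # wrong prediction triggers an update
              pool_X = np.vstack([pool_X, x_new])
              pool_y = np.append(pool_y, y_new)
              clf = SVC(kernel="rbf", C=1.0).fit(pool_X, pool_y)    # refit on SVs + triggering sample
              pool_X, pool_y = pool_X[clf.support_], pool_y[clf.support_]
              updates += 1

      print("incremental updates performed:", updates)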

  12. Logistics Modernization Program Increment 2 (LMP Inc 2)

    DTIC Science & Technology

    2016-03-01

    Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal Year IA...Documentation within the LMP Increment 2 MS C ADM, the LMP Increment 2 Business Case was updated for the FDD using change pages to remove information...following approval of the Army Cost Position being developed for the FDD . The LMP Increment 2 Business Case Change Pages were approved and signed by the

  13. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  14. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    PubMed Central

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental angle and accelerometer incremental velocity samples, in order to improve the accuracy of the system. Then, in order to implement the new strapdown algorithm in a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform on the basis of fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm, meeting the real-time and high-precision requirements of the system in a highly dynamic environment, relative to the existing implementation on a DSP platform. PMID:22164058

  15. Multi-Model Ensemble Approaches to Data Assimilation Using the 4D-Local Ensemble Transform Kalman Filter

    DTIC Science & Technology

    2013-09-30

    accuracy of the analysis . Root mean square difference ( RMSD ) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental... Analysis Update globally for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD ...short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called “ analysis

  16. Construction and updating of event models in auditory event processing.

    PubMed

    Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-02-01

    Humans segment the continuous stream of sensory information into distinct events at points of change, perceiving an event boundary between two events. Current theories propose that changes in the sensory information trigger updating of the current event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading-time studies (increased reading times with an increasing amount of change) suggests that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating of recognition memory in normally sighted and blind participants. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Modelling and Prediction of Spark-ignition Engine Power Performance Using Incremental Least Squares Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke

    2010-05-01

    Modern automotive spark-ignition (SI) engine power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. Using the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that power performance models built with the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, their accuracy can be continuously improved. The results predicted with the models estimated by the incremental LS-SVM are in good agreement with the actual test results and have almost the same average accuracy as models retrained from scratch, but the incremental algorithm significantly shortens the model construction time when new training data arrive.

  18. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
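
    A hedged minimal sketch of the core idea follows (random Fourier features approximating an RBF kernel combined with a constant-cost recursive linear-regression update); it is not the authors' implementation, and the feature dimension, lengthscale and noise level are assumed values.

      import numpy as np

      class IncrementalSSGPR:
          """Toy incremental sparse spectrum GPR: random features + rank-1 updates."""
          def __init__(self, dim, n_features=100, lengthscale=1.0, noise=0.1, seed=0):
              rng = np.random.default_rng(seed)
              self.W = rng.standard_normal((n_features, dim)) / lengthscale  # spectral points of an RBF kernel
              self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
              self.A = noise**2 * np.eye(n_features)    # running regularized Gram matrix
              self.r = np.zeros(n_features)             # running feature/target correlation

          def _phi(self, x):
              D = len(self.b)
              return np.sqrt(2.0 / D) * np.cos(self.W @ x + self.b)

          def update(self, x, y):
              """O(D^2) rank-1 update per sample: cost stays constant over time."""
              phi = self._phi(x)
              self.A += np.outer(phi, phi)
              self.r += phi * y

          def predict(self, x):
              return self._phi(x) @ np.linalg.solve(self.A, self.r)

      rng = np.random.default_rng(1)
      model = IncrementalSSGPR(dim=1, n_features=200, lengthscale=0.5, noise=0.05)
      for _ in range(500):                              # streaming one sample at a time
          x = rng.uniform(-3, 3, size=1)
          model.update(x, np.sin(x[0]) + 0.05 * rng.standard_normal())
      print(round(model.predict(np.array([1.0])), 3), "vs", round(np.sin(1.0), 3))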

  19. Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka

    2010-11-01

    The size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it is very difficult for information retrieval tools to fully utilize Web information. As a solution to this problem, XML pages come into play, providing structural information to users to some extent. Without efficient indexes, query processing can be quite inefficient due to exhaustive traversal of XML data. In this paper, an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload, so the overhead of reconstruction can be minimized. Frequently used paths are extracted by applying a sequential pattern mining algorithm to successive queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial matching queries can be executed efficiently and users obtain more relevant documents in the results.

  20. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times those of the non-incremental algorithms.
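
    The locality property described above can be illustrated with networkx (a hedged sketch, not the paper's algorithms): after inserting an edge, only vertices with core number K = min(core(u), core(v)) that are reachable from the K-valued endpoint through vertices of the same core number can change, so an incremental algorithm only needs to process that candidate subgraph.

      import networkx as nx
      from collections import deque

      def candidate_set(G, core, u, v):
          """Vertices whose core number may change after inserting edge (u, v)."""
          K = min(core[u], core[v])
          root = u if core[u] == K else v
          seen, queue = {root}, deque([root])
          while queue:                                   # traversal restricted to core-number-K vertices
              w = queue.popleft()
              for nbr in G[w]:
                  if core[nbr] == K and nbr not in seen:
                      seen.add(nbr)
                      queue.append(nbr)
          return seen

      G = nx.karate_club_graph()
      core_before = nx.core_number(G)
      u, v = 0, 9                                        # a new edge arriving in the dynamic graph
      G.add_edge(u, v)
      cand = candidate_set(G, core_before, u, v)         # small subgraph an incremental algorithm would process
      core_after = nx.core_number(G)                     # this sketch recomputes globally just to verify

      changed = {n for n in G if core_after[n] != core_before[n]}
      assert changed <= cand                             # all changes are confined to the candidate set
      print(f"{len(changed)} core number(s) changed out of {len(cand)} candidate(s)")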

  1. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high-dimensional space. These methods often share a first step that defines the neighbors of every data point by building a connected neighborhood graph, so that all data points can be embedded in a single coordinate system. In many applications these methods are required to work incrementally for dimensionality reduction. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pairs shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low-dimensional configurations of high-dimensional data under various data distributions.

  2. New and incremental FDA black box warnings from 2008 to 2015.

    PubMed

    Solotke, Michael T; Dhruva, Sanket S; Downing, Nicholas S; Shah, Nilay D; Ross, Joseph S

    2018-02-01

    The boxed warning (also known as 'black box warning [BBW]') is one of the strongest drug safety actions that the U.S. Food & Drug Administration (FDA) can implement, and often warns of serious risks. The objective of this study was to comprehensively characterize BBWs issued for drugs after FDA approval. We identified all post-marketing BBWs from January 2008 through June 2015 listed on FDA's MedWatch and Drug Safety Communications websites. We used each drug's prescribing information to classify its BBW as new, major update to a preexisting BBW, or minor update. We then characterized these BBWs with respect to pre-specified BBW-specific and drug-specific features. There were 111 BBWs issued to drugs on the US market, of which 29% (n = 32) were new BBWs, 32% (n = 35) were major updates, and 40% (n = 44) were minor updates. New BBWs and major updates were most commonly issued for death (51%) and cardiovascular risk (27%). The new BBWs and major updates impacted 200 drug formulations over the study period, of which 64% were expected to be used chronically and 58% had available alternatives without a BBW. New BBWs and incremental updates to existing BBWs are frequently added to drug labels after regulatory approval.

  3. An ocean data assimilation system and reanalysis of the World Ocean hydrophysical fields

    NASA Astrophysics Data System (ADS)

    Zelenko, A. A.; Vil'fand, R. M.; Resnyanskii, Yu. D.; Strukov, B. S.; Tsyrulnikov, M. D.; Svirenko, P. I.

    2016-07-01

    A new version of the ocean data assimilation system (ODAS) developed at the Hydrometcentre of Russia is presented. The assimilation is performed following the sequential scheme analysis-forecast-analysis. The main components of the ODAS are procedures for operational observation data processing, a variational analysis scheme, and an ocean general circulation model used to estimate the first guess fields involved in the analysis. In situ observations of temperature and salinity in the upper 1400-m ocean layer obtained from various observational platforms are used as input data. In the new ODAS version, the horizontal resolution of the assimilating model and of the output products is increased, the previous 2D-Var analysis scheme is replaced by a more general 3D-Var scheme, and a more flexible incremental analysis updating procedure is introduced to correct the model calculations. A reanalysis of the main World Ocean hydrophysical fields over the 2005-2015 period has been performed using the updated ODAS. The reanalysis results are compared with data from independent sources.

  4. Guidance system operations plan for manned LM earth orbital and lunar missions using program luminary 1E. Section 2: Data links

    NASA Technical Reports Server (NTRS)

    Hamilton, M. H.

    1972-01-01

    Data links for the guidance system of manned lunar module orbital and lunar missions are presented. Subjects discussed are: (1) digital uplink to lunar module, (2) lunar module liftoff time increment, (3) lunar module contiguous block update, (4) lunar module scatter update, (5) lunar module digital downlink, and (6) absolute addresses for update program.

  5. Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7

    EPA Science Inventory

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...

  6. 2005 Tri-Service Infrastructure Systems Conference and Exhibition. Volume 9, Tracks 9-11

    DTIC Science & Technology

    2005-08-04

    Walls ETL 1110-2-563, by John D. Clarkson and Robert C. Patev Belleville Locks & Dam Barge Accident on 6 Jan 05, by John Clarkson Portugues Dam Project...Update, by Alberto Gonzalez, Jim Mangold and Dave Dollar Portugues Dam: RCC Materials Investigation, by Jim Hinds Nonlinear Incremental Thermal Stress...Strain Analysis Portugues Dam, by David Dollar, Ahmed Nisar, Paul Jacob and Charles Logie Seismic Isolation of Mission-Critical Infrastructure to

  7. Sustained mahogany (Swietenia macrophylla) plantation heartwood increment.

    Treesearch

    Frank H. Wadsworth; Edgardo. Gonzalez

    2008-01-01

    In a search for an increment-based rotation for plantation mahogany(Swietenia macrophylla King), heartwood volume per tree was regressed on DBH (trunk diameter outside bark at 1.4 m above the ground) and merchantable height measurements. We updated a previous study [Wadsworth, F.H., González González, E., Figuera Colón, J.C., Lugo P...

  8. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng

    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate (a) the feasibility of incremental pattern mining by walking through each component of Percolator, (b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and (c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two use cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.

  9. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.

  10. 2005 Tri-Service Infrastructure Systems Conference and Exhibition. Volume 11, Tracks 13 and 14

    DTIC Science & Technology

    2005-08-04

    Walls ETL 1110-2-563, by John D. Clarkson and Robert C. Patev Belleville Locks & Dam Barge Accident on 6 Jan 05, by John Clarkson Portugues Dam Project...Update, by Alberto Gonzalez, Jim Mangold and Dave Dollar Portugues Dam: RCC Materials Investigation, by Jim Hinds Nonlinear Incremental Thermal Stress...Strain Analysis Portugues Dam, by David Dollar, Ahmed Nisar, Paul Jacob and Charles Logie Seismic Isolation of Mission-Critical Infrastructure to

  11. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    NASA Astrophysics Data System (ADS)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensor and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from INS and GPS at different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the number of filter states and simplify the propagation processes. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  12. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    PubMed

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  13. What is your neural function, visual narrative conjunction? Grammar, meaning, and fluency in sequential image processing.

    PubMed

    Cohn, Neil; Kutas, Marta

    2017-01-01

    Visual narratives sometimes depict successive images with different characters in the same physical space; corpus analysis has revealed that this occurs more often in Japanese manga than American comics. We used event-related brain potentials to determine whether comprehension of "visual narrative conjunctions" invokes not only incremental mental updating as traditionally assumed, but also, as we propose, "grammatical" combinatoric processing. We thus crossed (non)/conjunction sequences with character (in)/congruity. Conjunctions elicited a larger anterior negativity (300-500 ms) than nonconjunctions, regardless of congruity, implicating "grammatical" processes. Conjunction and incongruity both elicited larger P600s (500-700 ms), indexing updating. Both conjunction effects were modulated by participants' frequency of reading manga while growing up. Greater anterior negativity in frequent manga readers suggests more reliance on combinatoric processing; larger P600 effects in infrequent manga readers suggest more resources devoted to mental updating. As in language comprehension, it seems that processing conjunctions in visual narratives is not just mental updating but also partly grammatical, conditioned by comic readers' experience with specific visual narrative structures.

  14. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.

  15. Review of USACE Institutional Information Related to Evaluation of Incremental Changes in Water Resources Planning

    DTIC Science & Technology

    2011-03-01

    The Corps will deliver a more holistic approach to solving water resources chal- lenges that effectively considers the broad variety of economic ...scales, and standards for a balanced evaluation of economic , social, and environmental factors, should be updated and expanded to a level of detail...comparable to cur- rent standards for traditional benefit-cost analysis of economic objec- tives of a project” (pp 5–6). • “The Corps should ensure that

  16. KSC 50-MHz Doppler Radar Wind Profiler (DRWP) Operational Acceptance Test (OAT) Report

    NASA Technical Reports Server (NTRS)

    Barbre, Robert E.

    2015-01-01

    This report documents analysis results of the Kennedy Space Center updated 50-MHz Doppler Radar Wind Profiler (DRWP) Operational Acceptance Test (OAT). The test was designed to demonstrate that the new DRWP operates in a similar manner to the previous DRWP for use as a situational awareness asset for mission operations at the Eastern Range, identifying rapid changes in the wind environment that weather balloons cannot depict. Data examination and two analyses showed that the updated DRWP meets the specifications in the OAT test plan and performs at least as well as the previous DRWP. Data examination verified that the DRWP provides complete profiles every five minutes from 1.8 to 19.5 km in vertical increments of 150 m. Analysis of 5,426 wind component reports from 49 concurrent DRWP and balloon profiles showed root mean square (RMS) wind component differences of around 2.0 m/s. The DRWP's effective vertical resolution (EVR) was found to be 300 m for both the westerly and southerly wind components, which is the best EVR possible given the DRWP's vertical sampling interval. A third analysis quantified the sensitivity to rejecting data that do not have adequate signal by assessing the number of first-guess propagations at each altitude. This report documents the data, quality control procedures, methodology, and results of each analysis. It also shows, with supporting rationale, that the updated DRWP produced results at least as good as those of the previous DRWP. The report recommends acceptance of the updated DRWP for situational awareness usage, as per the OAT's intent.

  17. Coffee and caffeine intake and breast cancer risk: an updated dose-response meta-analysis of 37 published studies.

    PubMed

    Jiang, Wenjie; Wu, Yili; Jiang, Xiubo

    2013-06-01

    We conducted an updated meta-analysis to summarize the evidence from published studies regarding the association of coffee and caffeine intake with breast cancer risk. Pertinent studies were identified by a search of PubMed and by reviewing the reference lists of retrieved articles. The fixed or random effect model was used based on heterogeneity test. The dose-response relationship was assessed by restricted cubic spline model and multivariate random-effect meta-regression. 37 published articles, involving 59,018 breast cancer cases and 966,263 participants, were included in the meta-analysis. No significant association was found between breast cancer risk and coffee (RR=0.97, P=0.09), decaffeinated coffee (RR=0.98, P=0.55) and caffeine (RR=0.99, P=0.73), respectively. And the association was still not significant when combining coffee and caffeine (coffee/caffeine) (RR=0.97, P=0.09). However, an inverse association of coffee/caffeine with breast cancer risk was found for postmenopausal women (RR=0.94, P=0.02), and a strong and significant association of coffee with breast cancer risk was found for BRCA1 mutation carriers (RR=0.69, P<0.01). A linear dose-response relationship was found for breast cancer risk with coffee and caffeine, and the risk of breast cancer decreased by 2% (P=0.05) for every 2 cups/day increment in coffee intake, and 1% (P=0.52) for every 200mg/day increment in caffeine intake, respectively. Findings from this meta-analysis suggested that coffee/caffeine might be weakly associated with breast cancer risk for postmenopausal women, and the association for BRCA1 mutation carriers deserves further investigation. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.

    NASA Astrophysics Data System (ADS)

    Sasaki, Hironori

    This dissertation describes the analysis of the photorefractive crystal dynamics and its application for opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. Incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over the scheduled recording. Novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings by accessing the memory. Gratings are circulated through a memory feed back loop based on the incremental recording dynamics and demonstrate robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. Module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of the feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of the Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.

  19. Context and Content Aware Routing of Managed Information Objects

    DTIC Science & Technology

    2014-05-01

    datatype . Siena, however, does not support incremental updates (i.e., subscription posting and deletion) and so updates must be done in batch mode...Although the present implementation of PUBSUB does not support the string datatype , its architecture is sufficiently versatile to accommodate this... datatype with the inclusion of additional data structures as de- scribed in Section 3. 3. PUBSUB Section 3.1 describes how PUBSUB organizes its database of

  20. Evaluation of Potential LSST Spatial Indexing Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolaev, S; Abdulla, G; Matzke, R

    2006-10-13

    The LSST requirement for producing alerts in near real-time, and the fact that generating an alert depends on knowing the history of light variations for a given sky position, both imply that the clustering information for all detections must be available at any time during the survey. Therefore, any data structure describing the clustering of detections in LSST needs to be continuously updated, even as new detections are arriving from the pipeline. We call this use case ''incremental clustering'', to reflect this continuous updating of clustering information. This document describes the evaluation results for several potential LSST incremental clustering strategies, using: (1) a Neighbors table and zone optimization to store spatial clusters (a.k.a. Jim Gray's, or SDSS, algorithm); (2) the MySQL built-in R-tree implementation; (3) an external spatial index library which supports a query interface.

  1. MBGD update 2013: the microbial genome database for exploring the diversity of microbial world.

    PubMed

    Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu

    2013-01-01

    The microbial genome database for comparative analysis (MBGD, available at http://mbgd.genome.ad.jp/) is a platform for microbial genome comparison based on orthology analysis. As its unique feature, MBGD allows users to conduct orthology analysis among any specified set of organisms; this flexibility allows MBGD to adapt to a variety of microbial genomic studies. Reflecting the huge diversity of the microbial world, the number of microbial genome projects has now reached several thousand. To efficiently explore the diversity of the entire body of microbial genomic data, MBGD now provides summary pages for pre-calculated ortholog tables among various taxonomic groups. For some closely related taxa, MBGD also provides conserved synteny information (core genome alignment) pre-calculated using the CoreAligner program. In addition, an efficient incremental updating procedure can create an extended ortholog table by adding genomes to the default ortholog table generated from the representative set of genomes. Combined with the ability to dynamically calculate orthology for any specified set of organisms, MBGD is an efficient and flexible tool for exploring microbial genome diversity.

  2. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, dynamic models whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and is well suited for parameter identification without using the open circuit voltage.
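
    As a hedged sketch of the recursive identification step (plain recursive least squares with forgetting, not the paper's bias-corrected CRLS or the I-ARX formulation; the model order and coefficients are assumed), a first-order ARX battery model v_k = a*v_{k-1} + b0*i_k + b1*i_{k-1} + c can be identified from streaming current/voltage samples as follows:

      import numpy as np

      def rls_identify(current, voltage, lam=0.995):
          """Recursive least squares for v_k = a*v_{k-1} + b0*i_k + b1*i_{k-1} + c."""
          theta = np.zeros(4)                  # [a, b0, b1, c]
          P = np.eye(4) * 1e3                  # large initial covariance
          for k in range(1, len(voltage)):
              phi = np.array([voltage[k - 1], current[k], current[k - 1], 1.0])
              err = voltage[k] - phi @ theta                   # one-step prediction error
              gain = P @ phi / (lam + phi @ P @ phi)
              theta = theta + gain * err                       # parameter update
              P = (P - np.outer(gain, phi) @ P) / lam          # covariance update with forgetting
          return theta

      # Synthetic data from a known ARX system, used to check the identification
      rng = np.random.default_rng(7)
      i = rng.standard_normal(2000)                            # excitation current (PRBS-like signal)
      v = np.zeros_like(i)
      for k in range(1, len(i)):
          v[k] = 0.95 * v[k - 1] + 0.02 * i[k] + 0.015 * i[k - 1] + 0.1 + 0.001 * rng.standard_normal()

      print(np.round(rls_identify(i, v), 3))                   # close to [0.95, 0.02, 0.015, 0.1]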

  3. An empiric estimate of the value of life: updating the renal dialysis cost-effectiveness standard.

    PubMed

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2009-01-01

    Proposals to make decisions about coverage of new technology by comparing the technology's incremental cost-effectiveness with the traditional benchmark of dialysis imply that the incremental cost-effectiveness ratio of dialysis is seen as a proxy for the value of a statistical year of life. The frequently used ratio for dialysis has, however, not been updated to reflect more recently available data on dialysis. We developed a computer simulation model for the end-stage renal disease population and compared the cost, life expectancy, and quality-adjusted life expectancy of current dialysis practice relative to three less costly alternatives and to no dialysis. We estimated incremental cost-effectiveness ratios for these alternatives relative to the next least costly alternative and to no dialysis, and analyzed the population distribution of the ratios. Model parameters and costs were estimated using data from the Medicare population and a large integrated health-care delivery system between 1996 and 2003. The sensitivity of results to model assumptions was tested using 38 scenarios of one-way sensitivity analysis, in which parameters informing the cost, utility, mortality, morbidity, and other components of the model were perturbed by +/-50%. The incremental cost-effectiveness ratio of current dialysis practice relative to the next least costly alternative is on average $129,090 per quality-adjusted life-year (QALY) ($61,294 per year), but its distribution within the population is wide; the interquartile range is $71,890 per QALY, while the 1st and 99th percentiles are $65,496 and $488,360 per QALY, respectively. Higher incremental cost-effectiveness ratios were associated with older age and more comorbid conditions. Sensitivity to model parameters was comparatively small, with most scenarios leading to a change of less than 10% in the ratio. The value of a statistical year of life implied by dialysis practice currently averages $129,090 per QALY ($61,294 per year), but is distributed widely within the dialysis population. The spread suggests that coverage decisions using dialysis as the benchmark may need to incorporate percentile values (which are higher than the average) to be consistent with the Rawlsian principles of justice, preserving the rights and interests of society's most vulnerable patient groups.

  4. Incremental Costs and Cost Effectiveness of Intensive Treatment in Individuals with Type 2 Diabetes Detected by Screening in the ADDITION-UK Trial: An Update with Empirical Trial-Based Cost Data.

    PubMed

    Laxy, Michael; Wilson, Edward C F; Boothby, Clare E; Griffin, Simon J

    2017-12-01

    There is uncertainty about the cost effectiveness of early intensive treatment versus routine care in individuals with type 2 diabetes detected by screening. To derive a trial-informed estimate of the incremental costs of intensive treatment as delivered in the Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen-Detected Diabetes in Primary Care-Europe (ADDITION) trial and to revisit the long-term cost-effectiveness analysis from the perspective of the UK National Health Service. We analyzed the electronic primary care records of a subsample of the ADDITION-Cambridge trial cohort (n = 173). Unit costs of used primary care services were taken from the published literature. Incremental annual costs of intensive treatment versus routine care in years 1 to 5 after diagnosis were calculated using multilevel generalized linear models. We revisited the long-term cost-utility analyses for the ADDITION-UK trial cohort and reported results for ADDITION-Cambridge using the UK Prospective Diabetes Study Outcomes Model and the trial-informed cost estimates according to a previously developed evaluation framework. Incremental annual costs of intensive treatment over years 1 to 5 averaged £29.10 (standard error = £33.00) for consultations with general practitioners and nurses and £54.60 (standard error = £28.50) for metabolic and cardioprotective medication. For ADDITION-UK, over the 10-, 20-, and 30-year time horizon, adjusted incremental quality-adjusted life-years (QALYs) were 0.014, 0.043, and 0.048, and adjusted incremental costs were £1,021, £1,217, and £1,311, resulting in incremental cost-effectiveness ratios of £71,232/QALY, £28,444/QALY, and £27,549/QALY, respectively. Respective incremental cost-effectiveness ratios for ADDITION-Cambridge were slightly higher. The incremental costs of intensive treatment as delivered in the ADDITION-Cambridge trial were lower than expected. Given UK willingness-to-pay thresholds in patients with screen-detected diabetes, intensive treatment is of borderline cost effectiveness over a time horizon of 20 years and more. Copyright © 2017. Published by Elsevier Inc.

  5. Analysis and forecast experiments incorporating satellite soundings and cloud and water vapor drift wind information

    NASA Technical Reports Server (NTRS)

    Goodman, Brian M.; Diak, George R.; Mills, Graham A.

    1986-01-01

    A system for assimilating conventional meteorological data and satellite-derived data in order to produce four-dimensional gridded data sets of the primary atmospheric variables used for updating limited area forecast models is described. The basic principles of a data assimilation scheme as proposed by Lorenc (1984) are discussed. The design of the system and its incremental assimilation cycles are schematically presented. The assimilation system was tested using radiosonde, buoy, VAS temperature, dew point, gradient wind data, cloud drift, and water vapor motion data. The rms vector errors for the data are analyzed.

  6. Cost-effectiveness analysis of quadrivalent influenza vaccination in at-risk adults and the elderly: an updated analysis in the U.K.

    PubMed

    Meier, G; Gregg, M; Poulsen Nautrup, B

    2015-01-01

    To update an earlier evaluation estimating the cost-effectiveness of quadrivalent influenza vaccination (QIV) compared with trivalent influenza vaccination (TIV) in the adult population currently recommended for influenza vaccination in the UK (all people aged ≥65 years and people aged 18-64 years with clinical risk conditions). This analysis takes into account updated vaccine prices, reference costs, influenza strain circulation, and burden of illness data. A lifetime, multi-cohort, static Markov model was constructed with seven age groups. The model was run in 1-year cycles for a lifetime, i.e., until the youngest patients at entry reached the age of 100 years. The base-case analysis was from the perspective of the UK National Health Service, with a secondary analysis from the societal perspective. Costs and benefits were discounted at 3.5%. Herd effects were not included. Inputs were derived from systematic reviews, peer-reviewed articles, and government publications and databases. One-way and probabilistic sensitivity analyses were performed. In the base-case, QIV would be expected to avoid 1,413,392 influenza cases, 41,780 hospitalizations, and 19,906 deaths over the lifetime horizon, compared with TIV. The estimated incremental cost-effectiveness ratio (ICER) was £14,645 per quality-adjusted life-year (QALY) gained. From the societal perspective, the estimated ICER was £13,497/QALY. A strategy of vaccinating only people aged ≥65 years had an estimated ICER of £11,998/QALY. Sensitivity analysis indicated that only two parameters, seasonal variation in influenza B matching and influenza A circulation, had a substantial effect on the ICER. QIV would be likely to be cost-effective compared with TIV in 68% of simulations with a willingness-to-pay threshold of <£20,000/QALY and 87% with a willingness-to-pay threshold of <£30,000/QALY. In this updated analysis, QIV was estimated to be cost-effective compared with TIV in the U.K.

  7. Evolution of cooperation driven by incremental learning

    NASA Astrophysics Data System (ADS)

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. Alternative formulations of strategies and of their revision processes, which explore how strategies are actually adopted and spread within the interaction network, therefore need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social learning). More precisely, we propose a continuous version of the strategy update rule by introducing the willingness to cooperate W, to better capture the flexibility of decision-making behavior. Importantly, the newly gained knowledge, including self-learning and social learning, is weighted by the parameter ω, establishing a strategy update rule that involves an innovative element. Moreover, we quantify the macroscopic features of the emerging patterns using six cluster characteristics to inspect the underlying mechanisms of the evolutionary process. To further support our results, we examine the time evolution of these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.
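
    The sketch below is one illustrative reading of such an update rule: a continuous willingness to cooperate W is refreshed from a self-learning signal and a social-learning signal weighted by ω (here `omega`), then used to choose an action probabilistically. The functional form and parameter values are assumptions for illustration, not taken from the paper.

```python
import random

# Illustrative sketch only: one possible reading of an incremental strategy update
# in which a continuous willingness to cooperate W is refreshed from a self-learning
# term and a social-learning term, weighted by omega. The specific functional form
# is an assumption, not taken from the paper.

def update_willingness(W, self_signal, social_signal, omega, retention=0.8):
    """Blend retained knowledge with newly gained (self + social) knowledge."""
    new_knowledge = omega * self_signal + (1.0 - omega) * social_signal
    W = retention * W + (1.0 - retention) * new_knowledge
    return min(1.0, max(0.0, W))          # keep W in [0, 1]

def act(W):
    """Cooperate with probability W."""
    return "C" if random.random() < W else "D"

W = 0.5
for step in range(5):
    W = update_willingness(W, self_signal=0.7, social_signal=0.4, omega=0.6)
    print(step, round(W, 3), act(W))
```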

  8. Updates to Enhanced Geothermal System Resource Potential Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 degrees C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.

  9. Update to Enhanced Geothermal System Resource Potential Estimate: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    2016-10-01

    The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 degrees C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.

  10. Interactive Machine Learning at Scale with CHISSL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Grace, Emily A.; Volkova, Svitlana

    We demonstrate CHISSL, a scalable client-server system for real-time interactive machine learning. Our system is capable of incorporating user feedback incrementally and immediately without a structured or pre-defined prediction task. Computation is partitioned between a lightweight web client and a heavyweight server. The server relies on representation learning and agglomerative clustering to learn a dendrogram, a hierarchical approximation of a representation space. The client uses only this dendrogram to incorporate user feedback into the model via transduction. Distances and predictions for each unlabeled instance are updated incrementally and deterministically, with O(n) space and time complexity. Our algorithm is implemented in a functional prototype, designed to be easy to use by non-experts. The prototype organizes the large amounts of data into recommendations. This allows the user to interact with actual instances by dragging and dropping to provide feedback in an intuitive manner. We applied CHISSL to several domains including cyber, social media, and geo-temporal analysis.

  11. Investigation of the Dominant Factors Influencing the ERA15 Temperature Increments at the Subtropical and Temperate Belts with a Focus over the Eastern Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Alpert, Pinhas; Hirsch-Eshkol, Tali; Baharad, Anat

    2015-04-01

    A stepwise multi-regression-based statistical analysis was employed to prioritize the influence of several factors, both anthropogenic and natural, on the ERA15 temperature increments. The five factors defined as predictors are: topography, aerosol index (TOMS-AI), and atmospheric vertical velocity, along with two anthropogenic factors, population density and land use changes (LUCI and NDVI trends). The seismic hazard assessment factor was also chosen as a "dummy variable" for validity. Special focus was given to the land use change factor, which was based on two different data sets: the HITE historical land use/land cover data and NDVI trends during 1982-1991. The Incremental Analysis Update of temperature (IAU(T)), the predicted variable, was obtained from the ERA15 (1979-1993) reanalysis. The research consists of both spatial and vertical analyses as well as potential synergies of the selected variables. The spatial geographic analysis is divided into three categories: (a) coarse region, (b) sub-region analysis, and (c) a "small cell" of 4°x4° analysis. It is shown that three factors, topography, TOMS-AI, and NDVI, are statistically significant (at the p<0.05 level) as the most effective predictors of IAU(T), especially at the 700 mb level during March-June. In contrast, the 850 mb level presents the weakest contribution to IAU(T), probably due to the contradictory influence of the various variables at this level. The land use factor, as expressed by the NDVI trends, shows a very clear decrease of influence with height and is one of the most influential factors over the Eastern Mediterranean, explaining up to 20% of the temperature increments in January at 700 mb. Moreover, its influence is significant (p<0.05) through all research stages and the different combinations of the multiple regression runs, a major finding not quantified earlier. Reference: T. Hirsch-Eshkol, A. Baharad and P. Alpert, "Investigation of the dominant factors influencing the ERA15 temperature increments at the subtropical and temperate belts with a focus over the Eastern Mediterranean region", Land, 3, 1015-1036; doi:10.3390/land3031015, 2014.

  12. Maintaining Atmospheric Mass and Water Balance Within Reanalysis

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.; Suarez, Max; Todling, Ricardo

    2015-01-01

    This report describes the modifications implemented into the Goddard Earth Observing System Version-5 (GEOS-5) Atmospheric Data Assimilation System (ADAS) to maintain global conservation of dry atmospheric mass as well as to preserve the model balance of globally integrated precipitation and surface evaporation during reanalysis. Section 1 begins with a review of these global quantities from four current reanalysis efforts. Section 2 introduces the modifications necessary to preserve these constraints within the atmospheric general circulation model (AGCM), the Gridpoint Statistical Interpolation (GSI) analysis procedure, and the Incremental Analysis Update (IAU) algorithm. Section 3 presents experiments quantifying the impact of the new procedure. Section 4 shows preliminary results from its use within the GMAO MERRA-2 Reanalysis project. Section 5 concludes with a summary.

  13. GAMBIT: the global and modular beyond-the-standard-model inference tool. Addendum for GAMBIT 1.1: Mathematica backends, SUSYHD interface and updated likelihoods

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian

    2018-02-01

    In Ref. (GAMBIT Collaboration: Athron et al., Eur. Phys. J. C. arXiv:1705.07908, 2017) we introduced the global-fitting framework GAMBIT. In this addendum, we describe a new minor version increment of this package. GAMBIT 1.1 includes full support for Mathematica backends, which we describe in some detail here. As an example, we backend SUSYHD (Vega and Villadoro, JHEP 07:159, 2015), which calculates the mass of the Higgs boson in the MSSM from effective field theory. We also describe updated likelihoods in PrecisionBit and DarkBit, and updated decay data included in DecayBit.

  14. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant, in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5- and 10-year depreciation), annual costs for maintenance and software updates, as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 minutes, and a 10-year depreciation of the investment costs, the incremental expenses amount to 300-395 EUR depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view, a volume of more than 50 procedures per year appears favourable. The cost-effectiveness could be estimated if long-term results show a reduction of revisions or a better clinical outcome.

  15. Product Quality Modelling Based on Incremental Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, J.; Zhang, W.; Qin, B.; Shi, W.

    2012-05-01

    Incremental support vector machine (ISVM) learning is a new method developed in recent years on the foundations of statistical learning theory. It is suitable for sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public data sets and one field data set of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method improves prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
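
    A rough sketch of this style of update is given below using scikit-learn: previously found support vectors are retained, new samples far from the current decision hyperplane (a crude stand-in for the KKT/margin-vector screening) are discarded, and the classifier is retrained on the remainder. The threshold and data are illustrative assumptions, not the paper's MISVM.

```python
# Rough sketch (not the paper's exact MISVM): retain previous support vectors,
# keep only new samples lying close to the current decision hyperplane, and
# retrain on the union. Threshold and data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def incremental_update(clf, X_old_sv, y_old_sv, X_new, y_new, max_dist=1.5):
    dist = np.abs(clf.decision_function(X_new))  # distance-like score to the hyperplane
    keep = dist <= max_dist                      # discard samples far from the margin
    X_train = np.vstack([X_old_sv, X_new[keep]])
    y_train = np.concatenate([y_old_sv, y_new[keep]])
    new_clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
    return new_clf, X_train[new_clf.support_], y_train[new_clf.support_]

rng = np.random.default_rng(0)
X0 = rng.normal(size=(200, 2)); y0 = (X0[:, 0] + X0[:, 1] > 0).astype(int)
clf = SVC(kernel="rbf", C=1.0).fit(X0, y0)
sv_X, sv_y = X0[clf.support_], y0[clf.support_]

X1 = rng.normal(size=(100, 2)); y1 = (X1[:, 0] + X1[:, 1] > 0).astype(int)
clf, sv_X, sv_y = incremental_update(clf, sv_X, sv_y, X1, y1)
print("support vectors retained:", len(sv_X))
```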

  16. Technical report series on global modeling and data assimilation. Volume 4: Documentation of the Goddard Earth Observing System (GEOS) data assimilation system, version 1

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Pfaendtner, James; Bloom, Stephen; Lamich, David; Seablom, Michael; Sienkiewicz, Meta; Stobie, James; Dasilva, Arlindo

    1995-01-01

    This report describes the analysis component of the Goddard Earth Observing System Data Assimilation System, Version 1 (GEOS-1 DAS). The general features of the data assimilation system are outlined, followed by a thorough description of the statistical interpolation algorithm, including the specification of error covariances and quality control of observations. We conclude with a discussion of the current status of development of the GEOS data assimilation system. The main components of GEOS-1 DAS are an atmospheric general circulation model and an Optimal Interpolation algorithm. The system is cycled using the Incremental Analysis Update (IAU) technique, in which analysis increments are introduced as time-independent forcing terms in a forecast model integration. The system is capable of producing dynamically balanced states without the explicit use of initialization, as well as a time-continuous representation of non-observables such as precipitation and radiational fluxes. This version of the data assimilation system was used in the five-year reanalysis project completed in April 1994 by Goddard's Data Assimilation Office (DAO). Data from this reanalysis are available from the Goddard Distributed Active Archive Center (DAAC), which is part of NASA's Earth Observing System Data and Information System (EOSDIS). For information on how to obtain these data sets, contact the Goddard DAAC at (301) 286-3209, email daac@gsfc.nasa.gov.
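
    A toy illustration of the IAU idea is sketched below for a one-variable model: an intermittent update adds the full analysis increment at the start of the window, whereas the IAU converts the same increment into a constant forcing spread over the window. The model, window length, and increment are illustrative assumptions, not GEOS code.

```python
# Toy illustration (not GEOS code) of the IAU technique described above: instead of
# adding the whole analysis increment at one time step, the increment is applied as
# a constant forcing spread over the assimilation window. The simple decay model,
# window length, and increment value are assumptions for illustration.
import numpy as np

def step(x, dt, forcing=0.0, decay=0.1):
    """One explicit step of a trivial model dx/dt = -decay*x + forcing."""
    return x + dt * (-decay * x + forcing)

dt, nsteps = 0.1, 60          # a 6-hour window split into 60 steps (illustrative)
increment = 1.0               # analysis minus background, from some analysis step

# Intermittent update: add the full increment at the window start, then integrate.
x_int = 2.0 + increment
for _ in range(nsteps):
    x_int = step(x_int, dt)

# IAU: integrate with the increment converted to a constant forcing over the window.
x_iau = 2.0
iau_forcing = increment / (nsteps * dt)
for _ in range(nsteps):
    x_iau = step(x_iau, dt, forcing=iau_forcing)

print("intermittent:", round(x_int, 4), " IAU:", round(x_iau, 4))
```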

  17. SMOS brightness temperature assimilation into the Community Land Model

    NASA Astrophysics Data System (ADS)

    Rains, Dominik; Han, Xujun; Lievens, Hans; Montzka, Carsten; Verhoest, Niko E. C.

    2017-11-01

    SMOS (Soil Moisture and Ocean Salinity mission) brightness temperatures at a single incidence angle are assimilated into the Community Land Model (CLM) across Australia to improve soil moisture simulations. To this end, the data assimilation system DasPy is coupled to the local ensemble transform Kalman filter (LETKF) as well as to the Community Microwave Emission Model (CMEM). Brightness temperature climatologies are precomputed to enable the assimilation of brightness temperature anomalies, making use of 6 years of SMOS data (2010-2015). Mean correlation R with in situ measurements increases moderately from 0.61 to 0.68 (11 %) for upper soil layers if the root zone is included in the updates. A reduced improvement of 5 % is achieved if the assimilation is restricted to the upper soil layers. Root-zone simulations improve by 7 % when updating both the top layers and root zone, and by 4 % when only updating the top layers. Mean increments and increment standard deviations are compared for the experiments. The long-term assimilation impact is analysed by looking at a set of quantiles computed for soil moisture at each grid cell. Within hydrological monitoring systems, extreme dry or wet conditions are often defined via their relative occurrence, adding great importance to assimilation-induced quantile changes. Although the record is still limited now, longer L-band radiometer time series will become available, and model output improved by assimilating such data will become more usable for extreme event statistics.

  18. 77 FR 73456 - Update to the TR-12 Fuel Related Rate Adjustment Policy (SDDC Fuel Surcharge Policy)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-10

    ... Time Only (OTO) personal property movements, regardless of mode. SDDC will not pay a FRA for any type... less-than-truckload (LTL) and Personal Property (PP) shipments. The percentage of line-haul increment... property movements within the United States. This policy provides the transportation industry, including...

  19. Self-adaptive Solution Strategies

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1984-01-01

    The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type was overviewed. Work was introduced on alternative formulations that lead to improved algorithms avoiding the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.

  20. Pseudo-updated constrained solution algorithm for nonlinear heat conduction

    NASA Technical Reports Server (NTRS)

    Tovichakchaikul, S.; Padovan, J.

    1983-01-01

    This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.

  1. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
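
    The core of the log-Euclidean construction is that a symmetric positive definite covariance matrix can be mapped by the matrix logarithm into a vector space where ordinary linear, and hence incremental, subspace learning applies. The sketch below shows that mapping for a simple feature covariance; it is a simplification of the paper's pipeline, and the feature construction is an assumption.

```python
# Minimal sketch of the log-Euclidean mapping underlying the appearance model:
# an SPD covariance matrix of image features is mapped by the matrix logarithm
# into a vector space where linear (and incremental) subspace learning applies.
# The feature construction below is a simplification, not the paper's pipeline.
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features):
    """features: (n_pixels, d) array of per-pixel feature vectors -> (d, d) SPD matrix."""
    c = np.cov(features, rowvar=False)
    return c + 1e-6 * np.eye(c.shape[0])            # regularize to keep it positive definite

def log_euclidean_vector(cov):
    """Map an SPD matrix to a vector: matrix log, then stack the upper triangle."""
    L = logm(cov).real
    iu = np.triu_indices(L.shape[0])
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))  # off-diagonal weighting keeps norms consistent
    return L[iu] * w

rng = np.random.default_rng(1)
patch_features = rng.normal(size=(500, 5))           # e.g. intensity, gradients, positions
v = log_euclidean_vector(covariance_descriptor(patch_features))
print(v.shape)                                        # d*(d+1)/2 = 15 for d = 5
```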

  2. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition

    PubMed Central

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-01-01

    In the long term, a variety of interfering factors cause performance degradation in pattern recognition-based myoelectric control methods. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least square support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824

  3. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition.

    PubMed

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-06-13

    In the long term, a variety of interfering factors cause performance degradation in pattern recognition-based myoelectric control methods. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least square support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).

  4. Online blind source separation using incremental nonnegative matrix factorization with volume constraint.

    PubMed

    Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei

    2011-04-01

    Online blind source separation (BSS) is proposed to overcome the high computational cost problem that limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential to separate correlated sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for solving online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of a natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
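
    For orientation, the sketch below shows plain multiplicative NMF updates for V ≈ WH (Lee-Seung style), which is the kind of update rule the proposed incremental, volume-constrained NMF builds on; the volume constraint and per-sample incremental mode themselves are not reproduced here.

```python
# Hedged sketch: standard multiplicative NMF updates for V ~ W @ H, shown only to
# illustrate the style of update rule an incremental, volume-constrained NMF builds
# on. The paper's volume constraint and incremental mode are not reproduced here.
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing matrix
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 100)))   # nonnegative "mixtures"
W, H = nmf_multiplicative(V, rank=3)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))           # relative reconstruction error
```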

  5. An incremental knowledge assimilation system (IKAS) for mine detection

    NASA Astrophysics Data System (ADS)

    Porway, Jake; Raju, Chaitanya; Varadarajan, Karthik Mahesh; Nguyen, Hieu; Yadegar, Joseph

    2010-04-01

    In this paper we present an adaptive incremental learning system for underwater mine detection and classification that utilizes statistical models of seabed texture and an adaptive nearest-neighbor classifier to identify varied underwater targets in many different environments. The first stage of processing uses our Background Adaptive ANomaly detector (BAAN), which identifies statistically likely target regions using Gabor filter responses over the image. Using this information, BAAN classifies the background type and updates its detection using background-specific parameters. To perform classification, a Fully Adaptive Nearest Neighbor (FAAN) determines the best label for each detection. FAAN uses an extremely fast version of Nearest Neighbor to find the most likely label for the target. The classifier perpetually assimilates new and relevant information into its existing knowledge database in an incremental fashion, allowing improved classification accuracy and capturing concept drift in the target classes. Experiments show that the system achieves >90% classification accuracy on underwater mine detection tasks performed on synthesized datasets provided by the Office of Naval Research. We have also demonstrated that the system can incrementally improve its detection accuracy by constantly learning from new samples.
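
    The assimilation step can be pictured with the simple sketch below: a nearest-neighbour classifier whose labelled knowledge base grows only when a confirmed sample is sufficiently novel or currently misclassified. This is a generic illustration, not the BAAN/FAAN implementation; the threshold and labels are assumptions.

```python
# Generic sketch of incremental knowledge assimilation with a nearest-neighbour
# classifier: new confirmed samples enter the knowledge base only if they are novel
# or currently misclassified. Not the BAAN/FAAN implementation; values are assumed.
import numpy as np

class IncrementalNN:
    def __init__(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)

    def predict(self, x):
        d = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        return self.y[np.argmin(d)]

    def assimilate(self, x, label, novelty_thresh=0.5):
        """Add a confirmed sample only if it is sufficiently novel or misclassified."""
        d = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        if d.min() > novelty_thresh or self.predict(x) != label:
            self.X = np.vstack([self.X, x])
            self.y = np.append(self.y, label)

clf = IncrementalNN([[0.0, 0.0], [1.0, 1.0]], ["clutter", "mine"])
clf.assimilate([0.9, 0.8], "mine")        # new relevant detection enters the knowledge base
print(clf.predict([0.95, 0.9]))
```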

  6. Presenting Big Data in Google Earth with KML

    NASA Astrophysics Data System (ADS)

    Hagemark, B.

    2006-12-01

    KML 2.1 and Google Earth 4 provide support to enable streaming of very large datasets, with "smart" loading of data at multiple levels of resolution and incremental update of previously loaded data. This presentation demonstrates this technology for use with the Google Earth KML geometry and image primitives and shows some techniques and tools for creating this KML.

  7. Cargo Movement Operations System (CMOS). System Segment Specification, Updated, Increment II

    DTIC Science & Technology

    1990-05-02


  8. Implicit Theories of Intelligence and Academic Achievement: A Meta-Analytic Review

    PubMed Central

    Costa, Ana; Faria, Luísa

    2018-01-01

    The current study intended to model the link between implicit theories of intelligence (ITI) and students' academic achievement within a meta-analytic review procedure. To assess studies' effect sizes, Pearson's correlation coefficient (r) was used. The review of 46 studies (94 effect sizes) with 412,022 students presented a low-to-moderate association between ITI and students' academic achievement. The results indicated that incremental theorists are more likely to have higher grades in specific subjects (verbal and quantitative) and in overall achievement. Entity beliefs were positively associated with students' specific verbal and quantitative domains, but at a lower magnitude than incremental beliefs. Moreover, the moderator-effect analyses indicated that the link between ITI and students' achievement was not moderated by gender, but a moderate association was found for students in middle school. Additionally, ITI assessment based on the most recent versions of Dweck's scales, the use of specific academic scales instead of general ITI scales, and the use of the original measures rather than adapted versions strongly moderated the link between ITI and achievement. Moreover, students from Eastern continents (Asia and Oceania) reported a positive association between incremental beliefs and achievement, Europe displayed a positive link between entity beliefs and achievement, whereas North America presented negative correlations between entity perspectives and academic achievement. This meta-analysis updates the current evidence supporting the direct link between ITI and students' academic achievement and acknowledges specific effects that ITI could have on different academic outcomes. PMID:29922195

  9. The advantage of flexible neuronal tunings in neural network models for motor learning

    PubMed Central

    Marongelli, Ellisha N.; Thoroughman, Kurt A.

    2013-01-01

    Human motor adaptation to novel environments is often modeled by a basis function network that transforms desired movement properties into estimated forces. This network employs a layer of nodes that have fixed broad tunings that generalize across the input domain. Learning is achieved by updating the weights of these nodes in response to training experience. This conventional model is unable to account for rapid flexibility observed in human spatial generalization during motor adaptation. However, added plasticity in the widths of the basis function tunings can achieve this flexibility, and several neurophysiological experiments have revealed flexibility in tunings of sensorimotor neurons. We found a model, Locally Weighted Projection Regression (LWPR), which uniquely possesses the structure of a basis function network in which both the weights and tuning widths of the nodes are updated incrementally during adaptation. We presented this LWPR model with training functions of different spatial complexities and monitored incremental updates to receptive field widths. An inverse pattern of dependence of receptive field adaptation on experienced error became evident, underlying both a relationship between generalization and complexity, and a unique behavior in which generalization always narrows after a sudden switch in environmental complexity. These results implicate a model that is flexible in both basis function widths and weights, like LWPR, as a viable alternative model for human motor adaptation that can account for previously observed plasticity in spatial generalization. This theory can be tested by using the behaviors observed in our experiments as novel hypotheses in human studies. PMID:23888141

  10. Coffee and cancer risk: a summary overview.

    PubMed

    Alicandro, Gianfranco; Tavani, Alessandra; La Vecchia, Carlo

    2017-09-01

    We reviewed available evidence on coffee drinking and the risk of all cancers and selected cancers updated to May 2016. Coffee consumption is not associated with overall cancer risk. A meta-analysis reported a pooled relative risk (RR) for an increment of 1 cup of coffee/day of 1.00 [95% confidence interval (CI): 0.99-1.01] for all cancers. Coffee drinking is associated with a reduced risk of liver cancer. A meta-analysis of cohort studies found an RR for an increment of consumption of 1 cup/day of 0.85 (95% CI: 0.81-0.90) for liver cancer and a favorable effect on liver enzymes and cirrhosis. Another meta-analysis showed an inverse relation for endometrial cancer risk, with an RR of 0.92 (95% CI: 0.88-0.96) for an increment of 1 cup/day. A possible decreased risk was found in some studies for oral/pharyngeal cancer and for advanced prostate cancer. Although data are mixed, overall, there seems to be some favorable effect of coffee drinking on colorectal cancer in case-control studies, in the absence of a consistent relation in cohort studies. For bladder cancer, the results are not consistent; however, any possible direct association is not dose and duration related, and might depend on a residual confounding effect of smoking. A few studies suggest an increased risk of childhood leukemia after maternal coffee drinking during pregnancy, but data are limited and inconsistent. Although the results of studies are mixed, the overall evidence suggests no association of coffee intake with cancers of the stomach, pancreas, lung, breast, ovary, and prostate overall. Data are limited, with RR close to unity for other neoplasms, including those of the esophagus, small intestine, gallbladder and biliary tract, skin, kidney, brain, thyroid, as well as for soft tissue sarcoma and lymphohematopoietic cancer.

  11. Cargo Movement Operations System (CMOS). Updated Draft Software User’s Manual. Increment I

    DTIC Science & Technology

    1991-03-22


  12. Study of the influence of selected anisotropic parameter in the Barlat's model on the drawpiece shape

    NASA Astrophysics Data System (ADS)

    Kaldunski, Pawel; Kukielka, Leon; Patyk, Radoslaw; Kulakowska, Agnieszka; Bohdal, Lukasz; Chodor, Jaroslaw; Kukielka, Krzysztof

    2018-05-01

    In this paper, a numerical analysis and computer simulation of the deep drawing process is presented. An incremental model of the process in the updated Lagrangian formulation, taking into account geometrical and physical nonlinearity, has been developed using variational and finite element methods. Frederic Barlat's model, which takes into consideration the anisotropy of materials in three principal and six tangential directions, has been used. The application developed in the Ansys/LS-Dyna program allows a complex step-by-step analysis and prediction of the shape, dimensions, and stress and strain state of the drawpiece. The paper presents the influence of a selected anisotropic parameter in Barlat's model on the drawpiece shape, including height and sheet thickness, and on the maximum drawing force. The important factors determining the proper forming of the drawpiece, and the ways of determining them, have been described.

  13. Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio

    2013-01-01

    The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owner. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients including ARM and RSM (Remote Sensing Mast) update their related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
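
    A minimal sketch of such a frame tree is shown below: each frame stores its transform relative to a parent, and the transform between any two frames is obtained by composing transforms along the paths to the root. Frame names and values are illustrative assumptions, not the flight software's actual database or API.

```python
# Hedged sketch of the frame-tree idea described above: each frame stores the
# transform to its parent, and the transform between any two frames is obtained by
# composing transforms up to the root. Names and values are illustrative only.
import numpy as np

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# frame -> (parent, transform of frame expressed in parent)
tree = {
    "site":   (None, np.eye(4)),
    "rover":  ("site", translation(5.0, 2.0, 0.0)),
    "rsm":    ("rover", translation(0.0, 0.0, 1.5)),   # Remote Sensing Mast head
    "target": ("site", translation(8.0, 4.0, 0.2)),
}

def to_root(frame):
    """Transform taking coordinates in `frame` to the root frame."""
    T = np.eye(4)
    while tree[frame][0] is not None:
        parent, X = tree[frame]
        T = X @ T
        frame = parent
    return T

def transform(src, dst):
    """Transform taking coordinates in `src` to coordinates in `dst`."""
    return np.linalg.inv(to_root(dst)) @ to_root(src)

# Point the mast: target origin expressed in the RSM frame.
p_target_in_rsm = transform("target", "rsm") @ np.array([0.0, 0.0, 0.0, 1.0])
print(p_target_in_rsm[:3])
```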

  14. Updating representations of learned scenes.

    PubMed

    Finlay, Cory A; Motes, Michael A; Kozhevnikov, Maria

    2007-05-01

    Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0 degrees -360 degrees in 36 degrees increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.

  15. Interactive Scripting for Analysis and Visualization of Arbitrarily Large, Disparately Located Climate Data Ensembles Using a Progressive Runtime Server

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.

    2017-12-01

    Massive datasets are becoming more common due to increasingly detailed simulations and higher resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, that in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replications, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both docker and anaconda.

  16. Cargo Movement Operations System (CMOS). Software Requirements Specification (Applications CSCI) Increment 1, Update

    DTIC Science & Technology

    1990-05-31


  17. Integrated use of spatial and semantic relationships for extracting road networks from floating car data

    NASA Astrophysics Data System (ADS)

    Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue

    2012-10-01

    The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle, and their costs remain high. With GPS and wireless communication technologies maturing and their costs decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using floating car data from China's National Commercial Vehicle Monitoring Platform, and we present an incremental road network extraction method suitable for the platform's GPS data, which have a low sampling frequency and cover a large area. Based on both spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point and then merges it into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied to the updating of major roads in North China, and the experimental results reveal that it can accurately derive the geometric information of roads in various scenes.
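
    The classification step can be illustrated with the sketch below, which combines a spatial test (distance from the point to its candidate segment) with a simple semantic test on heading before deciding whether the point reinforces an existing segment or seeds a new one. The thresholds and data structures are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: classify a GPS trajectory point against a candidate road
# segment using a spatial test (distance to the segment) and a heading test, then
# either reinforce the segment or flag the point as evidence for a new road.
# Thresholds and structures are assumptions, not the paper's implementation.
import math

def point_segment_distance(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def classify(point, heading_deg, segment, max_dist=15.0, max_heading_diff=30.0):
    dist = point_segment_distance(point, segment["a"], segment["b"])
    heading_diff = abs((heading_deg - segment["heading_deg"] + 180.0) % 360.0 - 180.0)
    if dist <= max_dist and heading_diff <= max_heading_diff:
        return "matched"        # reinforce / refine the existing segment
    return "new_candidate"      # may seed a new road segment

segment = {"a": (0.0, 0.0), "b": (100.0, 0.0), "heading_deg": 90.0}
print(classify((50.0, 8.0), 85.0, segment))    # matched
print(classify((50.0, 40.0), 85.0, segment))   # new_candidate
```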

  18. Comparison of different assimilation schemes in an operational assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2016-04-01

    In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window. The values 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses on thermohaline variables and the current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.

  19. Cost-effectiveness of superficial femoral artery endovascular interventions in the UK and Germany: a modelling study

    PubMed Central

    Kearns, Benjamin C; Thomas, Steven M

    2017-01-01

    Objectives To assess the lifetime costs and cost-effectiveness of 5 endovascular interventions to treat superficial femoral arterial disease. Design A model-based health economic evaluation. An existing decision analytical model was used, with updated effectiveness data taken from the literature, and updated costs based on purchasing prices. Setting UK and German healthcare perspectives were considered. Participants Patients with intermittent claudication of the femoropopliteal arteries eligible for endovascular treatment. Methods UK and German healthcare perspectives were considered, as were different strategies for re-intervention. Interventions Percutaneous transluminal angioplasty (PTA) with bail-out bare metal stenting (assumed to represent the existing standard of care, and 4 alternatives: primary bare metal stents, drug-eluting stents, drug-eluting balloons (DEBs) and biomimetic stents). Primary outcome measures The incremental cost-effectiveness ratio between 2 treatments, defined as the incremental costs divided by the incremental quality-adjusted life years (QALYs). Results Use of a biomimetic stent, BioMimics 3D, was always estimated to dominate the other interventions, having lower lifetime costs and greater effectiveness, as measured by QALYs. Of the remaining interventions, DEBs were always the most effective, and PTA the least effective. There was uncertainty in the cost-effectiveness results, with key drivers being the costs and effectiveness of the biomimetic stent along with the costs of DEBs. Conclusions All 4 of the alternatives to PTA were more effective, with the biomimetic stent being the most cost-effective. As there was uncertainty in the results, and all of the interventions have different mechanisms of action, all 4 may be considered to be alternatives to PTA. PMID:28087551

  20. Experimental investigation of distinguishable and non-distinguishable grayscales applicable in active-matrix organic light-emitting diodes for quality engineering

    NASA Astrophysics Data System (ADS)

    Yang, Henglong; Chang, Wen-Cheng; Lin, Yu-Hsuan; Chen, Ming-Hong

    2017-08-01

    The distinguishable and non-distinguishable 6-bit (64) grayscales of green and red organic light-emitting diodes (OLEDs) were experimentally investigated using a high-sensitivity photometric instrument. The feasibility of combining an external detection system for quality engineering (QE) to compensate the grayscale loss based on preset grayscale tables was also investigated by SPICE simulation. The degradation loss of an OLED deeply affects image quality as grayscales become inaccurate. Distinguishable grayscales are defined as those whose brightness differences and corresponding current increments are differentiable by the instrument. OLED grayscales at 8-bit (256) or higher resolution may become non-distinguishable when current or voltage increments are of the same order as the noise level in the circuitry. Distinguishable grayscale tables for individual red, green, blue, and white colors can be experimentally established as a preset reference for QE, in which the degradation loss is compensated by the corresponding grayscale numbers shown in the preset table. The degradation loss of each OLED color is quantifiable by comparing voltage increments to those in the preset grayscale table if precise voltage increments are detectable during operation. The QE of AMOLED can be accomplished by applying updated grayscale tables. Our preliminary simulation result revealed that it is feasible to quantify degradation loss in terms of grayscale numbers by using external detector circuitry.
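
    The compensation idea can be sketched as a table lookup: the measured voltage increment is located in the preset reference table to estimate how many grayscale steps the aged pixel has lost, and the drive level is shifted accordingly. The numbers below are invented for illustration and are not measured OLED data.

```python
# Hedged sketch of table-based grayscale compensation: compare a measured voltage
# increment with a preset reference table, express the degradation as grayscale
# steps, and drive the pixel higher by that amount. Values are invented.
import numpy as np

# Preset reference table: voltage increment associated with each of 64 grayscale
# levels (monotone, illustrative values).
reference_voltage = np.linspace(0.010, 0.640, 64)

def degradation_steps(requested_level, measured_voltage):
    """Degradation in grayscale steps: requested level minus the reference level
    whose voltage increment matches the measured one."""
    equivalent_level = int(np.searchsorted(reference_voltage, measured_voltage))
    return max(requested_level - min(equivalent_level, 63), 0)

def compensated_level(requested_level, measured_voltage):
    """Drive the pixel higher by the estimated loss, clipped to the 6-bit range."""
    return int(np.clip(requested_level + degradation_steps(requested_level, measured_voltage), 0, 63))

print(compensated_level(40, measured_voltage=0.35))   # degraded pixel -> driven above level 40
print(compensated_level(40, measured_voltage=0.41))   # matches reference -> stays at 40
```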

  1. Technical Report Series on Global Modeling and Data Assimilation, Volume 43. MERRA-2; Initial Evaluation of the Climate

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Bosilovich, Michael G.; Akella, Santha; Coy, Lawrence; Cullather, Richard; Draper, Clara; Gelaro, Ronald; Kovach, Robin; Liu, Qing; Molod, Andrea; et al.

    2015-01-01

    The years since the introduction of MERRA have seen numerous advances in the GEOS-5 Data Assimilation System as well as a substantial decrease in the number of observations that can be assimilated into the MERRA system. To allow continued data processing into the future, and to take advantage of several important innovations that could improve system performance, a decision was made to produce MERRA-2, an updated retrospective analysis of the full modern satellite era. One of the many advances in MERRA-2 is a constraint on the global dry mass balance; this allows the global changes in water by the analysis increment to be near zero, thereby minimizing abrupt global interannual variations due to changes in the observing system. In addition, MERRA-2 includes the assimilation of interactive aerosols into the system, a feature of the Earth system absent from previous reanalyses. Also, in an effort to improve land surface hydrology, observations-corrected precipitation forcing is used instead of model-generated precipitation. Overall, MERRA-2 takes advantage of numerous updates to the global modeling and data assimilation system. In this document, we summarize an initial evaluation of the climate in MERRA-2, from the surface to the stratosphere and from the tropics to the poles. Strengths and weaknesses of the MERRA-2 climate are accordingly emphasized.

  2. An Event-driven, Value-based, Pull Systems Engineering Scheduling Approach

    DTIC Science & Technology

    2012-03-01

    engineering in rapid response environments has been difficult, particularly those where large, complex brownfield systems or systems of systems exist and are constantly being updated with both short and long term software enhancements...

  3. Cargo Movement Operations System (CMOS) Updated Draft Software Requirements Specification (Applications CSCI) Increment II

    DTIC Science & Technology

    1990-11-29


  4. Cargo Movement Operations System (CMOS). Final Software Design Document. Increment III. (PC Unix - Air Force Configuration)

    DTIC Science & Technology

    1991-07-03


  5. Global Energy and Water Budgets in MERRA

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, Franklin R.; Chen, Junye

    2010-01-01

    Reanalyses, retrospectively analyzing observations over climatological time scales, represent a merger between satellite observations and models to provide globally continuous data, and they have improved over several generations. Balancing the Earth's global water and energy budgets has been a focus of research for more than two decades. Models tend to drift toward their own climate, while remotely sensed observations have had varying degrees of uncertainty. This study evaluates the latest NASA reanalysis, the Modern Era Retrospective-analysis for Research and Applications (MERRA), from a global water and energy cycles perspective. MERRA was configured to provide complete budgets in its output diagnostics, including the Incremental Analysis Update (IAU), the term that represents the observations' influence on the analyzed states, alongside the physical flux terms. Precipitation in reanalyses is typically sensitive to the observational analysis. For MERRA, the global mean precipitation bias and spatial variability are more comparable to merged satellite observations (GPCP and CMAP) than previous generations of reanalyses. Ocean evaporation also has a much lower value, which is comparable to observed data sets. The global energy budget shows that MERRA cloud effects may be generally weak, leading to excess shortwave radiation reaching the ocean surface. Evaluating the MERRA time series of budget terms, a significant change occurs that does not appear to be represented in observations. In 1999, the global analysis increment of water vapor changes sign from negative to positive, primarily leading to more oceanic precipitation. This change is coincident with the beginning of AMSU radiance assimilation. Previous and current reanalyses all exhibit some sensitivity to perturbations in the observation record, and this remains a significant research topic for reanalysis development. The effect of the changing observing system is evaluated for MERRA water and energy budget terms.

  6. Air Quality Modeling Using the NASA GEOS-5 Multispecies Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Keller, Christoph A.; Pawson, Steven; Wargan, Krzysztof; Weir, Brad

    2018-01-01

    The NASA Goddard Earth Observing System (GEOS) data assimilation system (DAS) has been expanded to include chemically reactive tropospheric trace gases including ozone (O3), nitrogen dioxide (NO2), and carbon monoxide (CO). This system combines analyses from the GEOS-5 model with detailed atmospheric chemistry and observations from MLS (O3), OMI (O3 and NO2), and MOPITT (CO). We show results from a variety of assimilation test experiments, highlighting improvements in the representation of model species concentrations of up to 50% compared to an assimilation-free control experiment. Taking into account the rapid chemical cycling of NO2 when applying the assimilation increments greatly improves assimilation skill for NO2 and provides large benefits for model concentrations near the surface. Analysis of the geospatial distribution of the assimilation increments suggests that the free-running model overestimates biomass burning emissions but underestimates lightning NOx emissions by 5-20%. We discuss the capability of the chemical data assimilation system to improve atmospheric composition forecasts through improved initial value and boundary condition inputs, particularly during air pollution events. We find that the current assimilation system meaningfully improves short-term forecasts (1-3 days). For longer-term forecasts, more emphasis on updating the emissions rather than the initial concentration fields is needed.

  7. First incremental buy for Increment 2 of the Space Transportation System (STS)

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Thiokol manufactured and delivered 9 flight motors to KSC on schedule. All test flights were successful. All spent SRMs were recovered. Design, development, manufacture, and delivery of required transportation, handling, and checkout equipment to MSFC and to KSC were completed on schedule. All items of data required by DPD 400 were prepared and delivered as directed. In the system requirements and analysis area, the point of departure from Buy 1 to the operational phase was developed in significant detail with a complete set of transition documentation available. The documentation prepared during the Buy 1 program was maintained and updated where required. The following flight support activities should be continued through other production programs: as-built materials usage tracking on all flight hardware; mass properties reporting for all flight hardware until sample size is large enough to verify that the weight limit requirements were met; ballistic predictions and postflight performance assessments for all production flights; and recovered SRM hardware inspection and anomaly identification. In the safety, reliability, and quality assurance area, activities accomplished were assurance oriented in nature and specifically formulated to prevent problems and hardware failures. The flight program to date has adequately demonstrated the success of this assurance approach. The attention focused on details of design, analysis, manufacture, and inspection to assure the production of high-quality hardware has resulted in the absence of flight failures. The few anomalies which did occur were evaluated, design or manufacturing changes incorporated, and corrective actions taken to preclude recurrence.

  8. Mission Operations with an Autonomous Agent

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Sawyer, Scott R.; Muscettola, Nicola; Smith, Benjamin; Bernard, Douglas E.

    1998-01-01

    The Remote Agent (RA) is an Artificial Intelligence (AI) system which automates some of the tasks normally reserved for human mission operators and performs these tasks autonomously on-board the spacecraft. These tasks include activity generation, sequencing, spacecraft analysis, and failure recovery. The RA will be demonstrated as a flight experiment on Deep Space One (DS1), the first deep space mission of NASA's New Millennium Program (NMP). As we moved from prototyping into actual flight code development and teamed with ground operators, we made several major extensions to the RA architecture to address the broader operational context in which the RA would be used. These extensions support ground operators and the RA sharing a long-range mission profile with facilities for asynchronous ground updates; support ground operators monitoring and commanding the spacecraft at multiple levels of detail simultaneously; and enable ground operators to provide additional knowledge to the RA, such as parameter updates, model updates, and diagnostic information, without interfering with the activities of the RA or leaving the system in an inconsistent state. The resulting architecture supports incremental autonomy, in which a basic agent can be delivered early and then used in an increasingly autonomous manner over the lifetime of the mission. It also supports variable autonomy, as it enables ground operators to benefit from autonomy when they want it, but does not inhibit them from obtaining a detailed understanding and exercising tighter control when necessary. These issues are critical to the successful development and operation of autonomous spacecraft.

  9. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and its easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm has been proposed. Modeling and simulation of different operating conditions of the conventional Incremental Conductance (IC) and proposed methods are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are presented. A comparative study between the proposed variable step size and the fixed step size IC MPPT method under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
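
    The abstract does not give the exact step-size law; purely as an illustration, a common variable step size incremental conductance update scales the duty-cycle step with |dP/dV|, so tracking is fast far from the maximum power point and fine near it (the function, the scaling factor n, and the duty-cycle sign convention below are assumptions, not the authors' implementation):

    ```python
    def ic_mppt_step(v, i, v_prev, i_prev, duty, n=0.05, step_max=0.02):
        """One variable step size incremental conductance (IC) MPPT update.

        v, i           -- present PV voltage and current samples
        v_prev, i_prev -- samples from the previous iteration
        duty           -- present converter duty cycle
        n              -- scaling factor for the variable step (tuning parameter)
        step_max       -- upper bound on the step, to limit oscillation
        Sign convention: increasing the duty cycle lowers the PV operating voltage
        (as in a boost/flyback stage); swap the signs for the opposite topology.
        """
        if v == 0:
            return duty
        dv, di = v - v_prev, i - i_prev
        dp = v * i - v_prev * i_prev
        # variable step: proportional to the local slope of the P-V curve
        step = min(step_max, n * abs(dp / dv)) if dv != 0 else 0.5 * step_max
        if dv == 0:
            if di > 0:
                duty -= step          # irradiance rose: move to a higher voltage
            elif di < 0:
                duty += step
        elif di / dv > -i / v:        # dP/dV > 0: operating point left of the MPP
            duty -= step
        elif di / dv < -i / v:        # dP/dV < 0: operating point right of the MPP
            duty += step
        return min(max(duty, 0.0), 1.0)
    ```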

  10. Enabling Incremental Query Re-Optimization.

    PubMed

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
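
    The paper itself formulates plan enumeration as recursive datalog; the sketch below is only a toy illustration, under the assumption of a classical dynamic-programming join-order optimizer, of how memoized best plans can be reused and only the subsets touched by a changed cost estimate recomputed (all names and the cost model are hypothetical):

    ```python
    from itertools import combinations

    def optimize(card, best=None, dirty=None):
        """Dynamic-programming join ordering over relation subsets.

        card  -- dict: relation name -> estimated cardinality (the cost input)
        best  -- memoized plans from a previous call: frozenset -> (cost, plan)
        dirty -- relations whose cardinality changed since that call
        """
        rels = sorted(card)
        best = dict(best or {})
        # invalidate only the memo entries that touch a changed relation
        for s in [s for s in best if set(dirty or ()) & s]:
            del best[s]
        for r in rels:
            best.setdefault(frozenset([r]), (card[r], r))
        for size in range(2, len(rels) + 1):
            for subset in combinations(rels, size):
                s = frozenset(subset)
                if s in best:
                    continue                      # unaffected plan: reuse as-is
                candidates = []
                for k in range(1, size):
                    for left in combinations(subset, k):
                        l, r_ = frozenset(left), s - frozenset(left)
                        out = 1.0
                        for rel in s:             # toy output-size estimate
                            out *= card[rel] ** 0.5
                        candidates.append((best[l][0] + best[r_][0] + out,
                                           (best[l][1], best[r_][1])))
                best[s] = min(candidates, key=lambda c: c[0])
        return best

    plans = optimize({"A": 1000, "B": 50, "C": 200})
    plans = optimize({"A": 1000, "B": 5000, "C": 200}, best=plans, dirty={"B"})
    ```

    The point mirrored from the paper is that only plans whose inputs changed are re-derived; everything else is reused from the previous optimization.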

  11. Enabling Incremental Query Re-Optimization

    PubMed Central

    Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau

    2017-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658

  12. Fast Context Switching in Real-Time Propositional Reasoning

    NASA Technical Reports Server (NTRS)

    Nayak, P. Pandurang; Williams, Brian C.

    1997-01-01

    The trend to increasingly capable and affordable control processors has generated an explosion of embedded real-time gadgets that serve almost every function imaginable. The daunting task of programming these gadgets is greatly alleviated with real-time deductive engines that perform all execution and monitoring functions from a single core model. Fast response times are achieved using an incremental propositional deductive database (an LTMS). Ideally the cost of an LTMS's incremental update should be linear in the number of labels that change between successive contexts. Unfortunately an LTMS can expend a significant percentage of its time working on labels that remain constant between contexts. This is caused by the LTMS's conservative approach: a context switch first removes all consequences of deleted clauses, whether or not those consequences hold in the new context. This paper presents a more aggressive incremental TMS, called the ITMS, that avoids processing a significant number of these unchanged consequences. Our empirical evaluation for spacecraft control shows that the overhead of processing unchanged consequences can be reduced by a factor of seven.

  13. Avoiding drift related to linear analysis update with Lagrangian coordinate models

    NASA Astrophysics Data System (ADS)

    Wang, Yiguo; Counillon, Francois; Bertino, Laurent

    2015-04-01

    When applying data assimilation to Lagrangian coordinate models, it is profitable to correct the model grid (position, volume). In an isopycnal coordinate ocean model, such information is provided by the layer thickness, which can be massless but must remain non-negative (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighboring layers, resulting in a probability density function with a larger mean and smaller standard deviation that prevents the appearance of negative values. Second, analysis increments of the grouped layers are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proved fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, …) during the update of the ensemble anomaly. However, the resulting drift with the latter is small (an order of magnitude smaller than with post-processing) and the increase in computational cost moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate prediction by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth System model (NorESM) with an isopycnal ocean model (MICOM). Over a 25-year analysis period, the new method does not impair the predictive skill of the system but corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
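
    A minimal sketch of the grouping idea (not the NorCPM implementation; the array layout, the downward-only merging, and the example numbers are assumptions): layers whose updated thickness would be negative are merged with the layer below, and the merged increment is spread uniformly, so the column-integrated increment, and hence the mean, is unchanged:

    ```python
    import numpy as np

    def conservative_increment(h_bg, dh):
        """Redistribute layer-thickness increments to avoid negative layers.

        h_bg -- background layer thicknesses (1D array, non-negative)
        dh   -- raw analysis increments per layer (1D array, same length)
        """
        h_bg = np.asarray(h_bg, dtype=float)
        dh = np.asarray(dh, dtype=float).copy()
        groups = [[k] for k in range(len(h_bg))]
        changed = True
        while changed:
            changed = False
            # the deepest group is never merged further down in this simple sketch
            for g in range(len(groups) - 1):
                idx = groups[g]
                if (h_bg[idx] + dh[idx]).min() < 0.0:
                    # merge with the neighbouring group below and spread uniformly
                    merged = groups[g] + groups[g + 1]
                    dh[merged] = dh[merged].sum() / len(merged)
                    groups[g:g + 2] = [merged]
                    changed = True
                    break
        return dh

    h = np.array([10.0, 0.5, 8.0, 4.0])
    inc = np.array([0.2, -1.5, 0.3, -0.1])
    new_inc = conservative_increment(h, inc)
    assert abs(new_inc.sum() - inc.sum()) < 1e-12   # column total conserved
    ```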

  14. PCG: A prototype incremental compilation facility for the SAGA environment, appendix F

    NASA Technical Reports Server (NTRS)

    Kimball, Joseph John

    1985-01-01

    A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.

  15. Analysis of electrophoresis performance

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.

    1984-01-01

    The SAMPLE computer code models electrophoresis separation in a wide range of conditions. Results are included for steady three dimensional continuous flow electrophoresis (CFE), time dependent gel and acetate film experiments in one or two dimensions and isoelectric focusing in one dimension. The code evolves N two dimensional radical concentration distributions in time, or distance down a CFE chamber. For each time or distance increment, there are six stages, successively obtaining the pH distribution, the corresponding degrees of ionization for each radical, the conductivity, the electric field and current distribution, and the flux components in each direction for each separate radical. The final stage is to update the radical concentrations. The model formulation for ion motion in an electric field ignores activity effects, and is valid only for low concentrations; for larger concentrations the conductivity is, therefore, also invalid.

  16. Army Communicator. Volume 33, Number 3, Summer 2008. WIN-T: Increasing the Power of Battlefield Communications

    DTIC Science & Technology

    2008-01-01

    Increment 2 will provide unprecedented network connectivity for Brigade Combat Teams while they quickly traverse the battlefield during fast moving... fast. As you know, just working with the Directorate of Information Management with your computer, you need constant patches, upgrades, updates...designed for fast acquisition and requisition. The modem will reacquire bursts within a second for short duration or intermittent blockages. Should the user

  17. The Voronoi spatio-temporal data structure

    NASA Astrophysics Data System (ADS)

    Mioc, Darka

    2002-04-01

    Current GIS models cannot easily integrate the temporal dimension of spatial data. Indeed, current GISs do not support incremental (local) addition and deletion of spatial objects, and they cannot support the temporal evolution of spatial data. Spatio-temporal facilities would be very useful in many GIS applications: harvesting and forest planning, cadastre, urban and regional planning, and emergency planning. The spatio-temporal model that can overcome these problems is based on a topological model---the Voronoi data structure. Voronoi diagrams are irregular tessellations of space that adapt to spatial objects, and therefore they are a synthesis of raster and vector spatial data models. The main advantage of the Voronoi data structure is its local and sequential map updates, which allow us to automatically record each event and the performed map updates within the system. These map updates are executed through map construction commands that are composed of atomic actions (geometric algorithms for addition, deletion, and motion of spatial objects) on the dynamic Voronoi data structure. The formalization of map commands led to the development of a spatial language comprising a set of atomic operations or constructs on spatial primitives (points and lines), powerful enough to define complex operations. This resulted in a new formal model for spatio-temporal change representation, where each update is uniquely characterized by the numbers of newly created and inactivated Voronoi regions. This is used for the extension of the model towards the hierarchical Voronoi data structure. In this model, spatio-temporal changes induced by map updates are preserved in a hierarchical data structure that combines events and corresponding changes in topology. This hierarchical Voronoi data structure has an implicit time ordering of events visible through changes in topology, and it is equivalent to an event structure that can support temporal data without precise temporal information. This formal model of spatio-temporal change representation is currently applied to retroactive map updates and visualization of map evolution. It offers new possibilities in the domains of temporal GIS, transaction processing, spatio-temporal queries, spatio-temporal analysis, map animation and map visualization.

  18. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    PubMed Central

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  19. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap.

    PubMed

    Elliott, Julian H; Turner, Tari; Clavisi, Ornella; Thomas, James; Higgins, Julian P T; Mavergames, Chris; Gruen, Russell L

    2014-02-01

    The current difficulties in keeping systematic reviews up to date lead to considerable inaccuracy, hampering the translation of knowledge into action. Incremental advances in conventional review updating are unlikely to lead to substantial improvements in review currency. A new approach is needed. We propose living systematic review as a contribution to evidence synthesis that combines currency with rigour to enhance the accuracy and utility of health evidence. Living systematic reviews are high-quality, up-to-date online summaries of health research, updated as new research becomes available, and enabled by improved production efficiency and adherence to the norms of scholarly communication. Together with innovations in primary research reporting and the creation and use of evidence in health systems, living systematic review contributes to an emerging evidence ecosystem.

  20. Network control processor for a TDMA system

    NASA Astrophysics Data System (ADS)

    Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.

    Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.
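
    A minimal sketch (the data model and helper names are assumptions, not from the paper) of the two redundancy mechanisms described: applying incremental updates to a replicated database and periodically auditing synchronization by comparing checksums:

    ```python
    import hashlib
    import json

    def apply_increment(db, update):
        """Apply one incremental update: a dict of key -> new value (None deletes)."""
        for key, value in update.items():
            if value is None:
                db.pop(key, None)
            else:
                db[key] = value

    def checksum(db):
        """Deterministic checksum over the whole database contents."""
        payload = json.dumps(db, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    primary = {"burst_plan": "v1", "station_7": "active"}
    backup = dict(primary)                       # initial full database transfer

    increment = {"station_7": "standby", "station_9": "active"}
    apply_increment(primary, increment)          # local change on the primary NCP
    apply_increment(backup, increment)           # same increment sent to the remote NCP

    # periodic audit: checksums must agree, otherwise a full resync is triggered
    assert checksum(primary) == checksum(backup)
    ```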

  1. Finite element methods for the biomechanics of soft hydrated tissues: nonlinear analysis and adaptive control of meshes.

    PubMed

    Spilker, R L; de Almeida, E S; Donzelli, P S

    1992-01-01

    This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.

  2. Department of Defense Command and Control Implementation Plan, Version 1.0

    DTIC Science & Technology

    2009-10-01

    needs • Be maintained by the C2 Capability Portfolio Manager (CPM) and updated every 2 years to address emerging C2 operational concepts, changing and...including sources managed by the C2 CPM as well as other CPMs (e.g., Net-Centric, Battlespace Awareness, and Logistics). C2 platforms and facilities with...incremental delivery of improved capabilities over time. Implementation planning is a C2 CPM-facilitated activity that involves identifying and

  3. A Third-Generation Evidence Base for Human Spaceflight Risks

    NASA Technical Reports Server (NTRS)

    Kundrot, Craig E.; Lumpkins, Sarah; Steil, Jennifer; Pellis, Neal; Charles, John

    2014-01-01

    NASA's Human Research Program (HRP) seeks to understand and mitigate risks to crew health and performance in exploration missions. HRP's evidence base consists of an Evidence Report for each HRP risk. Three generations of Evidence Reports have been used: (1) review articles, with good content but limited authorship and infrequent updates; (2) Wikipedia articles, viewed often and very open to contributions, but summaries of reviews with very few contributions; (3) HRP-controlled wiki articles, allowing incremental additions to review articles with editorial control.

  4. EnOI-IAU Initialization Scheme Designed for Decadal Climate Prediction System IAP-DecPreS

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhou, Tianjun; Zheng, Fei

    2018-02-01

    A decadal climate prediction system named IAP-DecPreS was constructed at the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, based on the fully coupled model FGOALS-s2 and a newly developed initialization scheme, referred to as EnOI-IAU. In this paper, we introduce the design of the EnOI-IAU scheme, assess the accuracies of initialization integrations using the EnOI-IAU, and preliminarily evaluate the hindcast skill of the IAP-DecPreS. The EnOI-IAU scheme integrates two conventional assimilation approaches, ensemble optimal interpolation (EnOI) and incremental analysis update (IAU). The EnOI and IAU were applied to calculate analysis increments and to incorporate them into the model, respectively. Three continuous initialization (INIT) runs were conducted for the period 1950-2015, in which observational sea surface temperature (SST) from HadISST1.1 and subsurface ocean temperature profiles from the EN4.1.1 data set were assimilated. Nine-member, 10-year-long hindcast runs initiated from the INIT runs were then conducted for each year in the period 1960-2005. The accuracies of the INIT runs are evaluated from the following three aspects: upper 700 m ocean temperature, the temporal evolution of SST anomalies, and the dominant interdecadal variability modes, the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Finally, preliminary evaluation of the ensemble mean of the hindcast runs suggests that the IAP-DecPreS has skill in predicting the PDO-related SST anomalies in the midlatitude North Pacific and the AMO-related SST anomalies in the tropical North Atlantic.
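
    As an illustration of the IAU half of the scheme (a toy sketch, not the IAP-DecPreS code; the linear damping model and step count are assumptions), the analysis increment computed at an update time is not added all at once but is spread as a constant forcing over the following model steps:

    ```python
    import numpy as np

    def iau_window(state, increment, n_steps, model_step):
        """Integrate a model while adding 1/N of the analysis increment each step.

        state      -- model state vector at the start of the IAU window
        increment  -- analysis increment valid for this window
        n_steps    -- number of model steps over which the increment is spread
        model_step -- function advancing the state by one time step
        """
        forcing = increment / n_steps
        for _ in range(n_steps):
            state = model_step(state) + forcing
        return state

    # toy model: linear damping toward zero
    damp = lambda x: 0.95 * x
    x0 = np.array([1.0, -2.0])
    inc = np.array([0.5, 0.5])
    x_iau = iau_window(x0, inc, n_steps=10, model_step=damp)
    ```

    Compared with adding the full increment in one step, this gradual forcing keeps the trajectory continuous, which is the property the IAU is chosen for here.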

  5. Incremental update of electrostatic interactions in adaptively restrained particle simulations.

    PubMed

    Edorh, Semeho Prince A; Redon, Stéphane

    2018-04-06

    The computation of long-range potentials is one of the most demanding tasks in Molecular Dynamics. During the last decades, an inventive panoply of methods was developed to reduce the CPU time of this task. In this work, we propose a fast method dedicated to the computation of the electrostatic potential in adaptively restrained systems. We exploit the fact that, in such systems, only some particles are allowed to move at each timestep. We developed an incremental algorithm derived from a multigrid-based alternative to traditional Fourier-based methods. Our algorithm was implemented inside LAMMPS, a popular molecular dynamics simulation package. We evaluated the method on different systems. We showed that the new algorithm's computational complexity scales with the number of active particles in the simulated system, and that it is able to outperform the well-established Particle-Particle Particle-Mesh (P3M) method for adaptively restrained simulations. © 2018 Wiley Periodicals, Inc.
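
    The paper's algorithm is multigrid-based; purely to illustrate the incremental principle it exploits, the sketch below updates a direct pairwise Coulomb sum (arbitrary units, no periodic boundaries, hypothetical names) by recomputing only the pair terms that involve particles which actually moved:

    ```python
    import numpy as np

    def pair_energy(pos, q, i, j):
        return q[i] * q[j] / np.linalg.norm(pos[i] - pos[j])

    def total_energy(pos, q):
        n = len(q)
        return sum(pair_energy(pos, q, i, j) for i in range(n) for j in range(i + 1, n))

    def incremental_energy(e_old, pos_old, pos_new, q, moved):
        """Update the Coulomb energy when only the 'moved' particles changed position."""
        e, n = e_old, len(q)
        for i in moved:
            for j in range(n):
                if j == i or (j in moved and j < i):
                    continue              # handle each moved-moved pair only once
                e -= pair_energy(pos_old, q, i, j)
                e += pair_energy(pos_new, q, i, j)
        return e

    rng = np.random.default_rng(0)
    pos = rng.random((200, 3))
    q = rng.choice([-1.0, 1.0], size=200)
    e0 = total_energy(pos, q)
    pos_new = pos.copy()
    moved = [3, 17]                        # adaptively restrained: only a few particles move
    pos_new[moved] += 0.01 * rng.random((len(moved), 3))
    e1 = incremental_energy(e0, pos, pos_new, q, moved)
    assert abs(e1 - total_energy(pos_new, q)) < 1e-6
    ```

    The cost of the update scales with the number of moved (active) particles rather than with the full system size, which mirrors the scaling claim in the abstract.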

  6. Filament wound data base development, revision 1

    NASA Technical Reports Server (NTRS)

    Sharp, R. Scott; Braddock, William F.

    1985-01-01

    The objective was to update the present Space Shuttle Solid Rocket Booster (SRB) baseline reentry aerodynamic data base and to develop a new reentry data base for the filament wound case SRB along with individual protuberance increments. Lockheed's procedures for performing these tasks are discussed. Free fall of the SRBs after separation from the Space Shuttle Launch Vehicle is completely uncontrolled. However, the SRBs must decelerate to a velocity and attitude that is suitable for parachute deployment. To determine the SRB reentry trajectory parameters, including the rate of deceleration and attitude history during free-fall, engineers at Marshall Space Flight Center are using a six-degree-of-freedom computer program to predict dynamic behavior. Static stability aerodynamic coefficients are part of the information required for input into this computer program. Lockheed analyzed the existing reentry aerodynamic data tape (Data Tape 5) for the current steel case SRB. This analysis resulted in the development of Data Tape 7.

  7. Stroke Treatment Academic Industry Roundtable Recommendations for Individual Data Pooling Analyses in Stroke.

    PubMed

    Lees, Kennedy R; Khatri, Pooja

    2016-08-01

    Pooled analysis of individual patient data from stroke trials can deliver more precise estimates of treatment effect, enhance power to examine prespecified subgroups, and facilitate exploration of treatment-modifying influences. Analysis plans should be declared, and preferably published, before trial results are known. For pooling trials that used diverse analytic approaches, an ordinal analysis is favored, with justification for considering deaths and severe disability jointly. Because trial pooling is an incremental process, analyses should follow a sequential approach, with statistical adjustment for iterations. Updated analyses should be published when revised conclusions have a clinical implication. However, caution is recommended in declaring pooled findings that may prejudice ongoing trials, unless clinical implications are compelling. All contributing trial teams should contribute to leadership, data verification, and authorship of pooled analyses. Development work is needed to enable reliable inferences to be drawn about individual drug or device effects that contribute to a pooled analysis, versus a class effect, if the treatment strategy combines ≥2 such drugs or devices. Despite the practical challenges, pooled analyses are powerful and essential tools in interpreting clinical trial findings and advancing clinical care. © 2016 American Heart Association, Inc.

  8. Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.

    PubMed

    Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena

    2017-06-01

    Temporal changes in the magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that a meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
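
    As a simplified illustration only (an i.i.d. resampling bootstrap on standardized effects; the paper's actual procedure additionally handles the dependence induced by estimating τ²), a retrospective CUSUM-type statistic with a bootstrap critical value can be computed as follows:

    ```python
    import numpy as np

    def cusum_change_test(effects, n_boot=2000, alpha=0.05, seed=1):
        """Retrospective CUSUM-type test for a shift in a sequence of effect sizes."""
        rng = np.random.default_rng(seed)
        y = np.asarray(effects, dtype=float)
        n = len(y)
        path = np.cumsum(y - y.mean()) / (y.std(ddof=1) * np.sqrt(n))
        stat = np.abs(path).max()
        boot = np.empty(n_boot)
        for b in range(n_boot):                    # bootstrap under "no change"
            yb = rng.choice(y, size=n, replace=True)
            boot[b] = np.abs(np.cumsum(yb - yb.mean()) / (yb.std(ddof=1) * np.sqrt(n))).max()
        return path, stat, np.quantile(boot, 1 - alpha)

    rng = np.random.default_rng(0)
    effects = np.r_[rng.normal(0.2, 0.1, 15), rng.normal(0.5, 0.1, 10)]  # shift after study 15
    path, stat, crit = cusum_change_test(effects)
    change_detected = stat > crit
    ```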

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Incorporation of real-time component information using equipment condition assessment (ECA) through the development of enhanced risk monitors (ERM) for active components in advanced reactor (AR) and advanced small modular reactor (SMR) designs. We incorporate time-dependent failure probabilities from prognostic health management (PHM) systems to dynamically update the risk metric of interest. This information is used to augment data used for supervisory control and plant-wide coordination of multiple modules by providing the incremental risk incurred due to aging and demands placed on components that support mission requirements.

  10. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  11. Incremental Refinement of FAÇADE Models with Attribute Grammar from 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.

    2016-06-01

    Data acquisition using unmanned aerial vehicles (UAVs) has attracted increasing attention in recent years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required. An incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated in an iterative way using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated. It is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.

  12. Fish consumption and CHD mortality: an updated meta-analysis of seventeen cohort studies.

    PubMed

    Zheng, Jusheng; Huang, Tao; Yu, Yinghua; Hu, Xiaojie; Yang, Bin; Li, Duo

    2012-04-01

    Results of studies on fish consumption and CHD mortality are inconsistent. The present updated meta-analysis was conducted to investigate the up-to-date pooling effects. A random-effects model was used to pool the risk estimates. Generalized least-squares regression and restricted cubic splines were used to assess the possible dose-response relationship. Subgroup analyses were conducted to examine the sources of heterogeneity. PubMed and ISI Web of Science databases up to September 2010 were searched and secondary referencing qualified for inclusion in the study. Seventeen cohorts with 315,812 participants and average follow-up period of 15·9 years were identified. Compared with the lowest fish intake (<1 serving/month or 1-3 servings/month), the pooled relative risk (RR) of fish intake on CHD mortality was 0·84 (95% CI 0·75, 0·95) for low fish intake (1 serving/week), 0·79 (95% CI 0·67, 0·92) for moderate fish intake (2-4 servings/week) and 0·83 (95% CI 0·68, 1·01) for high fish intake (>5 servings/week). The dose-response analysis indicated that every 15 g/d increment of fish intake decreased the risk of CHD mortality by 6% (RR = 0·94; 95% CI 0·90, 0·98). The method of dietary assessment, gender and energy adjustment affected the results remarkably. Our results indicate that either low (1 serving/week) or moderate fish consumption (2-4 servings/week) has a significantly beneficial effect on the prevention of CHD mortality. High fish consumption (>5 servings/week) possesses only a marginally protective effect on CHD mortality, possibly due to the limited studies included in this group.

  13. Cost Effectiveness of the Angiotensin Receptor Neprilysin Inhibitor Sacubitril/Valsartan for Patients with Chronic Heart Failure and Reduced Ejection Fraction in the Netherlands: A Country Adaptation Analysis Under the Former and Current Dutch Pharmacoeconomic Guidelines.

    PubMed

    Ramos, Isaac Corro; Versteegh, Matthijs M; de Boer, Rudolf A; Koenders, Jolanda M A; Linssen, Gerard C M; Meeder, Joan G; Rutten-van Mölken, Maureen P M H

    2017-12-01

    To describe the adaptation of a global health economic model to determine whether treatment with the angiotensin receptor neprilysin inhibitor LCZ696 is cost effective compared with the angiotensin-converting enzyme inhibitor enalapril in adult patients with chronic heart failure with reduced left ventricular ejection fraction in the Netherlands; and to explore the effect of performing the cost-effectiveness analyses according to the new pharmacoeconomic Dutch guidelines (updated during the submission process of LCZ696), which require a value-of-information analysis and the inclusion of indirect medical costs of life-years gained. We adapted a UK model to reflect the societal perspective in the Netherlands by including travel expenses, productivity loss, informal care costs, and indirect medical costs during the life-years gained, and performed a preliminary value-of-information analysis. The incremental cost-effectiveness ratio obtained was €17,600 per quality-adjusted life-year (QALY) gained. This was robust to changes in most structural assumptions and across different subgroups of patients. Probabilistic sensitivity analysis results showed that the probability that LCZ696 is cost effective at a €50,000 per QALY threshold is 99.8%, with a population expected value of perfect information of €297,128. On including indirect medical costs of life-years gained, the incremental cost-effectiveness ratio was €26,491 per QALY gained, and LCZ696 was 99.46% cost effective at €50,000 per QALY, with a population expected value of perfect information of €2,849,647. LCZ696 is cost effective compared with enalapril under the former and current Dutch guidelines. However, the (monetary) consequences of making a wrong decision were considerably different in both scenarios. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
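
    For orientation, the incremental cost-effectiveness ratios quoted here follow the standard definition (written in LaTeX notation):

    \[
    \mathrm{ICER} \;=\; \frac{C_{\mathrm{LCZ696}} - C_{\mathrm{enalapril}}}{\mathrm{QALY}_{\mathrm{LCZ696}} - \mathrm{QALY}_{\mathrm{enalapril}}},
    \]

    i.e. euros of additional lifetime cost per additional quality-adjusted life-year gained; including the indirect medical costs of life-years gained raises the numerator, which is why the ratio moves from €17,600 to €26,491 per QALY.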

  14. Self-supervised online metric learning with low rank constraint for scene categorization.

    PubMed

    Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo

    2013-08-01

    Conventional visual recognition systems usually train an image classifier in a batch mode with all training data provided in advance. However, in many practical applications, only a small number of training samples are available in the beginning, and many more would come sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low rank constraint, our online metric learning model not only can provide competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled depending on the graph-based label propagation, while the model can also self-update using the more confident new samples. With the ability of online learning, our methodology can well handle large-scale streaming video data with the ability of incremental self-updating. We apply our model to online scene categorization, and experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.

  15. VISCEL: A general-purpose computer program for analysis of linear viscoelastic structures (user's manual), volume 1

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Akyuz, F. A.; Heer, E.

    1972-01-01

    This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN 2 for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps in the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN 5 language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.

  16. Modeling of Rolling Element Bearing Mechanics: Computer Program Updates

    NASA Technical Reports Server (NTRS)

    Ryan, S. G.

    1997-01-01

    The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.
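
    A schematic example (not the REBANS code) of the revised strategy: iterating all unknowns simultaneously with a Newton-Raphson step that uses the full system Jacobian, rather than converging nested loops one variable at a time:

    ```python
    import numpy as np

    def newton_system(residual, jacobian, x0, tol=1e-10, max_iter=50):
        """Solve residual(x) = 0 for all variables at once using the Jacobian."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = residual(x)
            if np.linalg.norm(f) < tol:
                break
            dx = np.linalg.solve(jacobian(x), -f)   # incremental change for ALL variables
            x = x + dx
        return x

    # toy 2-variable nonlinear system standing in for the bearing equilibrium equations
    res = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1] - 1.0])
    jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
    sol = newton_system(res, jac, x0=[2.0, 0.5])
    ```

    Because each variable's effect on every residual enters through the Jacobian, the increments account for the coupling between variables, which is the robustness advantage described above.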

  17. Implementation of Coupled Skin Temperature Analysis and Bias Correction in a Global Atmospheric Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo

    2004-01-01

    The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term are passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.

  18. A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)

    NASA Astrophysics Data System (ADS)

    Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong

    2017-07-01

    This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilations (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two ensemble generation techniques is investigated in the current study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in the ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analysis and forecast from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation forecasts.

  19. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    NASA Astrophysics Data System (ADS)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

    A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the spatially distributed effect of the improved model structure can be expected to improve the spatially distributed predictions also. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.

  20. Personal Computer Transport Analysis Program

    NASA Technical Reports Server (NTRS)

    DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter

    2012-01-01

    The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.
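
    A minimal sketch (hypothetical component model, not PCTAP code) of the solution-vector idea: order the components once by their inlet dependencies, then update them in that order at every time step:

    ```python
    from graphlib import TopologicalSorter   # Python 3.9+ standard library

    class Component:
        def __init__(self, name, upstream=None):
            self.name = name
            self.upstream = upstream          # component feeding this one's inlet
            self.outlet_temp = 20.0

        def update(self, dt):
            # toy outlet function: relax toward the upstream outlet temperature
            inlet = self.upstream.outlet_temp if self.upstream else 60.0
            self.outlet_temp += 0.1 * (inlet - self.outlet_temp) * dt

    def build_solution_vector(components):
        """Order components so each appears after the component feeding its inlet."""
        ts = TopologicalSorter({c.name: {c.upstream.name} if c.upstream else set()
                                for c in components})
        by_name = {c.name: c for c in components}
        return [by_name[name] for name in ts.static_order()]

    pump = Component("pump")
    cold_plate = Component("cold_plate", upstream=pump)
    heat_exchanger = Component("heat_exchanger", upstream=cold_plate)

    solution_vector = build_solution_vector([heat_exchanger, pump, cold_plate])
    for step in range(100):                   # transient simulation loop
        for component in solution_vector:
            component.update(dt=1.0)
    ```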

  1. Cost-effectiveness of supervised exercise therapy in heart failure patients.

    PubMed

    Kühr, Eduardo M; Ribeiro, Rodrigo A; Rohde, Luis Eduardo P; Polanczyk, Carisi A

    2011-01-01

    Exercise therapy in heart failure (HF) patients is considered safe and has demonstrated modest reductions in hospitalization rates and death in recent trials. A previous cost-effectiveness analysis described favorable results based on a long-term supervised exercise intervention and significant effectiveness of exercise therapy; however, that evidence is no longer supported. To evaluate the cost-effectiveness of supervised exercise therapy in HF patients from the perspective of the Brazilian Public Healthcare System, we developed a Markov model to evaluate the incremental cost-effectiveness ratio of supervised exercise therapy compared to standard treatment in patients with New York Heart Association HF class II and III. Effectiveness was evaluated in quality-adjusted life years over a 10-year time horizon. We searched PubMed for published clinical trials to estimate effectiveness, mortality, hospitalization, and utility data. Treatment costs were obtained from a published cohort, updated to 2008 values. Exercise therapy intervention costs were obtained from a rehabilitation center. Model robustness was assessed through Monte Carlo simulation and sensitivity analysis. Costs were expressed in international dollars, applying the purchasing-power-parity conversion rate. Exercise therapy showed small reductions in hospitalization and mortality at a low cost, with an incremental cost-effectiveness ratio of Int$26,462/quality-adjusted life year. Results were most sensitive to exercise therapy costs, standard treatment total costs, exercise therapy effectiveness, and medication costs. Considering a willingness-to-pay of Int$27,500, 55% of the trials fell below this value in the Monte Carlo simulation. In a Brazilian scenario, exercise therapy shows a reasonable cost-effectiveness ratio, despite current evidence of limited benefit of this intervention. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
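
    A stripped-down sketch of this type of Markov cohort evaluation (the states, transition probabilities, costs, and utilities below are made-up placeholders, not the study's inputs), computing the incremental cost-effectiveness ratio of the intervention versus standard care:

    ```python
    import numpy as np

    def markov_cohort(trans, cost, utility, cycles, start):
        """Run a Markov cohort model; return (total cost, total QALYs) per person."""
        dist = np.asarray(start, dtype=float)
        total_cost = total_qaly = 0.0
        for _ in range(cycles):
            total_cost += dist @ np.asarray(cost)
            total_qaly += dist @ np.asarray(utility)
            dist = dist @ trans          # advance the cohort one cycle (e.g. one year)
        return total_cost, total_qaly

    start = [1.0, 0.0, 0.0]                            # stable, hospitalized, dead
    utility = [0.70, 0.50, 0.0]                        # QALYs accrued per cycle

    standard = np.array([[0.85, 0.10, 0.05],
                         [0.60, 0.25, 0.15],
                         [0.00, 0.00, 1.00]])
    exercise = np.array([[0.88, 0.08, 0.04],           # slightly fewer events
                         [0.62, 0.24, 0.14],
                         [0.00, 0.00, 1.00]])

    cost_std = [1000.0, 6000.0, 0.0]
    cost_exc = [1400.0, 6000.0, 0.0]                   # therapy cost added per cycle

    c0, q0 = markov_cohort(standard, cost_std, utility, cycles=10, start=start)
    c1, q1 = markov_cohort(exercise, cost_exc, utility, cycles=10, start=start)
    icer = (c1 - c0) / (q1 - q0)                       # cost per QALY gained
    ```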

  2. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via an incremental local sparse representation model. First, a robust face tracking algorithm is proposed that employs a local sparse appearance model and covariance pooling. In the following face recognition stage, with the employment of a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  3. Costs And Savings Associated With Community Water Fluoridation In The United States.

    PubMed

    O'Connell, Joan; Rockell, Jennifer; Ouellet, Judith; Tomar, Scott L; Maas, William

    2016-12-01

    The most comprehensive study of US community water fluoridation program benefits and costs was published in 2001. This study provides updated estimates using an economic model that includes recent data on program costs, dental caries increments, and dental treatments. In 2013 more than 211 million people had access to fluoridated water through community water systems serving 1,000 or more people. Savings associated with dental caries averted in 2013 as a result of fluoridation were estimated to be $32.19 per capita for this population. Based on 2013 estimated costs ($324 million), net savings (savings minus costs) from fluoridation systems were estimated to be $6,469 million and the estimated return on investment, 20.0. While communities should assess their specific costs for continuing or implementing a fluoridation program, these updated findings indicate that program savings are likely to exceed costs. Project HOPE—The People-to-People Health Foundation, Inc.
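
    As a rough consistency check on these figures (a minimal arithmetic sketch; it assumes the per-capita savings applies uniformly to the full population served, together with the abstract's rounding), multiplying the per-capita savings by the population and subtracting program costs reproduces the reported net savings and return on investment:

      # Hypothetical back-of-the-envelope check of the reported 2013 figures.
      population = 211e6          # people served by fluoridated community water systems
      savings_per_capita = 32.19  # USD saved per person from dental caries averted
      program_cost = 324e6        # USD, estimated 2013 program cost

      gross_savings = population * savings_per_capita   # ~6.79 billion USD
      net_savings = gross_savings - program_cost         # ~6.47 billion USD, close to the reported $6,469 million
      roi = net_savings / program_cost                   # ~20, matching the reported return on investment
      print(f"net savings: ${net_savings / 1e6:,.0f} million, ROI: {roi:.1f}")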

  4. An Imaging System for Satellite Hypervelocity Impact Debris Characterization

    NASA Astrophysics Data System (ADS)

    Moraguez, M.; Liou, J.; Fitz-Coy, N.; Patankar, K.; Cowardin, H.

    This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
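
    As an illustration of the space carving idea used here (a minimal sketch, not the DebriSat pipeline; the camera projection model, silhouette extraction, and voxel grid are assumed inputs), a candidate voxel is retained only if it projects inside the object silhouette in every azimuth/elevation view:

      import numpy as np

      def carve(voxels, silhouettes, project):
          """Keep only voxels whose projections fall inside every silhouette.

          voxels      : (N, 3) array of candidate voxel centers
          silhouettes : list of 2-D boolean masks, one per camera view
          project     : project(points, view_index) -> (N, 2) integer pixel coordinates
          """
          keep = np.ones(len(voxels), dtype=bool)
          for view, mask in enumerate(silhouettes):
              px = project(voxels, view)
              inside = ((px[:, 0] >= 0) & (px[:, 0] < mask.shape[1]) &
                        (px[:, 1] >= 0) & (px[:, 1] < mask.shape[0]))
              in_silhouette = np.zeros(len(voxels), dtype=bool)
              in_silhouette[inside] = mask[px[inside, 1], px[inside, 0]]
              keep &= in_silhouette   # carve away voxels outside this view's silhouette
          return voxels[keep]

    More views (finer azimuth increments) tighten the carved volume, which is why the number of acquired images trades off against the desired shape resolution.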

  5. Evaluation of the North Stanley Polymer Demonstration Project. [Tertiary oil recovery; polymer-enhanced waterflooding; Oklahoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpole, K.J.; Hill, C.J.

    1983-02-01

    A review of the performance of the North Stanley Polymer Demonstration Project has been completed. The objective of the project was to evaluate the technical efficiency and economic feasibility of polymer-enhanced waterflooding as a tertiary recovery process in a highly heterogeneous and vertically fractured sandstone reservoir that has been successfully waterflooded and is approaching the economic limits of conventional waterflooding recovery. The ultimate incremental oil recovery from the project is estimated to be about 570,000 barrels (or approximately 1.4% of the original oil-in-place). This is significantly less than the original recovery predictions but does demonstrate that the project was technically successful. The lower-than-anticipated recovery is attributed principally to the extremely heterogeneous nature of the reservoir. One of the major objectives of this evaluation is to present an updated economic analysis of the North Stanley Polymer Demonstration Project. The updated economic analysis under current (mid-1982) economic conditions indicates that the North Stanley project would be commercially feasible if polymer injection had begun in 1982, rather than in 1976. Overall project operations were conducted efficiently, with a minimum of operational problems. The North Stanley polymer project provides a well-documented example of an actual field-scale tertiary application of polymer-augmented waterflooding in a highly heterogeneous reservoir.

  6. An Imaging System for Satellite Hypervelocity Impact Debris Characterization

    NASA Technical Reports Server (NTRS)

    Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather

    2015-01-01

    This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.

  7. Flight Rules Critical Readiness Review

    NASA Technical Reports Server (NTRS)

    Kim, E.; Knudsen, F.; Rice, S.

    2010-01-01

    The increment 23/24 Critical Readiness Review (CRR) flight rules are presented. The topics include: 1) B13-152 Acoustic Constraints; 2) B13-113 IFM/Corrective Action Prioritization Due to Loss of Exercise Capability; 3) B13-116 Constraints on Treadmill VIS Failure; 4) B13-201 Medical Management of ISS Fire/Smoke Response; 5) ARED and T2 Exercise constraints Flight rules (flight and stage specific); 6) FYI: B14 FR to be updated with requirement to sample crew sleep locations prior to receiving a "recommendation" from SRAG on where to sleep.

  8. Distributed information system architecture for Primary Health Care.

    PubMed

    Grammatikou, M; Stamatelopoulos, F; Maglaris, B

    2000-01-01

    We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.

  9. Incremental soil sampling, root water uptake, or be great through others

    USDA-ARS?s Scientific Manuscript database

    Ray Allmaras pursued several research topics in relation to residue and tillage research. He looked for new tools to help explain soil responses to tillage, including disk permeameters and image analysis. The incremental sampler developed by Pikul and Allmaras allowed small-depth increment, volumetr...

  10. Dynamic Risk Assessment of Sexual Offenders: Validity and Dimensional Structure of the Stable-2007.

    PubMed

    Etzler, Sonja; Eher, Reinhard; Rettenberger, Martin

    2018-02-01

    In this study, the predictive and incremental validity of the Stable-2007 beyond the Static-99 was evaluated in an updated sample of N = 638 adult male sexual offenders followed-up for an average of M = 8.2 years. Data were collected at the Federal Evaluation Center for Violent and Sexual Offenders (FECVSO) in Austria within a prospective-longitudinal research design. Scores and risk categories of the Static-99 (AUC = .721; p < .001) and of the Stable-2007 (AUC = .623, p = .005) were found to be significantly related to sexual recidivism. The Stable-2007 risk categories contributed incrementally to the prediction of sexual recidivism beyond the Static-99. Analyzing the dimensional structure of the Stable-2007 yielded three factors, named Antisociality, Sexual Deviance, and Hypersexuality. Antisociality and Sexual Deviance were significant predictors for sexual recidivism. Sexual Deviance was negatively associated with non-sexual violent recidivism. Comparisons with latent dimensions of other risk assessment instruments are made and implications for applied risk assessment are discussed.

  11. The SIMS trial: adjustable anchored single-incision mini-slings versus standard tension-free midurethral slings in the surgical management of female stress urinary incontinence. A study protocol for a pragmatic, multicentre, non-inferiority randomised controlled trial

    PubMed Central

    Abdel-Fattah, Mohamed; MacLennan, Graeme; Kilonzo, Mary; Assassa, R Phil; McCormick, Kirsty; Davidson, Tracey; McDonald, Alison; N’Dow, James; Wardle, Judith; Norrie, John

    2017-01-01

    Introduction: Single-incision mini-slings (SIMS) represent the third generation of midurethral slings. They have been developed with the aim of offering a true ambulatory procedure for treatment of female stress urinary incontinence (SUI) with reduced morbidity and earlier recovery while maintaining similar efficacy to standard midurethral slings (SMUS). The aim of this study is to determine the clinical and cost-effectiveness of adjustable anchored SIMS compared with tension-free SMUS in the surgical management of female SUI, with 3-year follow-up. Methods and analysis: A pragmatic, multicentre, non-inferiority randomised controlled trial. Primary outcome measure: The primary outcome measure is the patient-reported success rate measured by the Patient Global Impression of Improvement at 12 months. The primary economic outcome will be incremental cost per quality-adjusted life year gained at 12 months. Secondary outcome measures: The secondary outcome measures include adverse events, objective success rates, impact on other lower urinary tract symptoms, health-related quality of life profile and sexual function, and reoperation rates for SUI. Secondary economic outcomes include National Health Service and patient primary and secondary care resource use and costs, incremental cost-effectiveness and incremental net benefit. Statistical analysis: The statistical analysis of the primary outcome will be by intention-to-treat and also a per-protocol analysis. Results will be displayed as estimates and 95% CIs. CIs around observed differences will then be compared with the prespecified non-inferiority margin. Secondary outcomes will be analysed similarly. Ethics and dissemination: The North of Scotland Research Ethics Committee has approved this study (13/NS/0143). The dissemination plans include an HTA monograph, presentations at international scientific meetings and publications in high-impact, open-access journals. The results will be included in the updates of the National Institute for Health and Care Excellence and the European Association of Urology guidelines; these two guidelines directly influence practice in the UK and among specialists worldwide, respectively. In addition, a plain English-language summary of the main findings will be presented to relevant patient organisations. Trial registration number: ISRCTN93264234. The SIMS study is currently recruiting in 20 UK research centres. The first patient was randomised on 4 February 2014, with follow-up to be completed at the end of February 2020. PMID:28801396

  12. NEQAIRv14.0 Release Notes: Nonequilibrium and Equilibrium Radiative Transport Spectra Program

    NASA Technical Reports Server (NTRS)

    Brandis, Aaron Michael; Cruden, Brett A.

    2014-01-01

    NEQAIR v14.0 is the first parallelized version of NEQAIR. Starting from the last version of the code that went through the internal software release process at NASA Ames (NEQAIR 2008), there have been significant updates to the physics in the code and the computational efficiency. NEQAIR v14.0 supersedes NEQAIR v13.2, v13.1 and the suite of NEQAIR2009 versions. These updates have predominantly been performed by Brett Cruden and Aaron Brandis from ERC Inc at NASA Ames Research Center in 2013 and 2014. A new naming convention is being adopted with this current release. The current and future versions of the code will be named NEQAIR vY.X. The Y will refer to a major release increment. Minor revisions and update releases will involve incrementing X. This is to keep NEQAIR more in line with common software release practices. NEQAIR v14.0 is a standalone software tool for line-by-line spectral computation of radiative intensities and/or radiative heat flux, with one-dimensional transport of radiation. In order to accomplish this, NEQAIR v14.0, as in previous versions, requires the specification of distances (in cm), temperatures (in K) and number densities (in parts/cc) of constituent species along lines of sight. Therefore, it is assumed that flow quantities have been extracted from flow fields computed using other tools, such as CFD codes like DPLR or LAURA, and that lines of sight have been constructed and written out in the format required by NEQAIR v14.0. There are two principal modes for running NEQAIR v14.0. In the first mode NEQAIR v14.0 is used as a tool for creating synthetic spectra of any desired resolution (including convolution with a specified instrument/slit function). The first mode is typically exercised in simulating/interpreting spectroscopic measurements of different sources (e.g. shock tube data, plasma torches, etc.). In the second mode, NEQAIR v14.0 is used as a radiative heat flux prediction tool for flight projects. Correspondingly, NEQAIR has also been used to simulate the radiance measured on previous flight missions. This report summarizes the database updates, corrections that have been made to the code, changes to input files, parallelization, the current usage recommendations, including test cases, and an indication of the performance enhancements achieved.

  13. A cost-effectiveness analysis of celecoxib compared with diclofenac in the treatment of pain in osteoarthritis (OA) within the Swedish health system using an adaptation of the NICE OA model.

    PubMed

    Brereton, Nicholas; Pennington, Becky; Ekelund, Mats; Akehurst, Ronald

    2014-09-01

    Celecoxib for the treatment of pain resulting from osteoarthritis (OA) was reviewed by the Tandvårds- och läkemedelsförmånsverket-Dental and Pharmaceutical Benefits Board (TLV) in Sweden in late 2010. This study aimed to evaluate the incremental cost-effectiveness ratio (ICER) of celecoxib plus a proton pump inhibitor (PPI) compared to diclofenac plus a PPI in a Swedish setting. The National Institute for Health and Care Excellence (NICE) in the UK developed a health economic model as part of its 2008 assessment of treatments for OA. In this analysis, the model was reconstructed and adapted to a Swedish perspective. Drug costs were updated using the TLV database. Adverse event costs were calculated using the regional price list of Southern Sweden and the standard treatment guidelines from the county council of Stockholm. Costs for treating cardiovascular (CV) events were taken from the Swedish DRG codes and the literature. Over a patient's lifetime, treatment with celecoxib plus a PPI was associated with a quality-adjusted life year (QALY) gain of 0.006 per patient when compared to diclofenac plus a PPI. There was an increase in discounted costs of 529 kr per patient, which resulted in an ICER of 82,313 kr ($12,141). Sensitivity analysis showed that treatment was more cost-effective in patients with an increased risk of bleeding or gastrointestinal (GI) complications. The results suggest that celecoxib plus a PPI is a cost-effective treatment for OA when compared to diclofenac plus a PPI. Treatment is shown to be more cost-effective in Sweden for patients with a high risk of bleeding or GI complications. It was in this population that the TLV gave a positive recommendation. There are known limitations on efficacy in the original NICE model.

  14. Solar Market Research and Analysis Email Updates | Solar Research | NREL

    Science.gov Websites

    Sign-up page for NREL's Solar Market Research and Analysis email updates; an email address must be submitted to subscribe.

  15. Cost-effectiveness analysis alongside clinical trials II-An ISPOR Good Research Practices Task Force report.

    PubMed

    Ramsey, Scott D; Willke, Richard J; Glick, Henry; Reed, Shelby D; Augustovski, Federico; Jonsson, Bengt; Briggs, Andrew; Sullivan, Sean D

    2015-03-01

    Clinical trials evaluating medicines, medical devices, and procedures now commonly assess the economic value of these interventions. The growing number of prospective clinical/economic trials reflects both widespread interest in economic information for new technologies and the regulatory and reimbursement requirements of many countries that now consider evidence of economic value along with clinical efficacy. As decision makers increasingly demand evidence of economic value for health care interventions, conducting high-quality economic analyses alongside clinical studies is desirable because they broaden the scope of information available on a particular intervention, and can efficiently provide timely information with high internal and, when designed and analyzed properly, reasonable external validity. In 2005, ISPOR published the Good Research Practices for Cost-Effectiveness Analysis Alongside Clinical Trials: The ISPOR RCT-CEA Task Force report. ISPOR initiated an update of the report in 2014 to include the methodological developments over the last 9 years. This report provides updated recommendations reflecting advances in several areas related to trial design, selecting data elements, database design and management, analysis, and reporting of results. Task force members note that trials should be designed to evaluate effectiveness (rather than efficacy) when possible, should include clinical outcome measures, and should obtain health resource use and health state utilities directly from study subjects. Collection of economic data should be fully integrated into the study. An incremental analysis should be conducted with an intention-to-treat approach, complemented by relevant subgroup analyses. Uncertainty should be characterized. Articles should adhere to established standards for reporting results of cost-effectiveness analyses. Economic studies alongside trials are complementary to other evaluations (e.g., modeling studies) as information for decision makers who consider evidence of economic value along with clinical efficacy when making resource allocation decisions. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  16. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
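
    The line-by-line variant hinges on incrementally updated image and noise statistics. A minimal sketch of such an update is shown below (it assumes the noise covariance is estimated from differences between adjacent scan lines, a common choice that is not necessarily the authors' exact formulation; class and variable names are illustrative):

      import numpy as np

      class IncrementalMNFStats:
          """Running image and noise covariance estimates, updated one scan line at a time."""

          def __init__(self, n_bands):
              self.n = 0
              self.mean = np.zeros(n_bands)
              self.scatter = np.zeros((n_bands, n_bands))        # accumulated signal scatter
              self.noise_scatter = np.zeros((n_bands, n_bands))  # accumulated noise scatter
              self.prev_line = None

          def update(self, line):
              """line: (n_pixels, n_bands) spectra from one scan line."""
              for x in line:                          # Welford-style covariance update
                  self.n += 1
                  delta = x - self.mean
                  self.mean += delta / self.n
                  self.scatter += np.outer(delta, x - self.mean)
              if self.prev_line is not None:          # noise estimated from line-to-line differences
                  diff = line - self.prev_line
                  self.noise_scatter += diff.T @ diff / 2.0
              self.prev_line = line

          def covariances(self):
              # Normalize accumulated scatter matrices into sample covariance estimates.
              n = max(self.n - 1, 1)
              return self.scatter / n, self.noise_scatter / n

    The MNF basis can then be recomputed, or refreshed every few lines, from these running covariances without revisiting previously processed lines.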

  17. Non-Deterministic Dynamic Instability of Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2004-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.

  18. Dynamic Probabilistic Instability of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2009-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties in that order.

  19. Incremental Feeding High-Pressure Sliding for Grain Refinement of Large-Scale Sheets: Application to Inconel 718

    NASA Astrophysics Data System (ADS)

    Takizawa, Yoichi; Sumikawa, Kosei; Watanabe, Kyohei; Masuda, Takahiro; Yumoto, Manabu; Kanai, Yuta; Otagiri, Yoshiharu; Horita, Zenji

    2018-03-01

    This study updates the process of high-pressure sliding (HPS), recently developed as a severe plastic deformation (SPD) process under high pressure for grain refinement of sheet samples. The updated version, which we call incremental feeding HPS (IF-HPS), consists of sliding for SPD and feeding for upsizing the SPD-processed area so that, without increasing the capacity of the processing facility, it is possible to cover a much larger area with an SPD-processed ultrafine-grained structure having a grain size of 120 nm. For the IF-HPS processing, anvils with flat surfaces but without grooves are used in an unconstrained condition, and the feeding distance is set equal to the deformed width. A Ni-based superalloy (Inconel 718) is processed by IF-HPS under 4 GPa at room temperature, and it is possible to obtain an SPD-processed sheet with dimensions of approximately 100 × 100 × 1 mm³. Strain distribution and evolution were examined by hardness measurement and by simulation using a finite element method. Tensile tests were conducted using specimens extracted from the IF-HPS-processed sheet. The advent of high-strain-rate superplasticity, with a total elongation of more than 400 pct, was confirmed by pulling the tensile specimens at an initial strain rate of 2.0 × 10⁻² s⁻¹ at a temperature as low as 1073 K. The formability of the IF-HPS-processed sheet was confirmed by successful cup forming. It was also confirmed that restoration after the superplastic deformation was feasible by applying the conventional heat treatment used for Inconel 718.

  20. Flood loss model transfer: on the value of additional data

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno

    2017-04-01

    The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage-influencing factors is expensive, and assessing the value of additional data in terms of model reliability and performance improvement is therefore of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer-aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013, mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data, which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model using Random Forests and (ii) a uni-variable stage-damage function. Both model approaches are embedded in a bootstrapping process which allows the uncertainty of model predictions to be evaluated. Predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions, mean squared error and sharpness about their precision, and hit rate is an indicator of model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
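
    As a schematic illustration of evaluating loss-model performance for incrementally increased amounts of training data within a bootstrap (a minimal sketch, not the study's actual pipeline; the Random Forest settings, the out-of-bag validation split, and all names are assumptions):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_absolute_error

      def incremental_learning_curve(X, y, sample_sizes, n_boot=100, seed=0):
          """Mean absolute error of a loss model as training data are incrementally added."""
          rng = np.random.default_rng(seed)
          curve = []
          for n in sample_sizes:
              errors = []
              for _ in range(n_boot):
                  train = rng.choice(len(X), size=n, replace=True)   # bootstrap sample of n loss records
                  test = np.setdiff1d(np.arange(len(X)), train)      # out-of-bag records for validation
                  model = RandomForestRegressor(n_estimators=100, random_state=0)
                  model.fit(X[train], y[train])
                  errors.append(mean_absolute_error(y[test], model.predict(X[test])))
              curve.append((n, float(np.mean(errors))))
          return curve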

  1. Precipitable water vapor budget associated with MJO represented in newly-released JRA-55 reanalysis data

    NASA Astrophysics Data System (ADS)

    Yokoi, S.

    2013-12-01

    The Japan Meteorological Agency (JMA) recently released a new reanalysis dataset, JRA-55, produced with a JMA operational prediction model and 4D-VAR data assimilation. To evaluate the merit of utilizing the JRA-55 dataset for investigating the dynamics of tropical intraseasonal variability (ISV), including the Madden-Julian Oscillation (MJO), this study examines the ISV-scale precipitable water vapor (PWV) budget over the period 1989-2012. The ISV-scale PWV anomaly related to the boreal-winter MJO propagates eastward along with precipitation, consistent with the SSM/I PWV product. Decomposition of the PWV tendency into the part simulated by the model and the analysis increment estimated by the data assimilation reveals that the model makes the PWV anomaly move eastward. On the other hand, the analysis increment exhibits positive values over the area where the PWV anomaly is positive, indicating that the model tends to damp the MJO signal. Note that the analysis increment over the Maritime Continent has a magnitude comparable to the model tendency. The positive analysis increment may mainly be caused by an excess of the precipitation anomaly with respect to the magnitude of the PWV anomaly. In addition to the boreal-winter MJO, this study also examines the PWV budget associated with the northward-propagating ISV during boreal summer and finds a similar relationship between the PWV anomaly and the analysis increment.
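
    Schematically, the budget decomposition used here splits the analyzed PWV tendency into the tendency simulated by the forecast model and the analysis increment contributed by the assimilation of observations (a generic form; the exact notation and any residual terms in the study are assumptions):

      \[
        \frac{\partial\,\mathrm{PWV}}{\partial t}
        \;=\; \left(\frac{\partial\,\mathrm{PWV}}{\partial t}\right)_{\mathrm{model}}
        \;+\; \frac{\Delta\,\mathrm{PWV}_{\mathrm{inc}}}{\Delta t}
      \]

    A positive increment where the PWV anomaly is already positive, as reported above, means the assimilation keeps adding moisture that the model alone would remove, i.e., the model damps the MJO moisture signal.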

  2. Constructing increment-decrement life tables.

    PubMed

    Schoen, R

    1975-05-01

    A life table model which can recognize increments (or entrants) as well as decrements has proven to be of considerable value in the analysis of marital status patterns, labor force participation patterns, and other areas of substantive interest. Nonetheless, relatively little work has been done on the methodology of increment-decrement (or combined) life tables. The present paper reviews the general, recursive solution of Schoen and Nelson (1974), develops explicit solutions for three cases of particular interest, and compares alternative approaches to the construction of increment-decrement tables.

  3. Aerodynamic Analyses and Database Development for Ares I Vehicle First Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Pei, Jing; Pinier, Jeremy T.; Klopfer, Goetz H.; Holland, Scott D.; Covell, Peter F.

    2011-01-01

    This paper presents the aerodynamic analysis and database development for first stage separation of the Ares I A106 crew launch vehicle configuration. Separate 6-DOF databases were created for the first stage and the upper stage, and each database consists of three components: (a) isolated or freestream coefficients, (b) power-off proximity increments, and (c) power-on proximity increments. The isolated and power-off incremental databases were developed using data from 1%-scale model tests in AEDC VKF Tunnel A. The power-on proximity increments were developed using OVERFLOW CFD solutions. The database also includes incremental coefficients for one-BDM and one-USM failure scenarios.
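
    The three-component structure lends itself to a simple additive buildup when the database is queried. The sketch below is schematic (the component lookup functions, and the flight/separation condition they are interpolated against, are assumptions rather than the actual database interface):

      def total_coefficient(isolated, prox_power_off, prox_power_on, condition):
          """Build up a 6-DOF aerodynamic coefficient from the three database components.

          Each argument is a lookup callable returning the freestream coefficient or
          proximity increment interpolated at the given flight/separation condition.
          """
          return (isolated(condition)
                  + prox_power_off(condition)   # power-off proximity increment
                  + prox_power_on(condition))   # power-on proximity increment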

  4. Incremental analysis of large elastic deformation of a rotating cylinder

    NASA Technical Reports Server (NTRS)

    Buchanan, G. R.

    1976-01-01

    The effect of finite deformation on a rotating, orthotropic cylinder was investigated using a general incremental theory. The incremental equations of motion are developed using the variational principle. The governing equations are derived using the principle of virtual work for a body with initial stress. The governing equations are reduced to those for the title problem and a numerical solution is obtained using finite difference approximations. Since the problem is defined in terms of one independent space coordinate, the finite difference grid can be modified as the incremental deformation occurs without serious numerical difficulties. The nonlinear problem is solved incrementally by accumulating a series of linear solutions.
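
    The incremental strategy amounts to linearizing about the current configuration, solving for the increment, and summing the increments. The loop below is a generic sketch under assumed helper functions, not the paper's finite-difference formulation:

      import numpy as np

      def solve_incrementally(assemble_linearized, n_steps, n_dof):
          """Accumulate a nonlinear solution from a sequence of linearized solves.

          assemble_linearized(u, step) -> (K, f): tangent matrix and incremental load
          vector linearized about the current configuration u at the given step.
          """
          u = np.zeros(n_dof)
          for step in range(n_steps):
              K, f = assemble_linearized(u, step)   # linearize about the current state
              du = np.linalg.solve(K, f)            # linear solve for the increment
              u += du                               # total solution is the running sum
          return u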

  5. Evolution of cooperative behavior in simulation agents

    NASA Astrophysics Data System (ADS)

    Stroud, Phillip D.

    1998-03-01

    A simulated automobile factory paint shop is used as a testbed for exploring the emulation of human decision-making behavior. A discrete-event simulation of the paint shop as a collection of interacting Java actors is described. An evolutionary cognitive architecture is under development for building software actors to emulate humans in simulations of human-dominated complex systems. In this paper, the cognitive architecture is extended by implementing a persistent population of trial behaviors with an incremental fitness valuation update strategy, and by allowing a group of cognitive actors to share information. A proof-of-principle demonstration is presented.

  6. Cargo Movement Operations System (CMOS) Updated Software Test Report. Increment I

    DTIC Science & Technology

    1991-02-19

    Excerpt from a Data Item Discrepancy Worksheet (CDRL number A010-02, dated 02/19/91; originator Gerald T. Love, SAIC, telephone 272-2999; originator control number STR1-0002; comments marked substantive). Comments: 1. Page C-14, paragraph TD1251.03: change "price" to "piece". 2. Page C-19, paragraph TD1323.04: change "requried" to "required". 3. Page D-53, paragraph TD1322.03: change the SPCR number to 90122064.

  7. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed with the claim of increasing the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
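
    A common single-pass clustering scheme, sketched below, assigns each incoming data point to the nearest existing centroid and updates that centroid incrementally, opening a new cluster when no centroid is close enough. This is a generic illustration of the idea and not necessarily the authors' exact One-Pass algorithm; the distance threshold and all names are assumptions:

      import numpy as np

      def one_pass_cluster(points, threshold):
          """Single-pass clustering with incremental centroid updates."""
          centroids, counts, labels = [], [], []
          for x in points:
              x = np.asarray(x, dtype=float)
              if centroids:
                  dists = [np.linalg.norm(x - c) for c in centroids]
                  k = int(np.argmin(dists))
              if not centroids or dists[k] > threshold:
                  centroids.append(x.copy())            # open a new cluster for this point
                  counts.append(1)
                  labels.append(len(centroids) - 1)
              else:
                  counts[k] += 1
                  centroids[k] += (x - centroids[k]) / counts[k]   # incremental mean update
                  labels.append(k)
          return centroids, labels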

  8. Size, Stability and Incremental Budgeting Outcomes in Public Universities.

    ERIC Educational Resources Information Center

    Schick, Allen G.; Hills, Frederick S.

    1982-01-01

    Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)

  9. Who will have health insurance in the future? An updated projection.

    PubMed

    Young, Richard A; DeVoe, Jennifer E

    2012-01-01

    The passage of the 2010 Patient Protection and Affordable Care Act (PPACA) in the United States put the issues of health care reform and health care costs back in the national spotlight. DeVoe and colleagues previously estimated that the cost of a family health insurance premium would equal the median household income by the year 2025. A slowdown in health care spending tied to the recent economic downturn and the passage of the PPACA occurred after this model was published. In this updated model, we estimate that this threshold will be crossed in 2033, and under favorable assumptions the PPACA may extend this date only to 2037. Continuing to make incremental changes in US health policy will likely not bend the cost curve, which has eluded policy makers for the past 50 years. Private health insurance will become increasingly unaffordable to low-to-middle-income Americans unless major changes are made in the US health care system.

  10. Sculling Compensation Algorithm for SINS Based on Two-Time Scale Perturbation Model of Inertial Measurements

    PubMed Central

    Wang, Lingling; Fu, Li

    2018-01-01

    In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for the strapdown inertial navigation system (SINS), using angular rate and specific force measurements as inputs, is proposed in this paper. First, the sculling error formula in the incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time-scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time-scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, unlike conventional algorithms, which are typically simulated in a pure sculling circumstance. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during the velocity update, with the advantage of a lower computational load compared with conventional algorithms. PMID:29346323

  11. Combining Multi-Source Remotely Sensed Data and a Process-Based Model for Forest Aboveground Biomass Updating.

    PubMed

    Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto

    2017-09-08

    Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% ( n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% ( n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
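
    The updating scheme is essentially a running sum: the baseline AGB map is rolled forward by adding each year's biomass increment derived from the simulated NPP. The sketch below is illustrative only (the baseline year, the NPP-to-ABI conversion, and all names are assumptions; the BEPS simulation itself is treated as a given input):

      def update_agb(agb_baseline, annual_biomass_increment, baseline_year=2008):
          """Roll a per-pixel AGB map forward by adding annual biomass increments (ABI).

          agb_baseline             : per-pixel AGB array for the baseline year
          annual_biomass_increment : mapping year -> per-pixel ABI derived from simulated NPP
          returns a mapping year -> updated AGB map (AGB_t = AGB_{t-1} + ABI_t)
          """
          agb = {baseline_year: agb_baseline}
          current = agb_baseline
          for year in sorted(annual_biomass_increment):
              current = current + annual_biomass_increment[year]
              agb[year] = current
          return agb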

  12. Combining Multi-Source Remotely Sensed Data and a Process-Based Model for Forest Aboveground Biomass Updating

    PubMed Central

    Lu, Xiaoman; Zheng, Guang; Miller, Colton

    2017-01-01

    Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35, p < 0.05, RMSE = 2.20 kg/m2) and 85% (n = 100, p < 0.01, RMSE = 1.71 kg/m2) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB. PMID:28885556

  13. LANDFIRE 2015 Remap – Utilization of Remotely Sensed Data to Classify Existing Vegetation Type and Structure to Support Strategic Planning and Tactical Response

    USGS Publications Warehouse

    Picotte, Joshua J.; Long, Jordan; Peterson, Birgit; Nelson, Kurtis

    2017-01-01

    The LANDFIRE Program produces national scale vegetation, fuels, fire regimes, and landscape disturbance data for the entire U.S. These data products have been used to model the potential impacts of fire on the landscape [1], the wildfire risks associated with land and resource management [2, 3], and those near population centers and accompanying Wildland Urban Interface zones [4], as well as many other applications. The initial LANDFIRE National Existing Vegetation Type (EVT) and vegetation structure layers, including vegetation percent cover and height, were mapped circa 2001 and released in 2009 [5]. Each EVT is representative of the dominant plant community within a given area. The EVT layer has since been updated by identifying areas of landscape change and modifying the vegetation types utilizing a series of rules that consider the disturbance type, severity of disturbance, and time since disturbance [6, 7]. Non-disturbed areas were adjusted for vegetation growth and succession. LANDFIRE vegetation structure layers also have been updated by using data modeling techniques [see 6 for a full description]. The subsequent updated versions of LANDFIRE include LANDFIRE 2008, 2010, 2012, and LANDFIRE 2014 is being incrementally released, with all data being released in early 2017. Additionally, a comprehensive remap of the baseline data, LANDFIRE 2015 Remap, is being prototyped, and production is tentatively planned to begin in early 2017 to provide a more current baseline for future updates.

  14. Inventory-based sensitivity analysis of the Large Tree Diameter Growth Submodel of the Southern Variant of the FVS

    Treesearch

    Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long

    2008-01-01

    Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...

  15. Analysis and application of two-current-source circuit as a signal conditioner for resistive sensors

    NASA Astrophysics Data System (ADS)

    Idzkowski, Adam; Gołębiowski, Jerzy; Walendziuk, Wojciech

    2017-05-01

    The article presents an analysis of the metrological properties of a circuit supplied by two current sources. It includes precise and simplified equations for the two circuit output voltages as functions of the relative resistance increments of the sensors. Moreover, graphs showing the nonlinearity coefficients of both output voltages for two widely varying resistance increments are presented. Graphs of transfer resistances as functions of the relative increments of sensor resistance were also created. The article also contains a description of a bridge-based circuit realization with the use of a computer and a data acquisition (DAQ) card. Laboratory measurements of the difference and sum of the relative resistance increments of two resistance decade boxes were carried out indirectly with the created measurement system. Measurement errors were also calculated and are included in the article.

  16. Martingales, nonstationary increments, and the efficient market hypothesis

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.

  17. Real-time probabilistic covariance tracking with efficient model update.

    PubMed

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.

  18. A hybrid neural network model for noisy data regression.

    PubMed

    Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M

    2004-04-01

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (FA ART) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information of the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets have been conducted to assess and compare effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA in noisy data regression problems.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
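
    As an illustration of the kind of one-pass, pairwise-mergeable update these formulas provide (a minimal sketch limited to the mean and second central moment; the cited work extends this to arbitrary order with weighted and compound variants):

      import numpy as np

      def merge_moments(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
          """Combine counts, means, and second central moments (M2) of two partitions.

          M2 is the sum of squared deviations from the mean (variance = M2 / (n - 1)).
          The pairwise form lets partial results from parallel workers be merged
          without revisiting the raw data.
          """
          n = n_a + n_b
          delta = mean_b - mean_a
          mean = mean_a + delta * n_b / n
          m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
          return n, mean, m2

      # Example: merging running statistics from two data chunks.
      a, b = np.array([1.0, 2.0, 3.0]), np.array([10.0, 20.0])
      stats_a = (len(a), a.mean(), ((a - a.mean()) ** 2).sum())
      stats_b = (len(b), b.mean(), ((b - b.mean()) ** 2).sum())
      n, mean, m2 = merge_moments(*stats_a, *stats_b)
      assert np.isclose(mean, np.concatenate([a, b]).mean())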

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.

  1. International Space Station Internal Thermal Control System Cold Plate/Fluid-Stability Test: Two Year Update

    NASA Technical Reports Server (NTRS)

    Wieland, Paul; Holt, Mike; Roman, Monsi; Cole, Harold; Daugherty, Steve

    2003-01-01

    Operation of the Internal Thermal Control System (ITCS) Cold Plate/Fluid-Stability Test Facility commenced on September 5, 2000. The facility was intended to provide advance indication of potential problems on board the International Space Station (ISS) and was designed: 1) To be materially similar to the flight ITCS. 2) To allow for monitoring during operation. 3) To run continuously for three years. During the first two years of operation the conditions of the coolant and components were remarkably stable. During this same period of time, the conditions of the ISS ITCS significantly diverged from the desired state. Due to this divergence, the test facility has not been providing information useful for predicting the flight ITCS condition. Results of the first two years are compared with flight conditions over the same time period, showing the similarities and divergences. To address the divergences, the test facility was modified incrementally to more closely match the flight conditions, and to gain insight into the reasons for the divergence. Results of these incremental changes are discussed and provide insight into the development of the conditions on orbit.

  2. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    PubMed

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences in the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited significant accumulation during 2010-2014. The spatiotemporal Kriging (STK) technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious increasing tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis (SOA) was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression (PCA-MLR) was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial and agricultural sources. In summary, STK and SOA were used to obtain the spatial distribution of heavy metal concentration increments in soils, and PCA-MLR was used to quantify the source apportionment of those increments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Cost-effectiveness of an Evidence-Based Childhood Asthma Intervention in Real-World Primary Care Settings.

    PubMed

    Dor, Avi; Luo, Qian; Gerstein, Maya Tuchman; Malveaux, Floyd; Mitchell, Herman; Markus, Anne Rossier

    We present an incremental cost-effectiveness analysis of an evidence-based childhood asthma intervention (Community Healthcare for Asthma Management and Prevention of Symptoms [CHAMPS]) to usual management of childhood asthma in community health centers. Data used in the analysis include household surveys, Medicaid insurance claims, and community health center expenditure reports. We combined our incremental cost-effectiveness analysis with a difference-in-differences multivariate regression framework. We found that CHAMPS reduced symptom days by 29.75 days per child-year and was cost-effective (incremental cost-effectiveness ratio: $28.76 per symptom-free days). Most of the benefits were due to reductions in direct medical costs. Indirect benefits from increased household productivity were relatively small.

  4. Clinical effectiveness and cost-effectiveness of laparoscopic surgery for colorectal cancer: systematic reviews and economic evaluation.

    PubMed

    Murray, A; Lourenco, T; de Verteuil, R; Hernandez, R; Fraser, C; McKinley, A; Krukowski, Z; Vale, L; Grant, A

    2006-11-01

    The aim of this study was to determine the clinical effectiveness and cost-effectiveness of laparoscopic, laparoscopically assisted (hereafter together described as laparoscopic surgery) and hand-assisted laparoscopic surgery (HALS) in comparison with open surgery for the treatment of colorectal cancer. Electronic databases were searched from 2000 to May 2005. A review of economic evaluations was undertaken by the National Institute for Health and Clinical Excellence in 2001. This review was updated from 2000 until July 2005. Data from selected studies were extracted and assessed. Dichotomous outcome data from individual trials were combined using the relative risk method and continuous outcomes were combined using the Mantel-Haenszel weighted mean difference method. Summaries of the results from individual patient data (IPD) meta-analyses were also presented. An economic evaluation was also carried out using a Markov model incorporating the data from the systematic review. The results were first presented as a balance sheet for comparison of the surgical techniques. It was then used to estimate cost-effectiveness measured in terms of incremental cost per life-year gained and incremental cost per quality-adjusted life-year (QALY) for a time horizon up to 25 years. Forty-six reports on 20 studies [19 randomised controlled trials (RCTs) and one IPD meta-analysis] were included in the review of clinical effectiveness. The RCTs were of generally moderate quality with the number of participants varying between 16 and 1082, with 10 having less than 100 participants. The total numbers of trial participants who underwent laparoscopic or open surgery were 2429 and 2139, respectively. A systematic review of four papers suggested that laparoscopic surgery is more costly than open surgery. However, the data they provided on effectiveness was poorer than the evidence from the review of effectiveness. The estimates from the systematic review of clinical effectiveness were incorporated into a Markov model used to estimate cost-effectiveness for a time horizon of up to 25 years. In terms of incremental cost per life-year, laparoscopic surgery was found to be more costly and no more effective than open surgery. With respect to incremental cost per QALY, few data were available to differentiate between laparoscopic and open surgery. The results of the base-case analysis indicate that there is an approximately 40% chance that laparoscopic surgery is the more cost-effective intervention at a threshold willingness to pay for a QALY of pound 30,000. A second analysis assuming equal mortality and disease-free survival found that there was an approximately 50% likelihood at a similar threshold value. Broadly similar results were found in the sensitivity analyses. A threshold analysis was performed to investigate the magnitude of QALY gain associated with quicker recovery following laparoscopic surgery required to provide an incremental cost per QALY of pound 30,000. The implied number of additional QALYs required would be 0.009-0.010 compared with open surgery. Laparoscopic resection is associated with a quicker recovery (shorter time to return to usual activities and length of hospitalisation) and no evidence of a difference in mortality or disease-free survival up to 3 years following surgery. However, operation times are longer and a significant number of procedures initiated laparoscopically may need to be converted to open surgery. 
The rate of conversion may be dependent on experience in terms of both patient selection and performing the technique. Laparoscopic resection appears more costly to the health service than open resection, with an estimated extra total cost of between £250 and £300 per patient. In terms of relative cost-effectiveness, laparoscopic resection is associated with a modest additional cost, short-term benefits associated with more rapid recovery and similar long-term outcomes in terms of survival and cure rates up to 3 years. Assuming equivalence of long-term outcomes, a judgement is required as to whether the benefits associated with earlier recovery are worth this extra cost. The long-term follow-up of the RCT cohorts would be very useful further research and ideally these data should be incorporated into a wider IPD meta-analysis. Data on the long-term complications of surgery such as incisional hernias and differences in outcomes such as persisting pain would also be valuable. Once available, further data on both costs and utilities should be included in an updated model. At this point, further consideration should then be given as to whether additional data should be collected within ongoing trials. Few data were available to assess the relative merits of HALS. Ideally, there should be more data from methodologically sound RCTs. Further research is needed on whether the balance of advantages and disadvantages of laparoscopic surgery varies within subgroups based on the different stages and locations of disease. Research relating to the effect of experience on performance is also required.

  5. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
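
    A minimal sketch of the online correction idea described above, assuming a toy state vector and an archive of 6-hr analysis increments (the names and the simple forward step are illustrative, not the GFS implementation): the time-mean increment divided by the assimilation window gives a bias forcing that is added to the model tendency at every step.

      import numpy as np

      def estimate_bias_forcing(analysis_increments, window_hours=6.0):
          # Time-mean analysis increment over a training period, divided by the
          # assimilation window, i.e. an estimate of the model bias tendency.
          return np.mean(analysis_increments, axis=0) / window_hours

      def step_with_online_correction(state, tendency, bias_forcing, dt_hours):
          # One explicit model step with the bias forcing added to the tendency.
          return state + dt_hours * (tendency(state) + bias_forcing)

      # Toy usage: 120 archived 6-hr increments for a 3-variable state (fabricated numbers).
      rng = np.random.default_rng(0)
      increments = rng.normal(0.2, 0.05, size=(120, 3))
      forcing = estimate_bias_forcing(increments)
      state = np.zeros(3)
      state = step_with_online_correction(state, lambda x: -0.1 * x, forcing, dt_hours=1.0)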

  6. Longitudinal Aerodynamic Modeling of the Adaptive Compliant Trailing Edge Flaps on a GIII Airplane and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.

    2016-01-01

    A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.

  7. Parkinson Symptoms and Health Related Quality of Life as Predictors of Costs: A Longitudinal Observational Study with Linear Mixed Model Analysis

    PubMed Central

    Martinez-Martín, Pablo; Rodriguez-Blazquez, Carmen; Paz, Silvia; Forjaz, Maria João; Frades-Payo, Belén; Cubo, Esther; de Pedro-Cuesta, Jesús; Lizán, Luis

    2015-01-01

    Objective To estimate the extent to which Parkinson’s disease (PD) symptoms and health-related quality of life (HRQoL) determined PD costs over a 4-year period. Materials and Methods Data, collected during a 3-month period each year for 4 years from the ELEP study, included sociodemographic, clinical and resource-use information. Costs were calculated yearly, as mean 3-month costs/patient, and updated to 2012 Spanish €. Linear mixed models were fitted to analyze total, direct and indirect costs based on symptoms and HRQoL. Results One hundred and seventy-four patients were included. Mean (SD) age: 63 (11) years; mean (SD) disease duration: 8 (6) years. Ninety-three percent were HY I, II or III (mild or moderate disease). Forty-nine percent remained in the same stage during the study period. Clinical evaluation and HRQoL scales showed relatively slight changes over time, demonstrating a stable group overall. Mean (SD) PD total costs increased by 92.5%, from €2,082.17 (€2,889.86) in year 1 to €4,008.6 (€7,757.35) in year 4. Total, direct and indirect costs increased by 45.96%, 35.63% and 69.69%, respectively, for mild disease, whereas they increased by 166.52% (total), 55.68% (direct) and 347.85% (indirect) in patients with moderate PD. For severe patients, costs remained almost the same throughout the study. For each additional point on the SCOPA-Motor scale, total costs increased by €75.72 (p = 0.0174); for each additional point on the SCOPA-Motor and the SCOPA-COG, direct costs increased by €49.21 (p = 0.0094) and €44.81 (p = 0.0404), respectively; and for each extra point on the pain scale, indirect costs increased by €16.31 (p = 0.0228). Conclusions PD is an expensive disease in Spain. Disease progression and severity, as well as motor and cognitive dysfunctions, are major drivers of cost increments. Therapeutic measures aimed at controlling progression and symptoms could help contain disease expenses. PMID:26698860

  8. Parkinson Symptoms and Health Related Quality of Life as Predictors of Costs: A Longitudinal Observational Study with Linear Mixed Model Analysis.

    PubMed

    Martinez-Martín, Pablo; Rodriguez-Blazquez, Carmen; Paz, Silvia; Forjaz, Maria João; Frades-Payo, Belén; Cubo, Esther; de Pedro-Cuesta, Jesús; Lizán, Luis

    2015-01-01

    To estimate the extent to which Parkinson's disease (PD) symptoms and health-related quality of life (HRQoL) determined PD costs over a 4-year period. Data, collected during a 3-month period each year for 4 years from the ELEP study, included sociodemographic, clinical and resource-use information. Costs were calculated yearly, as mean 3-month costs/patient, and updated to 2012 Spanish €. Linear mixed models were fitted to analyze total, direct and indirect costs based on symptoms and HRQoL. One hundred and seventy-four patients were included. Mean (SD) age: 63 (11) years; mean (SD) disease duration: 8 (6) years. Ninety-three percent were HY I, II or III (mild or moderate disease). Forty-nine percent remained in the same stage during the study period. Clinical evaluation and HRQoL scales showed relatively slight changes over time, demonstrating a stable group overall. Mean (SD) PD total costs increased by 92.5%, from € 2,082.17 (€ 2,889.86) in year 1 to € 4,008.6 (€ 7,757.35) in year 4. Total, direct and indirect costs increased by 45.96%, 35.63% and 69.69%, respectively, for mild disease, whereas they increased by 166.52% (total), 55.68% (direct) and 347.85% (indirect) in patients with moderate PD. For severe patients, costs remained almost the same throughout the study. For each additional point on the SCOPA-Motor scale, total costs increased by € 75.72 (p = 0.0174); for each additional point on the SCOPA-Motor and the SCOPA-COG, direct costs increased by € 49.21 (p = 0.0094) and € 44.81 (p = 0.0404), respectively; and for each extra point on the pain scale, indirect costs increased by € 16.31 (p = 0.0228). PD is an expensive disease in Spain. Disease progression and severity, as well as motor and cognitive dysfunctions, are major drivers of cost increments. Therapeutic measures aimed at controlling progression and symptoms could help contain disease expenses.

  9. A General Interface Method for Aeroelastic Analysis of Aircraft

    NASA Technical Reports Server (NTRS)

    Tzong, T.; Chen, H. H.; Chang, K. C.; Wu, T.; Cebeci, T.

    1996-01-01

    The aeroelastic analysis of an aircraft requires an accurate and efficient procedure to couple aerodynamics and structures. The procedure needs an interface method to bridge the gap between the aerodynamic and structural models in order to transform loads and displacements. Such an interface method is described in this report. This interface method transforms loads computed by any aerodynamic code to a structural finite element (FE) model and converts the displacements from the FE model to the aerodynamic model. The approach is based on FE technology in which virtual work is employed to transform the aerodynamic pressures into FE nodal forces. The displacements at the FE nodes are then converted back to aerodynamic grid points on the aircraft surface through the reciprocal theorem in structural engineering. The method accommodates both high- and low-fidelity versions of either model and does not require intermediate modeling. In addition, the method performs the conversion of loads and displacements directly between each individual aerodynamic grid point and its corresponding structural finite element and, hence, is very efficient for large aircraft models. This report also describes the application of this aero-structure interface method to a simple wing and an MD-90 wing. The results show that the aeroelastic effect is very important. For the simple wing, both linear and nonlinear approaches are used. In the linear approach, the deformation of the structural model is considered small, and the loads from the deformed aerodynamic model are applied to the original geometry of the structure. In the nonlinear approach, the geometry of the structure and its stiffness matrix are updated in every iteration, and the increments of loads from the previous iteration are applied to the new structural geometry in order to compute the displacement increments. Additional studies to apply the aero-structure interaction procedure to more complicated geometry will be conducted in the second phase of the present contract.
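
    The load/displacement transfer described above can be illustrated with the usual virtual-work-consistent pairing: if aerodynamic surface displacements are interpolated from structural nodes as u_a = H u_s, conservation of virtual work gives the structural nodal forces as F_s = H^T F_a. The sketch below uses a made-up interpolation matrix H; in the report's method H would come from finite element shape functions evaluated at the aerodynamic grid points.

      import numpy as np

      # Hypothetical interpolation matrix mapping 2 structural DOFs to 3 aerodynamic points.
      H = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0]])

      def loads_to_structure(f_aero, H):
          # Virtual-work-consistent transfer of aerodynamic point loads to structural DOFs.
          return H.T @ f_aero

      def displacements_to_aero(u_struct, H):
          # Interpolate structural displacements back to the aerodynamic grid points.
          return H @ u_struct

      f_aero = np.array([10.0, 20.0, 30.0])      # pressures already integrated to point loads
      f_struct = loads_to_structure(f_aero, H)   # -> [20., 40.]
      u_struct = np.array([0.01, 0.03])
      u_aero = displacements_to_aero(u_struct, H)
      # Virtual work is preserved by the transposed pair of operations.
      assert np.isclose(f_aero @ u_aero, f_struct @ u_struct)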

  10. Neural Basis of Strategic Decision Making

    PubMed Central

    Lee, Daeyeol; Seo, Hyojung

    2015-01-01

    Human choice behaviors during social interactions often deviate from the predictions of game theory. This might arise partly from the limitations in cognitive abilities necessary for recursive reasoning about the behaviors of others. In addition, during iterative social interactions, choices might change dynamically, as knowledge about the intentions of others and estimates for choice outcomes are incrementally updated via reinforcement learning. Some of the brain circuits utilized during social decision making might be general-purpose and contribute to isomorphic individual and social decision making. By contrast, regions in the medial prefrontal cortex and temporal parietal junction might be recruited for cognitive processes unique to social decision making. PMID:26688301

  11. Present opto-mechanical design status of NFIRAOS

    NASA Astrophysics Data System (ADS)

    Byrnes, Peter W. G.; Atwood, Jenny; Boucher, Marc-André; Fitzsimmons, Joeleff; Hill, Alexis; Herriot, Glen; Spanò, Paolo; Szeto, Kei; Wevers, Ivan

    2014-07-01

    This paper describes the current opto-mechanical design of NFIRAOS (Narrow Field InfraRed Adaptive Optics System) for the Thirty Meter Telescope (TMT). The preliminary design update review for NFIRAOS was successfully held in December 2011, and incremental design progress has since occurred on several fronts. The majority of NFIRAOS is housed within an insulated and cooled enclosure, and operates at -30 C to reduce background emissivity. The cold optomechanics are attached to a space-frame structure, kinematically supported by bipods that penetrate the insulated enclosure. The bipods are attached to an exo-structure at ambient temperature, which also supports up to three client science instruments and a science calibration unit.

  12. Cargo Movement Operations System (CMOS). Updated Software Requirements Specifications, Increment 2, (Communications CSCI)

    DTIC Science & Technology

    1990-11-14


  13. Calculation of Growth Stress in SiO2 Scales Formed by Oxidation of SiC Fibers (PREPRINT)

    DTIC Science & Technology

    2012-07-01

    Poisson effect. Tensile hoop stresses can be >2 GPa for thick scales. Effects of different fiber radii on growth stresses are examined...original fiber radius and Ω is the SiC/SiO2 molar volume ratio. The outer radius of the SiO2 scale (c) is (Fig. 1): c = b + w...and νSiO2 are Poisson's ratios for the SiC fiber and the SiO2 scale. Stresses in older increments (j = i-2 to j = 0) are updated with the stress values

  14. Formal Semantics and Implementation of BPMN 2.0 Inclusive Gateways

    NASA Astrophysics Data System (ADS)

    Christiansen, David Raymond; Carbone, Marco; Hildebrandt, Thomas

    We present the first direct formalization of the semantics of inclusive gateways as described in the Business Process Modeling Notation (BPMN) 2.0 Beta 1 specification. The formal semantics is given for a minimal subset of BPMN 2.0 containing just the inclusive and exclusive gateways and the start and stop events. By focusing on this subset we achieve a simple graph model that highlights the particular non-local features of the inclusive gateway semantics. We sketch two ways of implementing the semantics using algorithms based on incrementally updated data structures and also discuss distributed communication-based implementations of the two algorithms.

  15. Theater Medical Information Program Joint Increment 2 (TMIP J Inc 2)

    DTIC Science & Technology

    2016-03-01

    ...the Full Deployment Decision (FDD), the TMIP-J Increment 2 Economic Analysis was approved on December 6, 2013. The USD(AT&L) signed an Acquisition Decision Memorandum (ADM) on December 23, 2013 approving FDD for TMIP-J Increment 2 and establishing the Full Deployment Objective and Threshold dates as

  16. Cost-Effectiveness of Endovascular Stroke Therapy: A Patient Subgroup Analysis From a US Healthcare Perspective.

    PubMed

    Kunz, Wolfgang G; Hunink, M G Myriam; Sommer, Wieland H; Beyer, Sebastian E; Meinel, Felix G; Dorn, Franziska; Wirth, Stefan; Reiser, Maximilian F; Ertl-Wagner, Birgit; Thierfelder, Kolja M

    2016-11-01

    Endovascular therapy in addition to standard care (EVT+SC) has been demonstrated to be more effective than SC in acute ischemic large vessel occlusion stroke. Our aim was to determine the cost-effectiveness of EVT+SC depending on patients' initial National Institutes of Health Stroke Scale (NIHSS) score, time from symptom onset, Alberta Stroke Program Early CT Score (ASPECTS), and occlusion location. A decision model based on Markov simulations estimated lifetime costs and quality-adjusted life years (QALYs) associated with both strategies applied in a US setting. Model input parameters were obtained from the literature, including recently pooled outcome data of 5 randomized controlled trials (ESCAPE [Endovascular Treatment for Small Core and Proximal Occlusion Ischemic Stroke], EXTEND-IA [Extending the Time for Thrombolysis in Emergency Neurological Deficits-Intra-Arterial], MR CLEAN [Multicenter Randomized Clinical Trial of Endovascular Treatment for Acute Ischemic Stroke in the Netherlands], REVASCAT [Randomized Trial of Revascularization With Solitaire FR Device Versus Best Medical Therapy in the Treatment of Acute Stroke Due to Anterior Circulation Large Vessel Occlusion Presenting Within 8 Hours of Symptom Onset], and SWIFT PRIME [Solitaire With the Intention for Thrombectomy as Primary Endovascular Treatment]). Probabilistic sensitivity analysis was performed to estimate uncertainty of the model results. Net monetary benefits, incremental costs, incremental effectiveness, and incremental cost-effectiveness ratios were derived from the probabilistic sensitivity analysis. The willingness-to-pay was set to $50 000/QALY. Overall, EVT+SC was cost-effective compared with SC (incremental cost: $4938, incremental effectiveness: 1.59 QALYs, and incremental cost-effectiveness ratio: $3110/QALY) in 100% of simulations. In all patient subgroups, EVT+SC led to gained QALYs (range: 0.47-2.12), and mean incremental cost-effectiveness ratios were considered cost-effective. However, subgroups with ASPECTS ≤5 or with M2 occlusions showed considerably higher incremental cost-effectiveness ratios ($14 273/QALY and $28 812/QALY, respectively) and only reached suboptimal acceptability in the probabilistic sensitivity analysis (75.5% and 59.4%, respectively). All other subgroups had acceptability rates of 90% to 100%. EVT+SC is cost-effective in most subgroups. In patients with ASPECTS ≤5 or with M2 occlusions, cost-effectiveness remains uncertain based on current data. © 2016 American Heart Association, Inc.
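
    The headline numbers follow directly from the definitions of the incremental cost-effectiveness ratio (ICER = incremental cost / incremental QALYs) and the net monetary benefit (NMB = WTP x incremental QALYs - incremental cost); the snippet below simply plugs in the base-case values quoted above and is not the underlying Markov model.

      def icer(delta_cost, delta_qaly):
          # Incremental cost-effectiveness ratio in $ per QALY.
          return delta_cost / delta_qaly

      def net_monetary_benefit(delta_cost, delta_qaly, wtp):
          # Incremental net monetary benefit at a willingness-to-pay threshold.
          return wtp * delta_qaly - delta_cost

      delta_cost, delta_qaly, wtp = 4938.0, 1.59, 50_000.0
      print(round(icer(delta_cost, delta_qaly)))                 # ~3106 $/QALY (abstract rounds to 3110)
      print(net_monetary_benefit(delta_cost, delta_qaly, wtp))   # positive, i.e. cost-effective at $50,000/QALY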

  17. On the Discovery of Evolving Truth

    PubMed Central

    Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei

    2015-01-01

    In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
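
    A minimal sketch of the incremental pattern (illustrative only, not the paper's estimator): when a new batch of claims arrives, truths are re-estimated as reliability-weighted votes, and each source's weight is then refreshed from its running agreement with those truths, so neither truths nor weights are recomputed from scratch.

      from collections import defaultdict

      class IncrementalTruthDiscovery:
          # Toy incremental truth/source-weight updater.
          def __init__(self):
              self.weights = defaultdict(lambda: 1.0)   # source -> reliability weight
              self.hits = defaultdict(lambda: 1.0)      # smoothed agreement counts
              self.total = defaultdict(lambda: 2.0)

          def update(self, claims):
              # claims: dict mapping object -> list of (source, value) seen in this time step.
              truths = {}
              for obj, obs in claims.items():
                  votes = defaultdict(float)
                  for src, val in obs:
                      votes[val] += self.weights[src]
                  truths[obj] = max(votes, key=votes.get)     # weighted majority as the truth
              for obj, obs in claims.items():                 # incremental source-weight update
                  for src, val in obs:
                      self.total[src] += 1.0
                      if val == truths[obj]:
                          self.hits[src] += 1.0
                      self.weights[src] = self.hits[src] / self.total[src]
              return truths

      itd = IncrementalTruthDiscovery()
      print(itd.update({"city_population": [("A", 8.4e6), ("B", 8.4e6), ("C", 1.0e6)]}))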

  18. Possibilities of the particle finite element method for fluid-soil-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Oñate, Eugenio; Celigueta, Miguel Angel; Idelsohn, Sergio R.; Salazar, Fernando; Suárez, Benjamín

    2011-09-01

    We present some developments in the particle finite element method (PFEM) for analysis of complex coupled problems in mechanics involving fluid-soil-structure interaction (FSSI). The PFEM uses an updated Lagrangian description to model the motion of nodes (particles) in both the fluid and the solid domains (the latter including soil/rock and structures). A mesh connects the particles (nodes), defining the discretized domain where the governing equations for each of the constituent materials are solved as in the standard FEM. Stabilization for dealing with an incompressible continuum is introduced via the finite calculus method. An incremental iterative scheme for the solution of the nonlinear transient coupled FSSI problem is described. The procedure to model frictional contact conditions and material erosion at fluid-solid and solid-solid interfaces is described. We present several examples of application of the PFEM to solve FSSI problems, such as the motion of rocks by water streams, the erosion of a river bed adjacent to a bridge foundation, the stability of breakwaters and constructions under sea waves, and the study of landslides.

  19. New frontiers for health information systems using Epi Info in developing countries: structured application framework for Epi Info (SAFE).

    PubMed

    Ma, J; Otten, M; Kamadjeu, R; Mir, R; Rosencrans, L; McLaughlin, S; Yoon, S

    2008-04-01

    For more than two decades, Epi Info software has been used to meet the data management, analysis, and mapping needs of public health professionals in more than 181 countries and 13 languages. Until now, most Epi Info systems have been relatively simple, mainly because of a lack of detailed and structured guidance for developing complex systems. We created the structured application framework for Epi Info (SAFE), which is a set of guidelines that allows developers to create both simple and complex information systems using accepted good programming practices. This has resulted in application code blocks that are re-useable and easy to maintain, modify, and enhance. The flexibility of SAFE allows various aggregate and case-based application modules to be rapidly created, combined, and updated to create health information systems or sub-systems enabling continuous, incremental enhancement as national and local capacity increases. SAFE and Epi Info are both cost-free and have low system requirements--characteristics that render this framework and software beneficial for developing countries.

  20. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  1. Fruit and vegetable consumption and risk of bladder cancer: an updated meta-analysis of observational studies.

    PubMed

    Liu, Huan; Wang, Xing-Chun; Hu, Guang-Hui; Guo, Zhui-Feng; Lai, Peng; Xu, Liang; Huang, Tian-Bao; Xu, Yun-Fei

    2015-11-01

    This meta-analysis was conducted to assess the association between fruit and vegetable intake and bladder cancer risk. Eligible studies published up to August 2014 were retrieved both through a computer search of PubMed, Embase and the Cochrane library and through a manual review of references. The summary relative risks with 95% confidence intervals (CIs) for the highest versus the lowest intakes of fruits and vegetables were calculated with random-effects models. Heterogeneity and publication bias were also evaluated. Potential sources of heterogeneity were detected with metaregression. Subgroup analyses and sensitivity analyses were also performed. A total of 27 studies (12 cohort and 15 case-control studies) were included in this meta-analysis. The summary relative risks for the highest versus lowest were 0.84 (95% CI: 0.72-0.96) for vegetable intake and 0.81 (95% CI: 0.73-0.89) for fruit intake. The dose-response analysis showed that the risk of bladder cancer decreased by 8% (relative risk=0.92; 95% CI: 0.87-0.97) and 9% (relative risk=0.91; 95% CI: 0.83-0.99) for every 200 g/day increment in vegetable and fruit consumption, respectively. Sensitivity analysis confirmed the stability of the results. Our findings suggest that intake of vegetables and fruits may significantly reduce the risk of bladder cancer. Further well-designed prospective studies are warranted to confirm these findings.
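
    For readers unfamiliar with the pooling step, the sketch below shows a standard DerSimonian-Laird random-effects combination of log relative risks; the three study estimates are made up and do not reproduce this meta-analysis.

      import math

      def random_effects_pool(log_rr, var):
          # DerSimonian-Laird random-effects pooling of log relative risks.
          w = [1.0 / v for v in var]                        # inverse-variance (fixed-effect) weights
          fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
          q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)      # between-study variance
          w_star = [1.0 / (v + tau2) for v in var]
          pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
          se = math.sqrt(1.0 / sum(w_star))
          return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

      # Hypothetical study RRs of 0.80, 0.90 and 0.75 with their variances on the log scale.
      print(random_effects_pool([math.log(0.80), math.log(0.90), math.log(0.75)], [0.01, 0.02, 0.015]))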

  2. Description and verification of a U.S. Naval Research Lab's loosely coupled data assimilation system for the Navy's Earth System Model

    NASA Astrophysics Data System (ADS)

    Barton, N. P.; Metzger, E. J.; Smedstad, O. M.; Ruston, B. C.; Wallcraft, A. J.; Whitcomb, T.; Ridout, J. A.; Zamudio, L.; Posey, P.; Reynolds, C. A.; Richman, J. G.; Phelps, M.

    2017-12-01

    The Naval Research Laboratory is developing an Earth System Model (NESM) to provide global environmental information to meet Navy and Department of Defense (DoD) operations and planning needs from the upper atmosphere to under the sea. This system consists of global atmosphere, ocean, ice, wave, and land prediction models; the individual models are: atmosphere - NAVy Global Environmental Model (NAVGEM); ocean - HYbrid Coordinate Ocean Model (HYCOM); sea ice - Community Ice CodE (CICE); waves - WAVEWATCH III™; and land - NAVGEM Land Surface Model (LSM). Data assimilation is currently loosely coupled between the atmosphere component, using a 6-hour update cycle in the Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System - Accelerated Representer (NAVDAS-AR), and the ocean/ice components, using a 24-hour update cycle in the Navy Coupled Ocean Data Assimilation (NCODA) with 3 hours of incremental updating. This presentation will describe the US Navy's coupled forecast model and the loosely coupled data assimilation, and compare results against stand-alone atmosphere and ocean/ice models. In particular, we will focus on the unique aspects of this modeling system, which include an eddy-resolving ocean model and the challenges associated with different update windows and solvers for the data assimilation in the atmosphere and ocean. Results will focus on typical operational diagnostics for atmosphere, ocean, and ice analyses, including 500 hPa atmospheric height anomalies, low-level winds, temperature/salinity ocean depth profiles, ocean acoustical proxies, sea ice edge, and sea ice drift. Overall, the global coupled system is performing with comparable skill to the stand-alone systems.
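
    The "3 hours of incremental updating" follows the spirit of an incremental analysis update: rather than inserting the ocean/ice analysis increment all at once, it is spread as a constant forcing over part of the update cycle. A schematic with a toy scalar model and assumed step sizes is given below.

      def run_with_incremental_update(x0, increment, n_steps, n_incr_steps, model_step):
          # Spread an analysis increment evenly over the first n_incr_steps model steps.
          x = x0
          forcing = increment / n_incr_steps
          for k in range(n_steps):
              x = model_step(x)
              if k < n_incr_steps:
                  x = x + forcing      # IAU-style constant forcing during the update window
          return x

      # Toy 24-hour cycle with hourly steps; the increment is applied over the first 3 hours,
      # mimicking the 3-hour incremental update window of the ocean/ice analysis.
      print(run_with_incremental_update(x0=1.0, increment=0.3, n_steps=24,
                                        n_incr_steps=3, model_step=lambda x: 0.99 * x))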

  3. DOSE ASSESSMENT OF THE FINAL INVENTORIES IN CENTER SLIT TRENCHES ONE THROUGH FIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collard, L.; Hamm, L.; Smith, F.

    2011-05-02

    In response to a request from Solid Waste Management (SWM), this study evaluates the performance of waste disposed in Slit Trenches 1-5 by calculating exposure doses and concentrations. As of 8/19/2010, Slit Trenches 1-5 have been filled and are closed to future waste disposal in support of an ARRA-funded interim operational cover project. Slit Trenches 6 and 7 are currently in operation and are not addressed within this analysis. Their current inventory limits are based on the 2008 SA and are not being impacted by this study. This analysis considers the location and the timing of waste disposal in Slit Trenches 1-5 throughout their operational life. In addition, the following improvements to the modeling approach have been incorporated into this analysis: (1) Final waste inventories from WITS are used for the base case analysis where variance in the reported final disposal inventories is addressed through a sensitivity analysis; (2) Updated Kd values are used; (3) Area percentages of non-crushable containers are used in the analysis to determine expected infiltration flows for cases that consider collapse of these containers; (4) An updated representation of ETF carbon column vessels disposed in SLIT3-Unit F is used. Preliminary analyses indicated a problem meeting the groundwater beta-gamma dose limit because of high H-3 and I-129 release from the ETF vessels. The updated model uses results from a recent structural analysis of the ETF vessels indicating that water does not penetrate the vessels for about 130 years and that the vessels remain structurally intact throughout the 1130-year period of assessment; and (5) Operational covers are included with revised installation dates and sets of Slit Trenches that have a common cover. With the exception of the modeling enhancements noted above, the analysis follows the same methodology used in the 2008 PA (WSRC, 2008) and the 2008 SA (Collard and Hamm, 2008). Infiltration flows through the vadose zone are identical to the flows used in the 2008 PA, except for flows during the operational cover time period. The physical (i.e., non-geochemical) models of the vadose zone and aquifer are identical in most cases to the models used in the 2008 PA. However, the 2008 PA assumed a uniform distribution of waste within each Slit Trench (WITS Location) and assumed that the entire inventory of each trench was disposed of at the time the first Slit Trench was opened. The current analysis considers individual trench excavations (i.e., segments) and groups of segments (i.e., Inventory Groups also known as WITS Units) within Slit Trenches. Waste disposal is assumed to be spatially uniform in each Inventory Group and is distributed in time increments of six months or less between the time the Inventory Group was opened and closed.

  4. Temperature Dependent Modal Test/Analysis Correlation of X-34 Fastrac Composite Rocket Nozzle

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Brunty, Joseph A. (Technical Monitor)

    2001-01-01

    A unique high temperature modal test and model correlation/update program has been performed on the composite nozzle of the FASTRAC engine for the NASA X-34 Reusable Launch Vehicle. The program was required to provide an accurate high temperature model of the nozzle for incorporation into the engine system structural dynamics model for loads calculation; this model is significantly different from the ambient case due to the large decrease in composite stiffness properties due to heating. The high-temperature modal test was performed during a hot-fire test of the nozzle. Previously, a series of high fidelity modal tests and finite element model correlation of the nozzle in a free-free configuration had been performed. This model was then attached to a modal-test verified model of the engine hot-fire test stand and the ambient system mode shapes were identified. A reduced set of accelerometers was then attached to the nozzle, the engine fired full-duration, and the frequency peaks corresponding to the ambient nozzle modes individually isolated and tracked as they decreased during the test. To update the finite-element model of the nozzle to these frequency curves, the percentage differences of the anisotropic composite moduli due to temperature variation from ambient, which had been used in the initial modeling and which were obtained by small sample coupon testing, were multiplied by an iteratively determined constant factor. These new properties were used to create high-temperature nozzle models corresponding to 10 second engine operation increments and tied into the engine system model for loads determination.
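
    The update procedure amounts to a one-parameter model correction: the coupon-test percentage drops of the composite moduli are scaled by a single factor, and that factor is iterated until the model frequency matches the measured one. The sketch below uses invented numbers and a one-mode frequency model purely to show the iteration.

      def scaled_modulus(ambient_modulus, coupon_drop, factor):
          # Apply the coupon-test percentage modulus drop, scaled by a correction factor.
          return ambient_modulus * (1.0 - factor * coupon_drop)

      def find_factor(freq_model, f_measured, lo=0.0, hi=3.0, tol=1e-4):
          # Bisection on the factor; assumes frequency falls monotonically as stiffness drops.
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if freq_model(mid) > f_measured:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      # Toy single mode: frequency scales with sqrt(stiffness), stiffness with the modulus.
      ambient_E, coupon_drop, f_ambient = 60.0e9, 0.25, 180.0
      freq = lambda f: f_ambient * (scaled_modulus(ambient_E, coupon_drop, f) / ambient_E) ** 0.5
      print(find_factor(freq, f_measured=150.0))   # factor that brings the model to the hot-fire frequency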

  5. Aerosol Observability and Predictability: From Research to Operations for Chemical Weather Forecasting. Lagrangian Displacement Ensembles for Aerosol Data Assimilation

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo

    2010-01-01

    A challenge common to many constituent data assimilation applications is the fact that one observes a much smaller fraction of the phase space than one wishes to estimate. For example, remotely sensed estimates of the column average concentrations are available, while one is faced with the problem of estimating 3D concentrations for initializing a prognostic model. This problem is exacerbated in the case of aerosols because the observable Aerosol Optical Depth (AOD) is not only a column integrated quantity, but it also sums over a large number of species (dust, sea-salt, carbonaceous and sulfate aerosols). An aerosol transport model, when driven by high-resolution, state-of-the-art analysis of meteorological fields and realistic emissions, can produce skillful forecasts even when no aerosol data is assimilated. The main task of aerosol data assimilation is to address the bias arising from inaccurate emissions, and the Lagrangian misplacement of plumes induced by errors in the driving meteorological fields. As long as one decouples the meteorological and aerosol assimilation as we do here, the classic baroclinic growth of error is no longer the main order of business. We will describe an aerosol data assimilation scheme in which the analysis update step is conducted in observation space, using an adaptive maximum-likelihood scheme for estimating background errors in AOD space. This scheme includes explicit sequential bias estimation as in Dee and da Silva. Unlike existing aerosol data assimilation schemes, we do not obtain analysis increments of the 3D concentrations by scaling the background profiles. Instead we explore the Lagrangian characteristics of the problem for generating local displacement ensembles. These high-resolution, state-dependent ensembles are then used to parameterize the background errors and generate 3D aerosol increments. The algorithm is computationally affordable, running at a resolution of 1/4 degree, globally. We will present the results of assimilating AOD retrievals from MODIS (on both Aqua and TERRA satellites), with AERONET data used for validation. The impact on GEOS-5 aerosol forecasting will be fully documented.
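
    An observation-space analysis update with ensemble-derived increments can be sketched in a few lines: the AOD innovation is formed where the observation lives, and a 3D increment follows by regressing the state ensemble onto the simulated AOD. The ensemble, forward operator and error values below are toy assumptions; this is a generic ensemble/observation-space update, not the GEOS-5 scheme.

      import numpy as np

      def obs_space_update(X, aod_ens, aod_obs, obs_err_var):
          # X: ensemble of concentration profiles (n_members, n_levels);
          # aod_ens: simulated AOD for each member at the observation location.
          x_pert = X - X.mean(axis=0)
          aod_pert = aod_ens - aod_ens.mean()
          cov_x_aod = x_pert.T @ aod_pert / (len(aod_ens) - 1)   # state/AOD covariance
          var_aod = aod_pert @ aod_pert / (len(aod_ens) - 1)
          gain = cov_x_aod / (var_aod + obs_err_var)
          return gain * (aod_obs - aod_ens.mean())               # 3D increment from a column-integrated obs

      rng = np.random.default_rng(1)
      X = 1.0 + 0.1 * rng.normal(size=(20, 5))     # 20 members, 5 "layers" of concentration
      aod_ens = 0.05 * X.sum(axis=1)               # toy forward operator: scaled column sum
      print(obs_space_update(X, aod_ens, aod_obs=0.3, obs_err_var=0.01))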

  6. Cost Utility Analysis of the Cervical Artificial Disc vs Fusion for the Treatment of 2-Level Symptomatic Degenerative Disc Disease: 5-Year Follow-up.

    PubMed

    Ament, Jared D; Yang, Zhuo; Nunley, Pierce; Stone, Marcus B; Lee, Darrin; Kim, Kee D

    2016-07-01

    The cervical total disc replacement (cTDR) was developed to treat cervical degenerative disc disease while preserving motion. Cost-effectiveness of this intervention was established by looking at 2-year follow-up, and this update reevaluates our analysis over 5 years. Data were derived from a randomized trial of 330 patients. Data from the 12-Item Short Form Health Survey were transformed into utilities by using the SF-6D algorithm. Costs were calculated by extracting diagnosis-related group codes and then applying 2014 Medicare reimbursement rates. A Markov model evaluated quality-adjusted life years (QALYs) for both treatment groups. Univariate and multivariate sensitivity analyses were conducted to test the stability of the model. The model adopted both societal and health system perspectives and applied a 3% annual discount rate. The cTDR costs $1687 more than anterior cervical discectomy and fusion (ACDF) over 5 years. In contrast, cTDR had $34 377 less productivity loss compared with ACDF. There was a significant difference in the return-to-work rate (81.6% compared with 65.4% for cTDR and ACDF, respectively; P = .029). From a societal perspective, the incremental cost-effectiveness ratio (ICER) for cTDR was -$165 103 per QALY. From a health system perspective, the ICER for cTDR was $8518 per QALY. In the sensitivity analysis, the ICER for cTDR remained below the US willingness-to-pay threshold of $50 000 per QALY in all scenarios (-$225 816 per QALY to $22 071 per QALY). This study is the first to report the comparative cost-effectiveness of cTDR vs ACDF for 2-level degenerative disc disease at 5 years. The authors conclude that, because of the negative ICER, cTDR is the dominant modality. Abbreviations: ACDF, anterior cervical discectomy and fusion; AWP, average wholesale price; CE, cost-effectiveness; CEA, cost-effectiveness analysis; CPT, Current Procedural Terminology; cTDR, cervical total disc replacement; CUA, cost-utility analysis; DDD, degenerative disc disease; DRG, diagnosis-related group; FDA, US Food and Drug Administration; ICER, incremental cost-effectiveness ratio; IDE, Investigational Device Exemption; NDI, neck disability index; QALY, quality-adjusted life years; RCT, randomized controlled trial; RTW, return-to-work; SF-12, 12-Item Short Form Health Survey; VAS, visual analog scale; WTP, willingness-to-pay.

  7. Aerodynamic flight evaluation analysis and data base update

    NASA Technical Reports Server (NTRS)

    Boyle, W. W.; Miller, M. S.; Wilder, G. O.; Reheuser, R. D.; Sharp, R. S.; Bridges, G. I.

    1989-01-01

    Research was conducted to determine the feasibility of replacing the Solid Rocket Boosters on the existing Space Shuttle Launch Vehicle (SSLV) with Liquid Rocket Boosters (LRB). As a part of the LRB selection process, a series of wind tunnel tests were conducted along with aero studies to determine the effects of different LRB configurations on the SSLV. Final results were tabulated into increments and added to the existing SSLV data base. The research conducted in this study was taken from a series of wind tunnel tests conducted at Marshall's 14-inch Trisonic Wind Tunnel. The effects on the axial force (CAF), normal force (CNF), pitching moment (CMF), side force (CY), wing shear force (CSR), wing torque moment (CTR), and wing bending moment (CBR) coefficients were investigated for a number of candidate LRB configurations. The aero effects due to LRB protuberances, ET/LRB separation distance, and aft skirts were also gathered from the tests. Analysis was also conducted to investigate the base pressure and plume effects due to the new booster geometries. The test results found in Phases 1 and 2 of wind tunnel testing are discussed and compared. Preliminary LRB lateral/directional data results and trends are given. The protuberance and gap/skirt effects are discussed. The base pressure/plume effects study is discussed and results are given.

  8. 10 CFR 72.248 - Safety analysis report updating.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Safety analysis report updating. 72.248 Section 72.248 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF... Approval of Spent Fuel Storage Casks § 72.248 Safety analysis report updating. (a) Each certificate holder...

  9. Ezetimibe therapy: mechanism of action and clinical update

    PubMed Central

    Phan, Binh An P; Dayspring, Thomas D; Toth, Peter P

    2012-01-01

    The lowering of low-density lipoprotein cholesterol (LDL-C) is the primary target of therapy in the primary and secondary prevention of cardiovascular events. Although statin therapy is the mainstay for LDL-C lowering, a significant percentage of patients prescribed these agents either do not achieve targets with statin therapy alone or have partial or complete intolerance to them. For such patients, the use of adjuvant therapy capable of providing incremental LDL-C reduction is advised. One such agent is ezetimibe, a cholesterol absorption inhibitor that targets uptake at the jejunal enterocyte brush border. Its primary target of action is the cholesterol transport protein Niemann-Pick C1-Like 1. Ezetimibe is an effective LDL-C lowering agent and is safe and well tolerated. In response to significant controversy surrounding the use and therapeutic effectiveness of this drug, we provide an update on the biochemical mechanism of action for ezetimibe, its safety and efficacy, as well as the results of recent randomized studies that support its use in a variety of clinical scenarios. PMID:22910633

  10. Retroactive Operations: On "Increments" in Mandarin Chinese Conversations

    ERIC Educational Resources Information Center

    Lim, Ni Eng

    2014-01-01

    Conversation Analysis (CA) has established repair (Schegloff, Jefferson & Sacks 1977; Schegloff 1979; Kitzinger 2013) as a conversational mechanism for managing contingencies of talk-in-interaction. In this dissertation, I look at a particular sort of "repair" termed TCU-continuations (or otherwise known as increments in other…

  11. Aerodynamic Analyses and Database Development for Ares I Vehicle First Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Pei, Jing; Pinier, Jeremy T.; Holland, Scott D.; Covell, Peter F.; Klopfer, Goetz, H.

    2012-01-01

    This paper presents the aerodynamic analysis and database development for the first stage separation of the Ares I A106 Crew Launch Vehicle configuration. Separate databases were created for the first stage and upper stage. Each database consists of three components: isolated or free-stream coefficients, power-off proximity increments, and power-on proximity increments. The power-on database consists of three parts, all plumes firing at nominal conditions, the one booster deceleration motor out condition, and the one ullage settling motor out condition. The isolated and power-off incremental databases were developed using wind tunnel test data. The power-on proximity increments were developed using CFD solutions.
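
    With that three-component structure, a database lookup reduces to adding the proximity increments to the isolated coefficient; the build-up below uses invented values for one Mach/attitude/separation point.

      def total_coefficient(isolated, power_off_increment, power_on_increment):
          # Build-up of a separation aerodynamic coefficient from its database components.
          return isolated + power_off_increment + power_on_increment

      cn_isolated, d_cn_power_off, d_cn_power_on = 0.42, 0.05, -0.02          # illustrative only
      print(total_coefficient(cn_isolated, d_cn_power_off, d_cn_power_on))    # 0.45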

  12. 77 FR 24941 - Vantage Wind Energy LLC; Order Accepting Updated Market Power Analysis and Providing Direction on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    .... 1. In this order, the Commission accepts an updated market power analysis filed by Vantage Wind.... Background 3. On December 20, 2010, Vantage Wind filed an updated market power analysis in compliance with... power analysis filed by Puget Sound Energy, Inc. (Puget).\\4\\ \\3\\ See Vantage Wind Energy LLC, Docket No...

  13. The SIMS trial: adjustable anchored single-incision mini-slings versus standard tension-free midurethral slings in the surgical management of female stress urinary incontinence. A study protocol for a pragmatic, multicentre, non-inferiority randomised controlled trial.

    PubMed

    Abdel-Fattah, Mohamed; MacLennan, Graeme; Kilonzo, Mary; Assassa, R Phil; McCormick, Kirsty; Davidson, Tracey; McDonald, Alison; N'Dow, James; Wardle, Judith; Norrie, John

    2017-08-11

    Single-incision mini-slings (SIMS) represent the third generation of midurethral slings. They have been developed with the aim of offering a true ambulatory procedure for treatment of female stress urinary incontinence (SUI) with reduced morbidity and earlier recovery while maintaining similar efficacy to standard midurethral slings (SMUS). The aim of this study is to determine the clinical and cost-effectiveness of adjustable anchored SIMS compared with tension-free SMUS in the surgical management of female SUI, with 3-year follow-up. A pragmatic, multicentre, non-inferiority randomised controlled trial. The primary outcome measure is the patient-reported success rate measured by the Patient Global Impression of Improvement at 12 months. The primary economic outcome will be incremental cost per quality-adjusted life year gained at 12 months. The secondary outcomes measures include adverse events, objective success rates, impact on other lower urinary tract symptoms, health-related quality of life profile and sexual function, and reoperation rates for SUI. Secondary economic outcomes include National Health Service and patient primary and secondary care resource use and costs, incremental cost-effectiveness and incremental net benefit. The statistical analysis of the primary outcome will be by intention-to-treat and also a per-protocol analysis. Results will be displayed as estimates and 95% CIs. CIs around observed differences will then be compared with the prespecified non-inferiority margin. Secondary outcomes will be analysed similarly. The North of Scotland Research Ethics Committee has approved this study (13/NS/0143). The dissemination plans include HTA monograph, presentation at international scientific meetings and publications in high-impact, open-access journals. The results will be included in the updates of the National Institute for Health and Care Excellence and the European Association of Urology guidelines; these two specific guidelines directly influence practice in the UK and worldwide specialists, respectively. In addition, plain English-language summary of the main findings/results will be presented for relevant patient organisations. ISRCTN93264234. The SIMS study is currently recruiting in 20 UK research centres. The first patient was randomised on 4 February 2014, with follow-up to be completed at the end of February 2020. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Real-Time Detection of Dust Devils from Pressure Readings

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri

    2009-01-01

    A method for real-time detection of dust devils at a given location is based on identifying the abrupt, temporary decreases in atmospheric pressure that are characteristic of dust devils as they travel through that location. The method was conceived for use in a study of dust devils on the Martian surface, where bandwidth limitations encourage the transmission of only those blocks of data that are most likely to contain information about features of interest, such as dust devils. The method, which is a form of intelligent data compression, could readily be adapted to use for the same purpose in scientific investigation of dust devils on Earth. In this method, the readings of an atmospheric- pressure sensor are repeatedly digitized, recorded, and processed by an algorithm that looks for extreme deviations from a continually updated model of the current pressure environment. The question in formulating the algorithm is how to model current normal observations and what minimum magnitude deviation can be considered sufficiently anomalous as to indicate the presence of a dust devil. There is no single, simple answer to this question: any answer necessarily entails a compromise between false detections and misses. For the original Mars application, the answer was sought through analysis of sliding time windows of digitized pressure readings. Windows of 5-, 10-, and 15-minute durations were considered. The windows were advanced in increments of 30 seconds. Increments of other sizes can also be used, but computational cost increases as the increment decreases and analysis is performed more frequently. Pressure models were defined using a polynomial fit to the data within the windows. For example, the figure depicts pressure readings from a 10-minute window wherein the model was defined by a third-degree polynomial fit to the readings and dust devils were identified as negative deviations larger than both 3 standard deviations (from the mean) and 0.05 mbar in magnitude. An algorithm embodying the detection scheme of this example was found to yield a miss rate of just 8 percent and a false-detection rate of 57 percent when evaluated on historical pressure-sensor data collected by the Mars Pathfinder lander. Since dust devils occur infrequently over the course of a mission, prioritizing observations that contain successful detections could greatly conserve bandwidth allocated to a given mission. This technique can be used on future Mars landers and rovers, such as Mars Phoenix and the Mars Science Laboratory.
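
    The rule described above translates almost directly into code: fit a low-order polynomial to each sliding window of pressure readings and flag samples whose negative residual exceeds both 3 standard deviations and 0.05 mbar. The implementation and synthetic data below are illustrative, not the flight software.

      import numpy as np

      def detect_dust_devils(t, p, window=600.0, step=30.0, deg=3, nsig=3.0, min_drop=0.05):
          # Return times (s) flagged as dust-devil candidates from a pressure series (mbar).
          flagged = []
          t0 = t[0]
          while t0 + window <= t[-1]:
              sel = (t >= t0) & (t < t0 + window)
              coeffs = np.polyfit(t[sel], p[sel], deg)           # model of the "normal" pressure
              resid = p[sel] - np.polyval(coeffs, t[sel])
              thresh = max(nsig * resid.std(), min_drop)         # both criteria must be exceeded
              flagged.extend(t[sel][resid < -thresh])
              t0 += step                                         # slide the window by 30 s
          return sorted(set(flagged))

      # Synthetic hour of data: slow trend, small noise, and one sharp 0.2 mbar drop near t = 1800 s.
      t = np.arange(0.0, 3600.0, 1.0)
      p = 6.8 + 1e-5 * t + np.random.default_rng(2).normal(0.0, 0.005, t.size)
      p[1795:1805] -= 0.2
      print(len(detect_dust_devils(t, p)) > 0)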

  15. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
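
    As a concrete instance of the kind of formula surveyed there, the classic numerically stable pairwise combination of (count, mean, second central moment) is shown below; the weighted and higher-order variants in the paper generalize this pattern.

      def combine(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
          # Pairwise/incremental combination of (count, mean, sum of squared deviations).
          n = n_a + n_b
          delta = mean_b - mean_a
          mean = mean_a + delta * n_b / n
          m2 = m2_a + m2_b + delta * delta * n_a * n_b / n   # stable 2nd-central-moment update
          return n, mean, m2

      def online_mean_variance(xs):
          # One-pass accumulation by repeatedly combining with a single new observation.
          n, mean, m2 = 0, 0.0, 0.0
          for x in xs:
              n, mean, m2 = combine(n, mean, m2, 1, x, 0.0)
          return mean, m2 / (n - 1)

      print(online_mean_variance([1.0, 2.0, 3.0, 4.0]))   # (2.5, 1.666...)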

  16. Energy Metabolism Impairment in Migraine.

    PubMed

    Cevoli, Sabina; Favoni, Valentina; Cortelli, Pietro

    2018-06-22

    Migraine is a common disabling neurological disorder which is characterised by recurring headache associated with a variety of sensory and autonomic symptoms. The pathophysiology of migraine remains not entirely understood, although many mechanisms involving the central and peripheral nervous system are now becoming clear. In particular, it is widely accepted that migraine is associated with energy metabolic impairment of the brain. The purpose of this review is to present an updated overview of the involvement of energy metabolism in migraine pathophysiology. Several biochemical, morphological and magnetic resonance spectroscopy studies have confirmed the presence of an energy production deficiency together with an increment of energy consumption in migraine patients. An increment of energy demand over a certain threshold creates the metabolic and biochemical preconditions for the onset of the migraine attack. The defect of oxidative energy metabolism in migraine is generalized. It remains to be determined if the mitochondrial deficit in migraine is primary or secondary. Riboflavin and Co-Enzyme Q10, both physiologically implicated in mitochondrial respiratory chain functioning, are effective in migraine prophylaxis, supporting the hypothesis that improving brain energy metabolism may reduce the susceptibility to migraine. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  17. Cost-effectiveness of renal denervation therapy for the treatment of resistant hypertension in The Netherlands.

    PubMed

    Henry, Thea L; De Brouwer, Bonnie F E; Van Keep, Marjolijn M L; Blankestijn, Peter J; Bots, Michiel L; Koffijberg, Hendrik

    2015-01-01

    Safety and efficacy data for catheter-based renal denervation (RDN) in the treatment of resistant hypertension have been used to estimate the cost-effectiveness of this approach. However, there are no Dutch-specific analyses. This study examined the cost-effectiveness of RDN from the perspective of the healthcare payer in The Netherlands. A previously constructed Markov state-transition model was adapted and updated with costs and utilities relevant to the Dutch setting. The cost-effectiveness of RDN was compared with standard of care (SoC) for patients with resistant hypertension. The efficacy of RDN treatment was modeled as a reduction in the risk of cardiovascular events associated with a lower systolic blood pressure (SBP). Treatment with RDN compared to SoC gave an incremental quality-adjusted life year (QALY) gain of 0.89 at an additional cost of €1315 over a patient's lifetime, resulting in a base case incremental cost-effectiveness ratio (ICER) of €1474. Deterministic and probabilistic sensitivity analyses (PSA) showed that treatment with RDN therapy was cost-effective at conventional willingness-to-pay thresholds (€10,000-80,000/QALY). RDN is a cost-effective intervention for patients with resistant hypertension in The Netherlands.

  18. Polymerization shrinkage stresses in different restorative techniques for non-carious cervical lesions.

    PubMed

    de Oliveira Correia, Ayla Macyelle; Tribst, João Paulo Mendes; de Souza Matos, Felipe; Platt, Jeffrey A; Caneppele, Taciana Marco Ferraz; Borges, Alexandre Luiz Souto

    2018-06-20

    This study evaluated the effect of different restorative techniques for non-carious cervical lesions (NCCL) on polymerization shrinkage stress of resins using three-dimensional (3D) finite element analysis (FEA). 3D-models of a maxillary premolar with a NCCL restored with different filling techniques (bulk filling and incremental) were generated to be compared by nonlinear FEA. The bulk filling technique was used for groups B (NCCL restored with Filtek™ Bulk Fill) and C (Filtek™ Z350 XT). The incremental technique was subdivided according to mode of application: P (2 parallel increments of the Filtek™ Z350 XT), OI (2 oblique increments of the Filtek™ Z350 XT, with incisal first), OIV (2 oblique increments of the Filtek™ Z350 XT, with incisal first and increments with the same volume), OG (2 oblique increments of the Filtek™ Z350 XT, with gingival first) and OGV (2 oblique increments of the Filtek™ Z350 XT, with gingival first and increments with the same volume), resulting in 7 models. All materials were considered isotropic, elastic and linear. The results were expressed in maximum principal stress (MPS). The tension stress distribution was influenced by the restorative technique. The lowest stress concentration occurred in group B followed by OG, OGV, OI, OIV, P and C; the incisal interface was more affected than the gingival. The restoration of NCCLs with bulk fill composite resulted in lower shrinkage stress in the gingival and incisal areas, followed by incremental techniques with the initial increment placed on the gingival wall. The non-carious cervical lesions (NCCLs) restored with bulk fill composite have a more favorable biomechanical behavior. Copyright © 2018. Published by Elsevier Ltd.

  19. The Propulsive Small Expendable Deployer System (ProSEDS)

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.; Cosmo, Mario L.; Estes, Robert D.; Sanmartin, Juan; Pelaez, Jesus; Ruiz, Manuel

    2003-01-01

    This Final Report covers the following main topics: 1) Brief Description of ProSEDS; 2) Mission Analysis; 3) Dynamics Reference Mission; 4) Dynamics Stability; 5) Deployment Control; 6) Updated System Performance; 7) Updated Mission Analysis; 8) Updated Dynamics Reference Mission; 9) Updated Deployment Control Profiles and Simulations; 10) Updated Reference Mission; 11) Evaluation of Power Delivered by the Tether; 12) Deployment Control Profile Ref. #78 and Simulations; 13) Kalman Filters for Mission Estimation; 14) Analysis/Estimation of Deployment Flight Data; 15) Comparison of ED Tethers and Electrical Thrusters; 16) Dynamics Analysis for Mission Starting at a Lower Altitude; 17) Deployment Performance at a Lower Altitude; 18) Satellite Orbit after a Tether Cut; 19) Deployment with Shorter Dyneema Tether Length; 20) Interactive Software for ED Tethers.

  20. MBGD update 2015: microbial genome database for flexible ortholog analysis utilizing a diverse set of genomic data.

    PubMed

    Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu

    2015-01-01

    The microbial genome database for comparative analysis (MBGD) (available at http://mbgd.genome.ad.jp/) is a comprehensive ortholog database for flexible comparative analysis of microbial genomes, where the users are allowed to create an ortholog table among any specified set of organisms. Because of the rapid increase in microbial genome data owing to the next-generation sequencing technology, it becomes increasingly challenging to maintain high-quality orthology relationships while allowing the users to incorporate the latest genomic data available into an analysis. Because many of the recently accumulating genomic data are draft genome sequences for which some complete genome sequences of the same or closely related species are available, MBGD now stores draft genome data and allows the users to incorporate them into a user-specific ortholog database using the MyMBGD functionality. In this function, draft genome data are incorporated into an existing ortholog table created only from the complete genome data in an incremental manner to prevent low-quality draft data from affecting clustering results. In addition, to provide high-quality orthology relationships, the standard ortholog table containing all the representative genomes, which is first created by the rapid classification program DomClust, is now refined using DomRefine, a recently developed program for improving domain-level clustering using multiple sequence alignment information. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers.

    PubMed

    Tuffaha, Haitham W; Mitchell, Andrew; Ward, Robyn L; Connelly, Luke; Butler, James R G; Norris, Sarah; Scuffham, Paul A

    2018-01-04

    Purpose: To evaluate the cost-effectiveness of BRCA testing in women with breast cancer, and cascade testing in family members of BRCA mutation carriers. Methods: A cost-effectiveness analysis was conducted using a cohort Markov model from a health-payer perspective. The model estimated the long-term benefits and costs of testing women with breast cancer who had at least a 10% pretest BRCA mutation probability, and the cascade testing of first- and second-degree relatives of women who test positive. Results: Compared with no testing, BRCA testing of affected women resulted in an incremental cost per quality-adjusted life-year (QALY) gained of AU$18,900 (incremental cost AU$1,880; incremental QALY gain 0.10) with reductions of 0.04 breast and 0.01 ovarian cancer events. Testing affected women and cascade testing of family members resulted in an incremental cost per QALY gained of AU$9,500 compared with testing affected women only (incremental cost AU$665; incremental QALY gain 0.07) with additional reductions of 0.06 breast and 0.01 ovarian cancer events. Conclusion: BRCA testing in women with breast cancer is cost-effective and is associated with reduced risk of cancer and improved survival. Extending testing to cover family members of affected women who test positive improves cost-effectiveness beyond restricting testing to affected women only. Genetics in Medicine advance online publication, 4 January 2018; doi:10.1038/gim.2017.231.

  2. Analysis of Trajectory Parameters for Probe and Round-Trip Missions to Venus

    NASA Technical Reports Server (NTRS)

    Dugan, James F., Jr.; Simsic, Carl R.

    1960-01-01

    For one-way transfers between Earth and Venus, charts are obtained that show velocity, time, and angle parameters as functions of the eccentricity and semilatus rectum of the Sun-focused vehicle conic. From these curves, others are obtained that are useful in planning one-way and round-trip missions to Venus. The analysis is characterized by circular coplanar planetary orbits, successive two-body approximations, impulsive velocity changes, and circular parking orbits at 1.1 planet radii. For round trips the mission time considered ranges from 65 to 788 days, while wait time spent in the parking orbit at Venus ranges from 0 to 467 days. Individual velocity increments, one-way travel times, and departure dates are presented for round trips requiring the minimum total velocity increment. For both single-pass and orbiting Venusian probes, the time span available for launch becomes appreciable with only a small increase in velocity-increment capability above the minimum requirement. Velocity-increment increases are much more effective in reducing travel time for single-pass probes than they are for orbiting probes. Round trips composed of a direct route along an ellipse tangent to Earth's orbit and an aphelion route result in the minimum total velocity increment for wait times less than 100 days and mission times ranging from 145 to 612 days. Minimum-total-velocity-increment trips may be taken along perihelion-perihelion routes for wait times ranging from 300 to 467 days. These wait times occur during missions lasting from 640 to 759 days.

  3. Neural Basis of Strategic Decision Making.

    PubMed

    Lee, Daeyeol; Seo, Hyojung

    2016-01-01

    Human choice behaviors during social interactions often deviate from the predictions of game theory. This might arise partly from the limitations in the cognitive abilities necessary for recursive reasoning about the behaviors of others. In addition, during iterative social interactions, choices might change dynamically as knowledge about the intentions of others and estimates for choice outcomes are incrementally updated via reinforcement learning. Some of the brain circuits utilized during social decision making might be general-purpose and contribute to isomorphic individual and social decision making. By contrast, regions in the medial prefrontal cortex (mPFC) and temporal parietal junction (TPJ) might be recruited for cognitive processes unique to social decision making. Copyright © 2015 Elsevier Ltd. All rights reserved.
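
    The incremental updating of outcome estimates mentioned above can be illustrated with a standard reinforcement-learning delta rule. This is a generic sketch, not the specific model used in the reviewed studies; the learning rate, payoffs, and noise level are hypothetical.

```python
import random

def simulate_iterated_game(trials=200, alpha=0.1, epsilon=0.1):
    """Incrementally update value estimates for two actions from trial-by-trial
    payoffs, using a simple delta (Rescorla-Wagner / Q-learning style) rule."""
    values = {"cooperate": 0.0, "defect": 0.0}   # current outcome estimates
    payoff = {"cooperate": 3.0, "defect": 1.0}   # hypothetical average payoffs
    for _ in range(trials):
        # epsilon-greedy choice: mostly exploit the currently higher-valued action
        if random.random() < epsilon:
            action = random.choice(list(values))
        else:
            action = max(values, key=values.get)
        reward = payoff[action] + random.gauss(0.0, 0.5)   # noisy outcome
        # incremental update: move the estimate a fraction alpha toward the outcome
        values[action] += alpha * (reward - values[action])
    return values

print(simulate_iterated_game())
```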

  4. The telerobot testbed: An architecture for remote servicing

    NASA Technical Reports Server (NTRS)

    Matijevic, J. R.

    1990-01-01

    The NASA/OAST Telerobot Testbed will reach its next increment in development by the end of FY-89. The testbed will have the capability for: force reflection in teleoperation, shared control, traded control, operator designate and relative update. These five capabilities will be shown in a module release and exchange operation using mockups of Orbital Replacement Units (ORU). This development of the testbed shows examples of the technologies needed for remote servicing, particularly under conditions of delay in transmissions to the servicing site. Here, the following topics are presented: the system architecture of the testbed which incorporates these telerobotic technologies for servicing, the implementation of the five capabilities and the operation of the ORU mockups.

  5. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
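
    A minimal sketch of the incremental centroid-update step described above, using the standard fuzzy C-means membership and centroid equations. It is not the full AFLC architecture: the competitive stage and the distance-ratio vigilance test are omitted, and the data are synthetic.

```python
import numpy as np

def fuzzy_memberships(x, centroids, m=2.0):
    """Fuzzy C-means membership of sample x in each cluster (fuzzifier m)."""
    d = np.linalg.norm(centroids - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def incremental_update(x, centroids, counts, m=2.0):
    """Relocate centroids toward a new sample x, weighted by its fuzzy memberships."""
    u = fuzzy_memberships(x, centroids, m) ** m
    counts += u
    centroids += (u / counts)[:, None] * (x - centroids)  # running weighted mean
    return centroids, counts

# usage: two clusters in 2-D, prototypes updated one sample at a time
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
counts = np.ones(len(centroids))
for x in np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0]):
    centroids, counts = incremental_update(x, centroids, counts)
print(centroids)
```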

  6. Toward disentangling the effect of hydrologic and nitrogen source changes from 1992 to 2001 on incremental nitrogen yield in the contiguous United States

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Goodall, Jonathan L.

    2012-04-01

    The goal of this research was to quantify the relative impact of hydrologic and nitrogen source changes on incremental nitrogen yield in the contiguous United States. Using nitrogen source estimates from various federal data bases, remotely sensed land use data from the National Land Cover Data program, and observed instream loadings from the United States Geological Survey National Stream Quality Accounting Network program, we calibrated and applied the spatially referenced regression model SPARROW to estimate incremental nitrogen yield for the contiguous United States. We ran different model scenarios to separate the effects of changes in source contributions from hydrologic changes for the years 1992 and 2001, assuming that only state conditions changed and that model coefficients describing the stream water-quality response to changes in state conditions remained constant between 1992 and 2001. Model results show a decrease of 8.2% in the median incremental nitrogen yield over the period of analysis with the vast majority of this decrease due to changes in hydrologic conditions rather than decreases in nitrogen sources. For example, when we changed the 1992 version of the model to have nitrogen source data from 2001, the model results showed only a small increase in median incremental nitrogen yield (0.12%). However, when we changed the 1992 version of the model to have hydrologic conditions from 2001, model results showed a decrease of approximately 8.7% in median incremental nitrogen yield. We did, however, find notable differences in incremental yield estimates for different sources of nitrogen after controlling for hydrologic changes, particularly for population related sources. For example, the median incremental yield for population related sources increased by 8.4% after controlling for hydrologic changes. This is in contrast to a 2.8% decrease in population related sources when hydrologic changes are included in the analysis. Likewise we found that median incremental yield from urban watersheds increased by 6.8% after controlling for hydrologic changes—in contrast to the median incremental nitrogen yield from cropland watersheds, which decreased by 2.1% over the same time period. These results suggest that, after accounting for hydrologic changes, population related sources became a more significant contributor of nitrogen yield to streams in the contiguous United States over the period of analysis. However, this study was not able to account for the influence of human management practices such as improvements in wastewater treatment plants or Best Management Practices that likely improved water quality, due to a lack of data for quantifying the impact of these practices for the study area.

  7. International Space Station Increment-4/5 Microgravity Environment Summary Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy

    2003-01-01

    This summary report presents the results of some of the processed acceleration data measured aboard the International Space Station during the period of December 2001 to December 2002. Unlike the past two ISS Increment reports, which were increment specific, this summary report covers two increments: Increments 4 and 5, hereafter referred to as Increment-4/5. Two accelerometer systems were used to measure the acceleration levels for the activities that took place during Increment-4/5. Due to time constraints and a lack of precise timeline information regarding some payload operations and station activities, not all of the activities were analyzed for this report. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System supports science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit supports experiments requiring vibratory acceleration measurement. The International Space Station Increment-4/5 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems: The Microgravity Acceleration Measurement System, which consists of two sensors: the low-frequency Orbital Acceleration Research Experiment Sensor Subsystem and the higher frequency High Resolution Accelerometer Package. The low frequency sensor measures up to 1 Hz, but is routinely trimmean filtered to yield much lower frequency acceleration data up to 0.01 Hz. This filtered data can be mapped to arbitrary locations for characterizing the quasi-steady environment for payloads and the vehicle. The high frequency sensor is used to characterize the vibratory environment up to 100 Hz at a single measurement location. The Space Acceleration Measurement System, which deploys high frequency sensors, measures vibratory acceleration data in the range of 0.01 to 400 Hz at multiple measurement locations. This summary report presents analysis of some selected quasi-steady and vibratory activities measured by these accelerometers during Increment-4/5 from December 2001 to December 2002.

  8. Equity and Entrepreneurialism: The Impact of Tax Increment Financing on School Finance.

    ERIC Educational Resources Information Center

    Weber, Rachel

    2003-01-01

    Describes tax increment financing (TIF), an entrepreneurial strategy with significant fiscal implications for overlapping taxing jurisdictions that provide these functions. Statistical analysis of TIF's impact on the finances of one Illinois county's school districts indicates that municipal use of TIF depletes the property tax revenues of schools…

  9. Cost-effectiveness analysis of once-yearly injection of zoledronic acid for the treatment of osteoporosis in Japan.

    PubMed

    Moriwaki, K; Mouri, M; Hagino, H

    2017-06-01

    Model-based economic evaluation was performed to assess the cost-effectiveness of zoledronic acid. Although zoledronic acid was dominated by alendronate, the incremental quality-adjusted life year (QALY) difference was quite small. Considering the advantage of once-yearly injection of zoledronic acid in persistence, zoledronic acid might be a cost-effective treatment option compared to once-weekly oral alendronate. The purpose of this study was to estimate the cost-effectiveness of once-yearly injection of zoledronic acid for the treatment of osteoporosis in Japan. A patient-level state-transition model was developed to predict the outcome of patients with osteoporosis who have experienced a previous vertebral fracture. The efficacy of zoledronic acid was derived from a published network meta-analysis. Lifetime cost and QALYs were estimated for patients who had received zoledronic acid, alendronate, or basic treatment alone. The incremental cost-effectiveness ratio (ICER) of zoledronic acid was estimated. For patients 70 years of age, zoledronic acid was dominated by alendronate with incremental QALY of -0.004 to -0.000 and incremental cost of 430 USD to 493 USD. Deterministic sensitivity analysis indicated that the relative risk of hip fracture and drug cost strongly affected the cost-effectiveness of zoledronic acid compared to alendronate. Scenario analysis considering treatment persistence showed that the ICER of zoledronic acid compared to alendronate was estimated to be 47,435 USD, 27,018 USD, and 10,749 USD per QALY gained for patients with a T-score of -2.0, -2.5, or -3.0, respectively. Although zoledronic acid is dominated by alendronate, the incremental QALY difference is quite small. Considering the advantage of annual zoledronic acid treatment in compliance and persistence, zoledronic acid may be a cost-effective treatment option compared to alendronate.

  10. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  11. The balanced scorecard: an incremental approach model to health care management.

    PubMed

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  12. Complexity of the heart rhythm after heart transplantation by entropy of transition network for RR-increments of RR time intervals between heartbeats.

    PubMed

    Makowiec, Danuta; Struzik, Zbigniew; Graff, Beata; Wdowczyk-Szulc, Joanna; Zarczynska-Buchnowiecka, Marta; Gruchala, Marcin; Rynkiewicz, Andrzej

    2013-01-01

    Network models have been used to capture, represent and analyse characteristics of living organisms and general properties of complex systems. The use of network representations in the characterization of time series complexity is a relatively new but quickly developing branch of time series analysis. In particular, beat-to-beat heart rate variability can be mapped out in a network of RR-increments, which is a directed and weighted graph with vertices representing RR-increments and the edges of which correspond to subsequent increments. We evaluate entropy measures selected from these network representations in records of healthy subjects and heart transplant patients, and provide an interpretation of the results.
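
    A minimal sketch of the construction described above: RR-increments are binned, consecutive increments define directed edges, and a Shannon entropy is computed over the edge weights. The bin width, the entropy definition, and the synthetic RR series are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from collections import Counter

def transition_network_entropy(rr_ms, bin_width=8.0):
    """Build a directed, weighted network of RR-increments (binned to bin_width ms)
    and return the Shannon entropy of the transition (edge-weight) distribution."""
    increments = np.diff(np.asarray(rr_ms, dtype=float))      # RR-increments
    vertices = np.round(increments / bin_width).astype(int)   # binned vertex labels
    edges = Counter(zip(vertices[:-1], vertices[1:]))         # subsequent increments
    weights = np.array(list(edges.values()), dtype=float)
    p = weights / weights.sum()
    return -np.sum(p * np.log2(p))

# usage with a synthetic RR-interval series (milliseconds)
rr = 800 + np.cumsum(np.random.randn(1000) * 10)
print(transition_network_entropy(rr))
```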

  13. The effect of narrow-band noise maskers on increment detection

    PubMed Central

    Messersmith, Jessica J.; Patra, Harisadhan; Jesteadt, Walt

    2010-01-01

    It is often assumed that listeners detect an increment in the intensity of a pure tone by detecting an increase in the energy falling within the critical band centered on the signal frequency. A noise masker can be used to limit the use of signal energy falling outside of the critical band, but facets of the noise may impact increment detection beyond this intended purpose. The current study evaluated the impact of envelope fluctuation in a noise masker on thresholds for detection of an increment. Thresholds were obtained for detection of an increment in the intensity of a 0.25- or 4-kHz pedestal in quiet and in the presence of noise of varying bandwidth. Results indicate that thresholds for detection of an increment in the intensity of a pure tone increase with increasing bandwidth for an on-frequency noise masker, but are unchanged by an off-frequency noise masker. Neither a model that includes a modulation-filter-bank analysis of envelope modulation nor a model based on discrimination of spectral patterns can account for all aspects of the observed data. PMID:21110593

  14. Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method

    PubMed Central

    Zhang, Haoshi; Zhu, Mingxing; Zheng, Yue; Li, Guanglin

    2015-01-01

    The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary changes of HRV. In this study, we present a new method to analyze momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapping HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Because a very short increment, such as 10 s, would produce indented time courses of the four measures, a 1-min time increment (4-min overlap) was suggested for the analysis of mHRV in this study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide a more accurate assessment of the dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means of delineating the dynamics of momentary HRV, and further investigation is warranted. PMID:26172953
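
    A minimal sketch of the overlapping-window segmentation described above (5-min window advanced in 1-min increments), computing two common time-domain HRV measures per window. The choice of measures and the synthetic data are illustrative assumptions.

```python
import numpy as np

def momentary_hrv(rr_ms, window_s=300, step_s=60):
    """Segment an RR-interval series (ms) into overlapping windows of window_s
    seconds, advanced by step_s seconds, and compute SDNN and RMSSD per window."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                      # beat times in seconds
    results = []
    start = 0.0
    while start + window_s <= t[-1]:
        sel = rr[(t >= start) & (t < start + window_s)]
        if len(sel) > 1:
            sdnn = np.std(sel, ddof=1)              # overall variability in the window
            rmssd = np.sqrt(np.mean(np.diff(sel) ** 2))  # beat-to-beat variability
            results.append((start, sdnn, rmssd))
        start += step_s
    return results

# usage with a synthetic ~30-minute recording
rr = 800 + 50 * np.random.randn(2200)
print(momentary_hrv(rr)[:3])
```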

  15. Incremental development and prototyping in current laboratory software development projects: Preliminary analysis

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann

    1988-01-01

    Several Laboratory software development projects that followed nonstandard development processes, which were hybrids of incremental development and prototyping, are being studied. Factors in the project environment leading to the decision to use a nonstandard development process and affecting its success are analyzed. A simple characterization of project environment based on this analysis is proposed, together with software development approaches which have been found effective for each category. These approaches include both documentation and review requirements.

  16. A new technique for the characterization of chaff elements

    NASA Astrophysics Data System (ADS)

    Scholfield, David; Myat, Maung; Dauby, Jason; Fesler, Jonathon; Bright, Jonathan

    2011-07-01

    A new technique for the experimental characterization of electromagnetic chaff based on Inverse Synthetic Aperture Radar is presented. This technique allows for the characterization of as few as one filament of chaff in a controlled anechoic environment, allowing for stability and repeatability of experimental results. This approach allows for a deeper understanding of the fundamental phenomena of electromagnetic scattering from chaff through an incremental analysis approach. Chaff analysis can now begin with a single element and progress through the build-up of particles into pseudo-cloud structures. This controlled incremental approach is supported by an identical incremental modeling and validation process. Additionally, this technique has the potential to produce considerable savings in financial and schedule costs and provides a stable and repeatable experiment to aid model validation.

  17. The Decremental Protocol as an Alternative Protocol to Measure Maximal Oxygen Consumption in Athletes.

    PubMed

    Taylor, Katrina; Seegmiller, Jeffrey; Vella, Chantal A

    2016-11-01

    To determine whether a decremental protocol could elicit a higher maximal oxygen consumption (VO2max) than an incremental protocol in trained participants. A secondary aim was to examine whether cardiac-output (Q) and stroke-volume (SV) responses differed between decremental and incremental protocols in this sample. Nineteen runners/triathletes were randomized to either the decremental or incremental group. All participants completed an initial incremental VO2max test on a treadmill, followed by a verification phase. The incremental group completed 2 further incremental tests. The decremental group completed a second VO2max test using the decremental protocol, based on their verification phase. The decremental group then completed a final incremental test. During each test, VO2, ventilation, and heart rate were measured, and cardiac variables were estimated with thoracic bioimpedance. Repeated-measures analysis of variance was conducted with an alpha level set at .05. There were no significant main effects for group (P = .37) or interaction (P = .10) over time (P = .45). VO2max was similar between the incremental (57.29 ± 8.94 mL·kg⁻¹·min⁻¹) and decremental (60.82 ± 8.49 mL·kg⁻¹·min⁻¹) groups over time. Furthermore, Q and SV were similar between the incremental (Q 22.72 ± 5.85 L/min, SV 119.64 ± 33.02 mL/beat) and decremental groups (Q 20.36 ± 4.59 L/min, SV 109.03 ± 24.27 mL/beat) across all 3 trials. The findings suggest that the decremental protocol does not elicit higher VO2max than an incremental protocol but may be used as an alternative protocol to measure VO2max in runners and triathletes.

  18. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    PubMed

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  19. Route profile analysis system and method

    DOEpatents

    Mullenhoff, Donald J.; Wilson, Stephen W.

    1986-01-01

    A system for recording terrain profile information is disclosed. The system accurately senses incremental distances traveled by a vehicle along with vehicle inclination, recording both with elapsed time. The incremental distances can subsequently be differentiated with respect to time to obtain acceleration. The acceleration can then be used by the computer to correct the sensed inclination.
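
    The differentiation step described in this and the following record can be sketched as follows. The sampling data and the small-angle correction of the sensed inclination (a pendulum-type inclinometer confounds grade with along-track acceleration) are illustrative assumptions, not the patented method.

```python
import numpy as np

G = 9.81  # m/s^2

def corrected_grade(incremental_dist_m, elapsed_t_s, sensed_incl_rad):
    """Differentiate incremental distances twice with respect to time to obtain
    acceleration, then subtract the acceleration-induced tilt (approx. a/g)
    from the sensed inclination (small-angle approximation)."""
    t = np.asarray(elapsed_t_s, dtype=float)
    s = np.cumsum(incremental_dist_m)              # cumulative distance
    v = np.gradient(s, t)                          # speed
    a = np.gradient(v, t)                          # along-track acceleration
    return np.asarray(sensed_incl_rad) - a / G

# usage with synthetic samples: 1 s spacing, ~2% grade, mild acceleration
t = np.arange(0, 60.0, 1.0)
d_inc = np.full_like(t, 10.0) + 0.05 * t           # metres travelled per sample
sensed = 0.02 + (0.05 / G)                         # grade plus acceleration-induced tilt
print(corrected_grade(d_inc, t, np.full_like(t, sensed))[:5])
```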

  20. Route profile analysis system and method

    DOEpatents

    Mullenhoff, D.J.; Wilson, S.W.

    1982-07-29

    A system for recording terrain profile information is disclosed. The system accurately senses incremental distances traveled by a vehicle along with vehicle inclination, recording both with elapsed time. The incremental distances can subsequently be differentiated with respect to time to obtain acceleration. The acceleration can then be used by the computer to correct the sensed inclination.

  1. The Trait Emotional Intelligence Questionnaire: Internal Structure, Convergent, Criterion, and Incremental Validity in an Italian Sample

    ERIC Educational Resources Information Center

    Andrei, Federica; Smith, Martin M.; Surcinelli, Paola; Baldaro, Bruno; Saklofske, Donald H.

    2016-01-01

    This study investigated the structure and validity of the Italian translation of the Trait Emotional Intelligence Questionnaire. Data were self-reported from 227 participants. Confirmatory factor analysis supported the four-factor structure of the scale. Hierarchical regressions also demonstrated its incremental validity beyond demographics, the…

  2. 76 FR 63623 - Board of Scientific Counselors, National Center for Environmental Health/Agency for Toxic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... Nutritional Biomarker Report: transfat analysis; update on ATSDR Science Symposium; update on Camp LeJeune; update on Environmental Health Tracking; presentation on hydraulic fracking; global health updates...

  3. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty across multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from the GCM simulations were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes equal to their standard deviations, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for future-climate regional climate model (RCM) simulations by adding them to an objective analysis data set. This data handling can be regarded as an extension of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of the GCM simulations yields approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by the RCMs for each mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, the climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although nonlinearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, for which the number of RCM simulations for the future climate is five. Local-scale rainfall, on the other hand, required four-mode simulations, for which the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
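
    A minimal sketch of the statistical step described above: the ensemble of GCM climatological increments is decomposed with a singular value decomposition, and perturbed increments (ensemble mean plus or minus a one-standard-deviation excursion along each leading mode) are formed to drive PGW-type RCM boundary conditions. The ensemble size, grid size, and scaling are illustrative assumptions.

```python
import numpy as np

def modal_increments(increments, n_modes=2):
    """increments: array (n_models, n_gridpoints) of GCM climatological increments
    (future minus present).  Returns the ensemble mean and, for each leading mode,
    mean +/- a one-standard-deviation perturbation along that mode."""
    mean = increments.mean(axis=0)
    anomalies = increments - mean
    # singular vector decomposition of the inter-model anomalies
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    n_models = increments.shape[0]
    perturbed = []
    for k in range(n_modes):
        std_k = s[k] / np.sqrt(n_models - 1)        # std dev of the mode-k amplitude
        perturbed.append(mean + std_k * vt[k])      # positive perturbation
        perturbed.append(mean - std_k * vt[k])      # negative perturbation
    return mean, perturbed

# usage: 10 hypothetical GCMs, 500 grid points, ~2 K warming with spread
incs = np.random.randn(10, 500) * 0.5 + 2.0
mean, members = modal_increments(incs, n_modes=2)
print(len(members), "perturbed increments around the ensemble mean")
```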

  4. Evaluation of an emergency department-based enrollment program for uninsured children.

    PubMed

    Mahajan, Prashant; Stanley, Rachel; Ross, Kevin W; Clark, Linda; Sandberg, Keisha; Lichtenstein, Richard

    2005-03-01

    We evaluate the effectiveness of an emergency department (ED)-based outreach program in increasing the enrollment of uninsured children. The study involved placing a full-time worker trained to enroll uninsured children into Medicaid or the State Children's Health Insurance Program in an inner-city academic children's hospital ED. Analysis was carried out for outpatient ED visits by insurance status, average revenue per patient from uninsured and insured children, proportion of patients enrolled in Medicaid and State Children's Health Insurance Program through this program, estimated incremental revenue from new enrollees, and program-specific incremental costs. A cost-benefit analysis and breakeven analysis was conducted to determine the impact of this intervention on ED revenues. Five thousand ninety-four uninsured children were treated during the 10 consecutive months assessed, and 4,667 were treated during program hours. One thousand eight hundred and three applications were filed, giving a program penetration rate of 39%. Eighty-four percent of applications filed were resolved (67% of these were Medicaid). Average revenue from each outpatient ED visit for Medicaid was US $135.68, other insurance was US $210.43, and uninsured was US $15.03. Estimated incremental revenue for each uninsured patient converted to Medicaid was US $120.65. Total annualized incremental revenue was US $224,474, and the net incremental revenue, after accounting for program costs, was US $157,414 per year. A program enrolling uninsured children at an inner-city pediatric ED into government insurance was effective and generated revenue that paid for program costs.
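
    As a quick arithmetic check (not part of the original report), the per-visit incremental revenue quoted above is simply the difference between the average Medicaid and uninsured revenues:

```latex
\[
\$135.68 \;-\; \$15.03 \;=\; \$120.65 \ \text{per converted outpatient ED visit.}
\]
```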

  5. Cost-effectiveness of population based BRCA testing with varying Ashkenazi Jewish ancestry.

    PubMed

    Manchanda, Ranjit; Patel, Shreeya; Antoniou, Antonis C; Levy-Lahad, Ephrat; Turnbull, Clare; Evans, D Gareth; Hopper, John L; Macinnis, Robert J; Menon, Usha; Jacobs, Ian; Legood, Rosa

    2017-11-01

    Population-based BRCA1/BRCA2 testing has been found to be cost-effective compared with family history-based testing in Ashkenazi-Jewish women >30 years old with 4 Ashkenazi-Jewish grandparents. However, individuals may have 1, 2, or 3 Ashkenazi-Jewish grandparents, and cost-effectiveness data are lacking at these lower BRCA prevalence estimates. We present an updated cost-effectiveness analysis of population BRCA1/BRCA2 testing for women with 1, 2, and 3 Ashkenazi-Jewish grandparents. Lifetime costs and effects of population and family history-based testing were compared with the use of a decision analysis model. 56% of BRCA carriers are missed by family history criteria alone. Analyses were conducted for United Kingdom and United States populations. Model parameters were obtained from the Genetic Cancer Prediction through Population Screening trial and published literature. Model parameters and BRCA population prevalence for individuals with 3, 2, or 1 Ashkenazi-Jewish grandparent were adjusted for the relative frequency of BRCA mutations in the Ashkenazi-Jewish and general populations. Incremental cost-effectiveness ratios were calculated for all Ashkenazi-Jewish grandparent scenarios. Costs, along with outcomes, were discounted at 3.5%. The time horizon of the analysis is "lifetime," and the perspective is "payer." Probabilistic sensitivity analysis evaluated model uncertainty. Population testing for BRCA mutations is cost-saving in Ashkenazi-Jewish women with 2, 3, or 4 grandparents (22-33 days life gained) in the United Kingdom and 1, 2, 3, or 4 grandparents (12-26 days life gained) in the United States populations, respectively. It is also extremely cost-effective in women in the United Kingdom with just 1 Ashkenazi-Jewish grandparent with an incremental cost-effectiveness ratio of £863 per quality-adjusted life-year and 15 days life gained. Results show that population testing remains cost-effective at the £20,000-30,000 per quality-adjusted life-year and $100,000 per quality-adjusted life-year willingness-to-pay thresholds for all 4 Ashkenazi-Jewish grandparent scenarios, with ≥95% of simulations found to be cost-effective on probabilistic sensitivity analysis. Population testing remains cost-effective in the absence of reduction in breast cancer risk from oophorectomy and at lower risk-reducing mastectomy (13%) or risk-reducing salpingo-oophorectomy (20%) rates. Population testing for BRCA mutations with varying levels of Ashkenazi-Jewish ancestry is cost-effective in the United Kingdom and the United States. These results support population testing in Ashkenazi-Jewish women with 1-4 Ashkenazi-Jewish grandparent ancestry. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. 10 CFR 72.70 - Safety analysis report updating.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...

  7. 10 CFR 72.70 - Safety analysis report updating.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...

  8. 10 CFR 72.70 - Safety analysis report updating.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...

  9. 10 CFR 72.70 - Safety analysis report updating.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...

  10. Cost effectiveness analysis of immunotherapy in patients with grass pollen allergic rhinoconjunctivitis in Germany.

    PubMed

    Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M

    2012-01-01

    An economic evaluation was conducted to assess the outcomes and costs as well as cost-effectiveness of the following grass-pollen immunotherapies: OA (Oralair; Stallergenes S.A., Antony, France) vs GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló) (immunotherapy agents alongside symptomatic medication) and symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses show that incremental outcomes were moderately sensitive to changes in efficacy estimates. The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests OA to be cost-effective compared to GRZ and ALD, and a symptomatic treatment. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.
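
    A minimal sketch of a Markov cohort model of the kind described above (yearly cycles, discounting, per-state costs and utilities). The states, transition probabilities, costs, and utilities below are hypothetical placeholders, not the published model inputs; the ICER between two strategies would then be the ratio of their discounted cost and QALY differences.

```python
import numpy as np

# states: 0 = rhinoconjunctivitis on immunotherapy, 1 = symptomatic only, 2 = asthma
P = np.array([[0.85, 0.12, 0.03],      # hypothetical yearly transition probabilities
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]])
cost = np.array([900.0, 300.0, 800.0])     # hypothetical yearly costs (EUR)
utility = np.array([0.85, 0.80, 0.70])     # hypothetical yearly QALY weights

def run_cohort(P, cost, utility, years=9, discount=0.03):
    state = np.array([1.0, 0.0, 0.0])      # whole cohort starts on immunotherapy
    total_cost = total_qaly = 0.0
    for year in range(years):
        disc = 1.0 / (1.0 + discount) ** year
        total_cost += disc * state @ cost
        total_qaly += disc * state @ utility
        state = state @ P                   # advance the cohort one yearly cycle
    return total_cost, total_qaly

c1, q1 = run_cohort(P, cost, utility)
print(f"discounted cost {c1:.0f} EUR, QALYs {q1:.2f}")
```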

  11. MEDIAN-BASED INCREMENTAL COST-EFFECTIVENESS RATIOS WITH CENSORED DATA

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2016-01-01

    Cost-effectiveness is an essential part of treatment evaluation, in addition to effectiveness. In the cost-effectiveness analysis, a measure called the incremental cost-effectiveness ratio (ICER) is widely utilized, and the mean cost and the mean (quality-adjusted) life years have served as norms to summarize cost and effectiveness for a study population. Recently, the median-based ICER was proposed for complementary or sensitivity analysis purposes. In this paper, we extend this method when some data are censored. PMID:26010599
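
    Ignoring censoring for brevity, the contrast between the conventional mean-based ICER and the median-based variant mentioned above can be sketched as follows; the per-patient cost and QALY data are synthetic, and skewed costs are used to show why the two summaries can differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic per-patient costs (right-skewed) and QALYs for control and treatment arms
cost0, qaly0 = rng.gamma(2.0, 5000.0, 500), rng.normal(5.0, 1.0, 500)
cost1, qaly1 = rng.gamma(2.0, 6000.0, 500), rng.normal(5.4, 1.0, 500)

mean_icer = (cost1.mean() - cost0.mean()) / (qaly1.mean() - qaly0.mean())
median_icer = (np.median(cost1) - np.median(cost0)) / (np.median(qaly1) - np.median(qaly0))
print(f"mean-based ICER   {mean_icer:,.0f} per QALY")
print(f"median-based ICER {median_icer:,.0f} per QALY")
```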

  12. Using Data Assimilation Diagnostics to Assess the SMAP Level-4 Soil Moisture Product

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe

    2018-01-01

    The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx.2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.

  13. Global Assessment of the SMAP Level-4 Soil Moisture Product Using Assimilation Diagnostics

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe

    2018-01-01

    The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx. 2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.

  14. Lines of Evidence–Incremental Markings in Molar Enamel of Soay Sheep as Revealed by a Fluorochrome Labeling and Backscattered Electron Imaging Study

    PubMed Central

    Kierdorf, Horst; Kierdorf, Uwe; Frölich, Kai; Witzel, Carsten

    2013-01-01

    We studied the structural characteristics and periodicities of regular incremental markings in sheep enamel using fluorochrome injections for vital labeling of forming enamel and backscattered electron imaging in the scanning electron microscope. Microscopic analysis of mandibular first molars revealed the presence of incremental markings with a daily periodicity (laminations) that indicated successive positions of the forming front of interprismatic enamel. In addition to the laminations, incremental markings with a sub-daily periodicity were discernible both in interprismatic enamel and in enamel prisms. Five sub-daily increments were present between two consecutive laminations. Backscattered electron imaging revealed that each sub-daily growth increment consisted of a broader and more highly mineralized band and a narrower and less mineralized band (line). The sub-daily markings in the prisms of sheep enamel morphologically resembled the (daily) prisms cross striations seen in primate enamel. Incremental markings with a supra-daily periodicity were not observed in sheep enamel. Based on the periodicity of the incremental markings, maximum mean daily apposition rates of 17.0 µm in buccal enamel and of 13.4 µm in lingual enamel were recorded. Enamel extension rates were also high, with maximum means of 180 µm/day and 217 µm/day in upper crown areas of buccal and lingual enamel, respectively. Values in more cervical crown portions were markedly lower. Our results are in accordance with previous findings in other ungulate species. Using the incremental markings present in primate enamel as a reference could result in a misinterpretation of the incremental markings in ungulate enamel. Thus, the sub-daily growth increments in the prisms of ungulate enamel might be mistaken as prism cross striations with a daily periodicity, and the laminations misidentified as striae of Retzius with a supra-daily periodicity. This would lead to a considerable overestimation of crown formation times in ungulate teeth. PMID:24040293

  15. International Space Station Increment Operations Services

    NASA Astrophysics Data System (ADS)

    Michaelis, Horst; Sielaff, Christian

    2002-01-01

    The Industrial Operator (IO) has defined End-to-End services to perform efficiently all required operations tasks for the Manned Space Program (MSP), as agreed during the Ministerial Council in Edinburgh in November 2001. Those services are the result of a detailed task analysis based on the operations processes as derived from the Space Station Program Implementation Plans (SPIP) and defined in the Operations Processes Documents (OPD). These services are related to ISS Increment Operations and ATV Mission Operations. Each of these End-to-End services is typically characterised by the following properties: it has a clearly defined starting point, where all requirements on the end product are fixed and the associated performance metrics of the customer are well defined; it has a clearly defined ending point, when the product or service is delivered to the customer and accepted by him, according to the performance metrics defined at the starting point; and the implementation of the process might be restricted by external boundary conditions and constraints mutually agreed with the customer. As long as those are respected, the IO is free to select the methods and means of implementation. The ISS Increment Operations Service (IOS) activities required for the MSP Exploitation program cover the complete increment-specific cycle, starting with support to strategic planning and ending with the post-increment evaluation. These activities are divided into sub-services comprising the following tasks: ISS Planning Support, covering support to strategic and tactical planning up to the generation; Development & Payload Integration Support; ISS Increment Preparation; and ISS Increment Execution. These processes are tied together by Increment Integration Management, which provides the planning and scheduling of all activities as well as the technical management of the overall process. The paper describes the entire End-to-End ISS Increment Operations service and its implementation to support the Columbus Flight 1E related increment and subsequent ISS increments. Special attention is paid to the implications caused by long-term operations on hardware, software and operations personnel.

  16. Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure

    NASA Technical Reports Server (NTRS)

    Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.

    1992-01-01

    Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two dimensional plain strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results with those for the classical two contacting semicylinders obtained from the Hertz method. Agreement is excellent.

  17. Efficient dynamic graph construction for inductive semi-supervised learning.

    PubMed

    Dornaika, F; Dahbi, R; Bosaghzadeh, A; Ruichek, Y

    2017-10-01

    Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Addressing graph construction for the inductive setting, in which data arrive sequentially, has received much less attention. For inductive settings, constructing the graph from scratch can be very time-consuming. This paper introduces a generic framework that is able to make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph. As a case study, we use the recently proposed Two Phase Weighted Regularized Least Square (TPWRLS) graph construction method. The paper has two main contributions. First, we use the TPWRLS coding scheme to represent new sample(s) with respect to an existing database. The representative coefficients are then used to update the graph affinity matrix. The proposed method not only appends the new samples to the graph but also updates the whole graph structure by discovering which nodes are affected by the introduction of new samples and by updating their edge weights. The second contribution of the article is the application of the proposed framework to the problem of graph-based label propagation using multiple observations for vision-based recognition tasks. Experiments on several image databases show that, without any significant loss in the accuracy of the final classification, the proposed dynamic graph construction is more efficient than the batch graph construction. Copyright © 2017 Elsevier Ltd. All rights reserved.
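
    The incremental step described above can be sketched generically. Here a Gaussian-kernel affinity (not the TPWRLS coding of the paper) is used to append a new sample's row and column to an existing affinity matrix; the re-weighting of edges among affected existing nodes, which the paper also performs, is omitted.

```python
import numpy as np

def add_sample(W, X, x_new, sigma=1.0):
    """Append one sample to an existing graph: compute its affinities to the current
    nodes with a Gaussian kernel and grow the affinity matrix by one row and column."""
    d2 = np.sum((X - x_new) ** 2, axis=1)
    a = np.exp(-d2 / (2.0 * sigma ** 2))            # affinities to existing nodes
    n = W.shape[0]
    W_new = np.zeros((n + 1, n + 1))
    W_new[:n, :n] = W
    W_new[:n, n] = a
    W_new[n, :n] = a                                 # keep the graph symmetric
    X_new = np.vstack([X, x_new])
    return W_new, X_new

# usage: start from a small batch-built graph, then add samples one by one
X = np.random.randn(10, 3)
W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=2) / 2.0)
for _ in range(5):
    W, X = add_sample(W, X, np.random.randn(3))
print(W.shape)
```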

  18. Continuous Improvement of a Groundwater Model over a 20-Year Period: Lessons Learned.

    PubMed

    Andersen, Peter F; Ross, James L; Fenske, Jon P

    2018-04-17

    Groundwater models developed for specific sites generally become obsolete within a few years due to changes in: (1) modeling technology; (2) site/project personnel; (3) project funding; and (4) modeling objectives. Consequently, new models are sometimes developed for the same sites using the latest technology and data, but without potential knowledge gained from the prior models. When it occurs, this practice is particularly problematic because, although technology, data, and observed conditions change, development of the new numerical model may not consider the conceptual model's underpinnings. As a contrary situation, we present the unique case of a numerical flow and trichloroethylene (TCE) transport model that was first developed in 1993 and since revised and updated annually by the same personnel. The updates are prompted by an increase in the amount of data, exposure to a wider range of hydrologic conditions over increasingly longer timeframes, technological advances, evolving modeling objectives, and revised modeling methodologies. The history of updates shows smooth, incremental changes in the conceptual model and modeled aquifer parameters that result from both increase and decrease in complexity. Myriad modeling objectives have included demonstrating the ineffectiveness of a groundwater extraction/injection system, evaluating potential TCE degradation, locating new monitoring points, and predicting likelihood of exceedance of groundwater standards. The application emphasizes an original tenet of successful groundwater modeling: iterative adjustment of the conceptual model based on observations of actual vs. model response. © 2018, National Ground Water Association.

  19. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
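
    A minimal Monte Carlo sketch of the phenomenon described above: the outcome depends only on one latent construct, both predictors are unreliable measures of that same construct, yet the "incremental" predictor is frequently declared significant. Sample size, reliability, and trial counts are illustrative, and this is not the authors' simulation code.

```python
import numpy as np
from scipy import stats

def false_positive_rate(n=500, reliability=0.6, trials=1000, alpha=0.05):
    """Count how often x2 appears to add significant incremental validity over x1
    in a multiple regression, even though x2 reflects no construct beyond the one
    x1 already (imperfectly) measures."""
    rng = np.random.default_rng(1)
    hits = 0
    err_sd = np.sqrt(1.0 / reliability - 1.0)        # measurement error for given reliability
    for _ in range(trials):
        c = rng.normal(size=n)                       # latent construct
        y = c + rng.normal(size=n)                   # outcome driven by the construct only
        x1 = c + err_sd * rng.normal(size=n)         # unreliable measure 1
        x2 = c + err_sd * rng.normal(size=n)         # unreliable measure 2
        X = np.column_stack([np.ones(n), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        df = n - X.shape[1]
        sigma2 = resid @ resid / df
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
        p_x2 = 2 * stats.t.sf(abs(beta[2] / se[2]), df)
        hits += p_x2 < alpha
    return hits / trials

print(false_positive_rate())   # far above the nominal 5% rate
```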

  20. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply the detrended fluctuation analysis (DFA) to the statistics of the Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of the futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.
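
    For readers unfamiliar with the method, the following is a compact, generic sketch of detrended fluctuation analysis applied to a series of increments; the synthetic input series and the window sizes are assumptions, and the snippet is not the authors' code.

      # Generic DFA sketch: integrate the (mean-removed) series, detrend it in
      # windows of size s, and fit the scaling of the rms fluctuation F(s).
      import numpy as np

      def dfa(x, scales):
          y = np.cumsum(x - np.mean(x))            # integrated profile
          F = []
          for s in scales:
              rms = []
              for i in range(len(y) // s):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                  rms.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.array(F)

      rng = np.random.default_rng(1)
      increments = rng.normal(size=2 ** 14)        # stand-in for log price increments
      scales = np.array([16, 32, 64, 128, 256, 512])
      slope = np.polyfit(np.log(scales), np.log(dfa(increments, scales)), 1)[0]
      print(f"DFA exponent ~ {slope:.2f}")         # ~0.5 means no long memory
      # Applying the same routine to |increments| (a volatility proxy) before and
      # after shuffling is one way to test whether long memory survives shuffling.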

  1. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit the long-memory property. To analyze whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to logarithmic increments of KTB futures, it is of importance to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes can be supported by the hypothesis of price changes.

  2. 10 CFR 72.248 - Safety analysis report updating.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... appropriate, the last update to the FSAR under this section. The update shall include the effects 1 of: 1... for a spent fuel storage cask design shall update periodically, as provided in paragraph (b) of this... the issued Certificate of Compliance (CoC). (b) Each update shall contain all the changes necessary to...

  3. 10 CFR 72.248 - Safety analysis report updating.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... appropriate, the last update to the FSAR under this section. The update shall include the effects 1 of: 1... for a spent fuel storage cask design shall update periodically, as provided in paragraph (b) of this... the issued Certificate of Compliance (CoC). (b) Each update shall contain all the changes necessary to...

  4. 10 CFR 72.248 - Safety analysis report updating.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... appropriate, the last update to the FSAR under this section. The update shall include the effects 1 of: 1... for a spent fuel storage cask design shall update periodically, as provided in paragraph (b) of this... the issued Certificate of Compliance (CoC). (b) Each update shall contain all the changes necessary to...

  5. 10 CFR 72.248 - Safety analysis report updating.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... appropriate, the last update to the FSAR under this section. The update shall include the effects 1 of: 1... for a spent fuel storage cask design shall update periodically, as provided in paragraph (b) of this... the issued Certificate of Compliance (CoC). (b) Each update shall contain all the changes necessary to...

  6. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation

    PubMed Central

    Reeves, Anthony P.; Xie, Yiting; Liu, Shuang

    2017-01-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset. PMID:28612037

  7. DYCAST: A finite element program for the crash analysis of structures

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.; Ogilvie, P.

    1987-01-01

    DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles, and spring elements. Changing stiffness in the structure due to plasticity and very large deflections is accounted for. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variation due to structural failures is computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time-dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
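
    As an illustration of the first integrator listed above, here is a minimal sketch of fixed-step central-difference time integration for a linear system M u'' + C u' + K u = f(t); it is a generic textbook scheme, not DYCAST itself, and the sample matrices, simplified starting step, and time step are assumptions.

      # Explicit central-difference integration of M u'' + C u' + K u = f(t).
      import numpy as np

      def central_difference(M, C, K, f, u0, v0, dt, n_steps):
          M, C, K = map(np.atleast_2d, (M, C, K))
          u_prev = u0 - dt * v0                     # simplified fictitious step at t = -dt
          u = u0.copy()
          A = M / dt**2 + C / (2 * dt)
          history = [u0.copy()]
          for k in range(n_steps):
              rhs = (f(k * dt) - (K - 2 * M / dt**2) @ u
                     - (M / dt**2 - C / (2 * dt)) @ u_prev)
              u_prev, u = u, np.linalg.solve(A, rhs)
              history.append(u.copy())
          return np.array(history)

      # Single degree of freedom example: unit mass and stiffness, light damping.
      M = np.array([[1.0]]); C = np.array([[0.05]]); K = np.array([[1.0]])
      resp = central_difference(M, C, K, lambda t: np.array([np.sin(t)]),
                                np.array([0.0]), np.array([0.0]), dt=0.01, n_steps=1000)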

  8. Association of estrogen receptor gene polymorphisms with human precocious puberty: a systematic review and meta-analysis.

    PubMed

    Luo, Yan; Liu, Qin; Lei, Xun; Wen, Yi; Yang, Ya-Lan; Zhang, Rui; Hu, Meng-Yao

    2015-07-01

    This study aims to estimate the association of ESR1 polymorphisms (PvuII and XbaI) and ESR2 polymorphisms (RsaI and AluI) with precocious puberty. Relevant studies published before March 2014 were retrieved by an electronic search of nine databases. Pooled odds ratios (ORs) with 95% confidence intervals (CIs) were calculated by meta-analysis. Four eligible case-control studies including 491 precocious puberty patients and 370 healthy controls were identified. Three studies reported the ESR1 PvuII and XbaI polymorphisms and one study reported the ESR2 RsaI and AluI polymorphisms. An increased risk of precocious puberty was associated with the PvuII polymorphism in the heterosis model (CT versus TT: OR 1.42, 95% CI: 1.05-1.91, p = 0.02). Risk of precocious puberty was associated with the XbaI polymorphism in the dominant model (GG + GA versus AA: OR 1.48, 95% CI: 1.11-1.97, p = 0.007) and the heterosis model (GA versus AA: OR 1.68, 95% CI: 1.23-2.29, p = 0.001). This meta-analysis suggests that the ESR1 XbaI and PvuII polymorphisms are associated with precocious puberty susceptibility, while the relationship of the ESR2 RsaI and AluI polymorphisms with precocious puberty remains to be further investigated. Well-designed studies with large sample sizes across different polymorphisms and ethnicities are urgently needed to provide and update reliable data for a comprehensive and definite conclusion.

  9. Optimal tree increment models for the Northeastern United States

    Treesearch

    Don C. Bragg

    2003-01-01

    I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...

  10. Optimal Tree Increment Models for the Northeastern United States

    Treesearch

    Don C. Bragg

    2005-01-01

    I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...

  11. High-resolution analysis of stem increment and sap flow for loblolly pine trees attacked by southern pine beetle

    Treesearch

    Stan D. Wullschleger; Samuel B. McLaughlin; Matthew P. Ayres

    2004-01-01

    Manual and automated dendrometers, and thermal dissipation probes were used to measure stem increment and sap flow for loblolly pine (Pinus taeda L.) trees attacked by southern pine beetle (Dendroctonus frontalis Zimm.) in east Tennessee, USA. Seasonal-long measurements with manual dendrometers indicated linear increases in stem...

  12. Mathematic analysis of incremental packing density with detachable coils: does that last coil matter much?

    PubMed

    Taussky, P; Kallmes, D F; Cloft, H

    2012-05-01

    Higher packing density of coils in cerebral aneurysms is associated with a decreased recurrence rate. However, geometric relationships suggest that an additional coil may have very little effect on packing density as aneurysm size increases. We mathematically evaluated the relationship between aneurysm size and incremental packing density for currently available coils.
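
    The geometric point can be made concrete with a simple volume ratio; the coil dimensions and aneurysm radii below are assumed values chosen for illustration, not figures from the paper.

      \[
        \text{packing density} \;=\; \frac{V_{\text{coil}}}{V_{\text{aneurysm}}}
        \;=\; \frac{\pi (d/2)^{2}\,\ell}{\tfrac{4}{3}\pi R^{3}}
      \]

    For example, a single 10 cm coil of 0.25 mm diameter contributes roughly 4.9 mm^3 of metal: in an aneurysm of 4 mm radius (about 268 mm^3) that is an increment of roughly 1.8% in packing density, whereas in an aneurysm of 8 mm radius (about 2145 mm^3) the same coil adds only about 0.2%.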

  13. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity and analyzes the upper and lower bounds of different-to-same mis-affiliation; fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity; smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. These contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experimental results verified the theoretical conclusions.

  14. Structure of analysis-minus-observation misfits within a global ocean reanalysis system: implications for atmospheric reanalyses

    NASA Astrophysics Data System (ADS)

    Carton, James; Chepurin, Gennady

    2017-04-01

    While atmospheric reanalyses do not ingest data from the subsurface ocean they must produce fluxes consistent with, for example, ocean storage and divergence of heat transport. Here we present a test of the consistency of two different atmospheric reanalyses with 2.5 million global ocean temperature observations during the data-rich eight year period 2007-2014. The examination is carried out by using atmospheric reanalysis variables to drive the SODA3 ocean reanalysis system, and then collecting and analyzing the temperature analysis increments (observation misfits). For the widely used MERRA2 and ERA-Int atmospheric reanalyses the temperature analysis increments reveal inconsistencies between those atmospheric fluxes and the ocean observations in the range of 10-30 W/m2. In the interior basins excess heat during a single assimilation cycle is stored primarily locally within the mixed layer, a simplification of the heat budget that allows us to identify the source of the error as the specified net surface heat flux. Along the equator the increments are primarily confined to thermocline depths indicating the primary source of the error is dominated by heat transport divergence. The error in equatorial heat transport divergence, in turn, can be traced to errors in the strength of the equatorial trade winds. We test our conclusions by introducing modifications of the atmospheric reanalyses based on analysis of ocean temperature analysis increments and repeating the ocean reanalysis experiments using the modified surface fluxes. Comparison of the experiments reveals that the modified fluxes reduce the misfit to ocean observations as well as the differences between the different atmospheric reanalyses.

  15. Requirements, Resource Planning, and Management for Decrewing/Recrewing Scenarios of the International Space Station

    NASA Technical Reports Server (NTRS)

    Bach, David A.; Brand, Susan N.; Hasbrook, Peter V.

    2013-01-01

    Following the failure of 44 Progress (44P) on launch in August 2011, and the subsequent grounding of all Russian Soyuz rocket based launches, the International Space Station (ISS) ground teams engaged in an effort to determine how long the ISS could remain crewed, what would be required to safely configure the ISS for decrewing, and what would be required to recrew the ISS upon resumption of Soyuz rocket launches if decrewing became necessary. This White Paper was written to capture the processes and lessons learned from real-time events and to provide a reference and training document for ISS Program teams in the event decrewing of the ISS is needed. Through coordination meetings and assessments, teams identified six decrewing priorities for ground and crew operations. These priorities were integrated along with preflight priorities through the Increment re-planning process. Additionally, the teams reviewed, updated, and implemented changes to the governing documentation for the configuration of the ISS for a contingency decrewing event. Steps were taken to identify critical items for disposal prior to decrewing, as well as to identify the required items to be strategically staged or flown with the astronauts and cosmonauts who would eventually recrew the ISS. After the successful launches and dockings of both 45P and 28 Soyuz (28S), the decrewing team transitioned to finalizing and publishing the documentation for standardizing the decrewing flight rules. With the continued launching of crews and cargo to the ISS, utilization and science are again a high priority, with both Increment pairs 29 and 30 and Increments 31 and 32 reaching the milestone of an average of at least 35 hours of utilization per week.

  16. Requirements, Resource Planning and Management for Decrewing/Recrewing Scenarios of the International Space Station

    NASA Astrophysics Data System (ADS)

    Bach, David A.; Brand, Susan N.; Hasbrook, Peter V.

    2013-09-01

    Following the failure of 44 Progress (44P) on launch in August 2011, and the subsequent grounding of all Russian Soyuz rocket based launches, the International Space Station (ISS) ground teams engaged in an effort to determine how long the ISS could remain crewed, what would be required to safely configure the ISS for decrewing, and what would be required to recrew the ISS upon resumption of Soyuz rocket launches if decrewing became necessary. This White Paper was written to capture the processes and lessons learned from real-time events and to provide a reference and training document for ISS Program teams in the event decrewing of the ISS is needed. Through coordination meetings and assessments, teams identified six decrewing priorities for ground and crew operations. These priorities were integrated along with preflight priorities through the Increment re-planning process. Additionally, the teams reviewed, updated, and implemented changes to the governing documentation for the configuration of the ISS for a contingency decrewing event. Steps were taken to identify critical items for disposal prior to decrewing, as well as to identify the required items to be strategically staged or flown with the astronauts and cosmonauts who would eventually recrew the ISS. After the successful launches and dockings of both 45P and 28 Soyuz (28S), the decrewing team transitioned to finalizing and publishing the documentation for standardizing the decrewing flight rules. With the continued launching of crews and cargo to the ISS, utilization and science are again a high priority, with both Increment pairs 29 and 30 and Increments 31 and 32 reaching the milestone of an average of at least 35 hours of utilization per week.

  17. Why and how did Israel adopt activity-based hospital payment? The Procedure-Related Group incremental reform.

    PubMed

    Brammli-Greenberg, Shuli; Waitzberg, Ruth; Perman, Vadim; Gamzu, Ronni

    2016-10-01

    Historically, Israel paid its non-profit hospitals on a per-diem (PD) basis. Recently, like other OECD countries, Israel has moved to activity-based payments. While most countries have adopted a diagnosis-related group (DRG) payment system, Israel has chosen a Procedure-Related Group (PRG) system. This differs from the DRG system because it classifies patients by procedure rather than diagnosis. In Israel, the PRG system was found to be more feasible given the lack of data and information needed for the DRG classification system. The Ministry of Health (MoH) chose a payment scheme that depends only on in-house creation of PRG codes and costing, thus avoiding dependence on hospital data. The PRG tariffs are priced by a joint Health and Finance Ministry commission and updated periodically. Moreover, PRGs are believed to achieve the same main efficiency objectives as DRGs: increasing the volume of activity, shortening unnecessary hospitalization days, and reducing the gaps between the costs and prices of activities. The PRG system is being adopted through an incremental reform that started in 2002 and was accelerated in 2010. The Israeli MoH involved the main players in the hospital market in the consolidation of this potentially controversial reform in order to avoid opposition. The reform was implemented incrementally in order to preserve the balance of resource allocation and overall expenditures of the system, thus remaining budget neutral. Yet, as long as gaps remain between the marginal costs and prices of procedures, PRGs will not attain all their objectives. Moreover, it is still crucial to refine PRG rates to reflect the severity of cases, in order to address incentives for selection of patients within each procedure. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.

  18. Declarative Consciousness for Reconstruction

    NASA Astrophysics Data System (ADS)

    Seymour, Leslie G.

    2013-12-01

    Existing information technology tools are harnessed and integrated to provide a digital specification of the human consciousness of individual persons. An incremental compilation technology is proposed as a transformation of LifeLog-derived persona specifications into a canonical representation of the neocortex architecture of the human brain. The primary purpose is to gain an understanding of the semantic allocation of neocortex capacity. Novel neocortex content allocation simulators with browsers are proposed to experiment with various approaches to relieving the brain from overload conditions. An IT model of the neocortex is maintained and updated each time new stimuli are received from the LifeLog data stream, new information is gained from brain signal measurements, and new functional dependencies are discovered between signals consumed or produced by the live persona.

  19. Distributed Coordination of Energy Storage with Distributed Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Wu, Di; Stoorvogel, Antonie A.

    2016-07-18

    With a growing emphasis on energy efficiency and system flexibility, a great effort has been made recently in developing distributed energy resources (DER), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both the system and device levels, including the power balance constraint, generator output limits, storage energy and power capacities, and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER only maintains a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
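
    A simplified sketch of the general idea of consensus-based coordination on a common incremental cost is given below; the cost coefficients, neighbor weights, and the use of a globally computed power mismatch are assumptions made for brevity, and the paper's algorithm additionally handles storage constraints and estimates the mismatch in a distributed fashion.

      # Consensus + mismatch-feedback sketch for three dispatchable DERs with
      # quadratic costs a_i * p_i^2 + b_i * p_i; all numbers are illustrative.
      import numpy as np

      a = np.array([0.08, 0.06, 0.10])         # quadratic cost coefficients
      b = np.array([2.0, 3.0, 2.5])            # linear cost coefficients
      p_min, p_max = np.zeros(3), np.array([80.0, 90.0, 70.0])
      demand = 150.0
      W = np.array([[0.6, 0.2, 0.2],           # row-stochastic neighbor weights
                    [0.2, 0.6, 0.2],
                    [0.2, 0.2, 0.6]])

      lam = b.copy()                           # each agent's local incremental cost
      eps = 0.002                              # feedback gain on the power mismatch
      for _ in range(3000):
          p = np.clip((lam - b) / (2 * a), p_min, p_max)   # each DER's local response
          lam = W @ lam + eps * (demand - p.sum())         # consensus + mismatch feedback
      print("incremental costs:", np.round(lam, 3))
      print("outputs:", np.round(p, 2), "total:", round(float(p.sum()), 2))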

  20. Real-time garbage collection for list processing

    NASA Technical Reports Server (NTRS)

    Shuler, R. L., Jr. (Inventor)

    1986-01-01

    In a list processing system, small reference counters are maintained in conjunction with memory cells for the purpose of identifying memory cells that become available for re-use. The counters are updated as references to the cells are created and destroyed, and when a counter of a cell is decremented to logical zero the cell is immediately returned to a list of free cells. In those cases where a counter must be incremented beyond the maximum value that can be represented in a small counter, the cell is restructured so that the additional reference count can be represented. The restructuring involves allocating an additional cell, distributing counter, tag, and pointer information among the two cells, and linking both cells appropriately into the existing list structure.
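
    An illustrative sketch of the counter-overflow idea follows; it is not the patented implementation, and the counter width, field names, and fold-back policy are assumptions.

      # Small per-cell reference counters with an overflow escape: when the small
      # counter would exceed its maximum, the count is moved to an auxiliary cell.
      MAX_SMALL = 3                      # assumed width of the "small" counter field

      class Cell:
          def __init__(self):
              self.count = 0             # small reference counter
              self.overflow = None       # optional auxiliary cell holding a big count

      free_list = []

      def add_ref(cell):
          if cell.overflow is not None:
              cell.overflow.count += 1
          elif cell.count < MAX_SMALL:
              cell.count += 1
          else:                          # restructure: move the count to a new cell
              aux = free_list.pop() if free_list else Cell()
              aux.count = cell.count + 1
              cell.overflow = aux

      def drop_ref(cell):
          target = cell.overflow if cell.overflow is not None else cell
          target.count -= 1
          if target is not cell and target.count <= MAX_SMALL:
              cell.count, cell.overflow = target.count, None   # fold the count back
              free_list.append(target)
          if cell.overflow is None and cell.count == 0:
              free_list.append(cell)     # immediately reusable, no collection pause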

  1. Functional data analysis on ground reaction force of military load carriage increment

    NASA Astrophysics Data System (ADS)

    Din, Wan Rozita Wan; Rambely, Azmin Sham

    2014-06-01

    Analysis of ground reaction force (GRF) on military load carriage is done through the functional data analysis (FDA) statistical technique. The main objective of the research is to investigate the effect of 10% load increments and to find the maximum suitable load for the Malaysian military. Ten soldiers aged 31 ± 6.2 years, weighing 71.6 ± 10.4 kg and with a height of 166.3 ± 5.9 cm, carrying military loads ranging from 0% body weight (BW) up to 40% BW, participated in an experiment to gather GRF and kinematic data using a Vicon Motion Analysis System, Kistler force plates, and thirty-nine body markers. The analysis is conducted in the sagittal, medial-lateral, and anterior-posterior planes. The results show that a 10% BW load increment has an effect at heel strike and toe-off for all three planes analyzed, with P-values less than 0.001 at the 0.05 significance level. FDA proves to be one of the best statistical techniques for analyzing functional data. It has the ability to handle filtering, smoothing, and curve alignment according to curve features and points of interest.

  2. Cost-utility analysis of screening for diabetic retinopathy in Japan: a probabilistic Markov modeling study.

    PubMed

    Kawasaki, Ryo; Akune, Yoko; Hiratsuka, Yoshimune; Fukuhara, Shunichi; Yamada, Masakazu

    2015-02-01

    To evaluate the cost-effectiveness of screening intervals longer than 1 year for detecting diabetic retinopathy (DR), through the estimation of incremental costs per quality-adjusted life year (QALY), based on the best available clinical data in Japan. A Markov model with a probabilistic cohort analysis was framed to calculate incremental costs per QALY gained by implementing a screening program for detecting DR in Japan. A 1-year cycle length and a population size of 50,000 with a 50-year time horizon (age 40-90 years) were used. The best available clinical data from publications and national surveillance data were used, and the model was designed to include current diagnosis and management of DR with corresponding visual outcomes. One-way and probabilistic sensitivity analyses were performed considering uncertainties in the parameters. In the base-case analysis, the strategy with a screening program resulted in an incremental cost of 5,147 Japanese yen (¥; US$64.6) and an incremental effectiveness of 0.0054 QALYs per person screened. The incremental cost-effectiveness ratio was ¥944,981 (US$11,857) per QALY. The simulation suggested that screening would result in a significant reduction in blindness in people aged 40 years or over (-16%). Sensitivity analyses suggested that, in order to achieve both reductions in blindness and cost-effectiveness in Japan, the screening program should screen those aged 53-84 years at intervals of 3 years or less. An eye screening program in Japan would be cost-effective in detecting DR and preventing blindness from DR, even allowing for the uncertainties in estimates of costs, utility, and current management of DR.

  3. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    NASA Technical Reports Server (NTRS)

    Williams-Hayes, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective to develop and flight-test control systems using neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed - pitch frequency sweep and automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Flight data examination shows that addition of flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.

  4. Using secure messaging to update medications list in ambulatory care setting.

    PubMed

    Raghu, T S; Frey, Keith; Chang, Yu-Hui; Cheng, Meng-Ru; Freimund, Sharon; Patel, Asha

    2015-10-01

    This study analyzed patient adoption of secure messaging to update medication lists in an ambulatory care setting. The objectives were to establish demographic differences between users and non-users of secure messaging for medication list updates and to compare the efficiency of secure messaging for the updates with fax- and telephone-based updates. The study was a retrospective, cross-sectional analysis of patient medical records and pharmacy call logs at Mayo Clinic, Arizona from December 2012 to May 2013, approximately one year after organizing a pharmacy call center for medication updates. A subgroup analysis during a 2-week period was used to measure the time to complete an update. The main dependent variable was the frequency of medication list updates over the study duration; technician time required for the update was also analyzed. A total of 22,495 outpatient visits were drawn and 18,702 unique patients were included in the primary analysis. A total of 402 unique patients were included in the subgroup analysis. The secure message response rate (49.5%) was statistically significantly lower than that for phone calls (54.8%, p<0.001). Time to complete the update was significantly higher for faxed medication lists (Wilcoxon rank-sum tests, p<0.001) than for secure message or phone. Around 50% of patients respond to medication update requests before an office visit when contacted using phone calls and secure messages. Given the demographic differences between users and non-users of the patient portal, mixed-mode communication with patients is likely to be the norm for the foreseeable future in outpatient settings. Copyright © 2015. Published by Elsevier Ireland Ltd.

  5. [Economic impact of nosocomial bacteraemia. A comparison of three calculation methods].

    PubMed

    Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Castells, Xavier; Knobel, Hernando; Cots, Francesc

    2016-12-01

    The excess cost associated with nosocomial bacteraemia (NB) is used as a measurement of the impact of these infections. However, some authors have suggested that traditional methods overestimate the incremental cost due to the presence of various types of bias. The aim of this study was to compare three assessment methods of NB incremental cost to correct biases in previous analyses. Patients who experienced an episode of NB between 2005 and 2007 were compared with patients grouped within the same All Patient Refined-Diagnosis-Related Group (APR-DRG) without NB. The causative organisms were grouped according to the Gram stain and whether bacteraemia was caused by a single microorganism, multiple microorganisms, or a fungus. Three assessment methods were compared: stratification by disease; econometric multivariate adjustment using a generalised linear model (GLM); and propensity score matching (PSM), performed to control for biases in the econometric model. The analysis included 640 admissions with NB and 28,459 without NB. The observed mean cost was €24,515 for admissions with NB and €4,851.6 for controls (without NB). The mean incremental cost was estimated at €14,735 in the stratified analysis. Gram-positive microorganisms had the lowest mean incremental cost, €10,051. In the GLM, the mean incremental cost was estimated at €20,922, and adjusting with PSM, the mean incremental cost was €11,916. The three estimates showed important differences between groups of microorganisms. Using enhanced methodologies improves the adjustment in this type of study and increases the value of the results. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  6. Glycemic index, glycemic load and endometrial cancer risk: results from the Australian National Endometrial Cancer study and an updated systematic review and meta-analysis.

    PubMed

    Nagle, Christina M; Olsen, Catherine M; Ibiebele, Torukiri I; Spurdle, Amanda B; Webb, Penelope M

    2013-03-01

    The relationship between habitual consumption of foods with a high glycemic index (GI) and/or a diet with a high glycemic load (GL) and risk of endometrial cancer is uncertain, and relatively few studies have investigated these associations. The objectives of this study were to examine the association between GI/GL and risk of endometrial cancer using data from an Australian population-based case-control study and to systematically review all the available evidence to quantify the magnitude of the association using meta-analysis. The case-control study included 1,290 women aged 18-79 years with newly diagnosed, histologically confirmed endometrial cancer and 1,436 population controls. Controls were selected to match the expected Australian state of residence and age distribution (in 5-year bands) of cases. For the systematic review, relevant studies were identified by searching PubMed and Embase databases through to July 2011. Random-effects models were used to calculate the summary risk estimates, overall and dose-response. In our case-control study, we observed a modest positive association between high dietary GI (OR 1.43, 95% CI 1.11-1.83) and risk of endometrial cancer, but no association with high dietary GL (OR 1.15, 95% CI 0.90-1.48). For the meta-analysis, we collated information from six cohort and two case-control studies, involving a total of 5,569 cases. The pooled OR for the highest versus the lowest intake category of GI was 1.15 (0.95-1.40); however, there was significant heterogeneity (p = 0.004) by study design (RR 1.00 [95% CI 0.87-1.14] for cohort studies and 1.56 [95% CI 1.21-2.02] for case-control studies). There was no association in the dose-response meta-analysis of GI (RR per 5 unit/day increment of GI 1.00, 95% CI 0.97-1.03). GL was positively associated with endometrial cancer. The pooled RR for the highest versus the lowest GL intake was 1.21 (95% CI 1.09-1.33) and 1.06 (95% CI 1.01-1.11) per 50 unit/day increment of GL in the dose-response meta-analysis. The pooled results from observational studies, including our case-control results, provide evidence of a modest positive association between high GL, but not GI, and endometrial cancer risk.

  7. Utilizing Flight Data to Update Aeroelastic Stability Estimates

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    Stability analysis of high performance aircraft must account for errors in the system model. A method for computing flutter margins that incorporates flight data has been developed using robust stability theory. This paper considers applying this method to update flutter margins during a post-flight or on-line analysis. Areas of modeling uncertainty that arise when using flight data with this method are investigated. The amount of conservatism in the resulting flutter margins depends on the flight data sets used to update the model. Post-flight updates of flutter margins for an F/A-18 are presented along with a simulation of on-line updates during a flight test.

  8. High-resolution sclerochronological analysis of the bivalve mollusk Saxidomus gigantea from Alaska and British Columbia: techniques for revealing environmental archives and archaeological seasonality

    USGS Publications Warehouse

    Hallman, Nadine; Burchell, Meghan; Schone, Bernd R.; Irvine, Gail V.; Maxwell, David

    2009-01-01

    The butter clam, Saxidomus gigantea, is one of the most commonly recovered bivalves from archaeological shell middens on the Pacific Coast of North America. This study presents the results of the sclerochronology of modern specimens of S. gigantea, collected monthly from Pender Island (British Columbia), and additional modern specimens from the Dundas Islands (BC) and Mink and Little Takli Islands (Alaska). The methods presented can be used as a template to interpret local environmental conditions and increase the precision of seasonality estimates in shellfish using sclerochronology and oxygen isotope analysis. This method can also identify, with a high degree of accuracy, the date of shell collection to the nearest fortnightly cycle, the time of day the shell was collected and the approximate tidal elevation (i.e., approx. water depth and distance from the shoreline) from which the shell was collected. Life-history traits of S. gigantea were analyzed to understand the timing of growth line formation, the duration of the growing season, the growth rate, and the reliability of annual increments. We also examine the influence of the tidal regime and freshwater mixing in estuarine locations and how these variables can affect both incremental structures and oxygen isotope values. The results of the sclerochronological analysis show that there is a latitudinal trend in shell growth that needs to be considered when using shells for seasonality studies. Oxygen isotope analysis reveals clear annual cycles with the most positive values corresponding to the annual winter growth lines, and the most negative values corresponding to high temperatures during the summer. Intra-annual increment widths demonstrate clear seasonal oscillations with broadest increments in summer and very narrow increments or no growth during the winter months. This study provides new insights into the biology, geochemistry and seasonal growth of S. gigantea, which are crucial for paleoclimate reconstructions and interpreting seasonality patterns of past human collection.

  9. Are financial incentives cost-effective to support smoking cessation during pregnancy?

    PubMed

    Boyd, Kathleen A; Briggs, Andrew H; Bauld, Linda; Sinclair, Lesley; Tappin, David

    2016-02-01

    To investigate the cost-effectiveness of up to £400 worth of financial incentives for smoking cessation in pregnancy as an adjunct to routine health care. Cost-effectiveness analysis based on a Phase II randomized controlled trial (RCT) and a cost-utility analysis using a life-time Markov model. The RCT was undertaken in Glasgow, Scotland. The economic analysis was undertaken from the UK National Health Service (NHS) perspective. A total of 612 pregnant women were randomized to receive usual cessation support plus or minus financial incentives of up to £400 in vouchers (US $609), contingent upon smoking cessation. Comparison of usual support and incentive interventions in terms of cotinine-validated quitters, quality-adjusted life years (QALYs) and direct costs to the NHS. The incremental cost per quitter at 34-38 weeks pregnant was £1127 ($1716). This is similar to the standard look-up value derived from Stapleton & West's published ICER tables, £1390 per quitter, obtained by looking up the Cessation in Pregnancy Incentives Trial (CIPT) incremental cost (£157) and incremental 6-month quit outcome (0.14). The life-time model resulted in an incremental cost of £17 [95% confidence interval (CI) = -£93, £107] and a gain of 0.04 QALYs (95% CI = -0.058, 0.145), giving an ICER of £482/QALY ($734/QALY). Probabilistic sensitivity analysis indicates uncertainty in these results, particularly regarding relapse after birth. The expected value of perfect information was £30 million (at a willingness to pay of £30,000/QALY), so given current uncertainty, additional research is potentially worthwhile. Financial incentives for smoking cessation in pregnancy are highly cost-effective, with an incremental cost per quality-adjusted life year of £482, which is well below recommended decision thresholds. © 2015 Society for the Study of Addiction.

  10. Heart failure disease management programs: a cost-effectiveness analysis.

    PubMed

    Chan, David C; Heidenreich, Paul A; Weinstein, Milton C; Fonarow, Gregg C

    2008-02-01

    Heart failure (HF) disease management programs have shown impressive reductions in hospitalizations and mortality, but in studies limited to short time frames and high-risk patient populations. Current guidelines thus only recommend disease management targeted to high-risk patients with HF. This study applied a new technique to infer the degree to which clinical trials have targeted patients by risk based on observed rates of hospitalization and death. A Markov model was used to assess the incremental life expectancy and cost of providing disease management for high-risk to low-risk patients. Sensitivity analyses of various long-term scenarios and of reduced effectiveness in low-risk patients were also considered. The incremental cost-effectiveness ratio of extending coverage to all patients was $9700 per life-year gained in the base case. In aggregate, universal coverage almost quadrupled life-years saved as compared to coverage of only the highest quintile of risk. A worst case analysis with simultaneous conservative assumptions yielded an incremental cost-effectiveness ratio of $110,000 per life-year gained. In a probabilistic sensitivity analysis, 99.74% of possible incremental cost-effectiveness ratios were <$50,000 per life-year gained. Heart failure disease management programs are likely cost-effective in the long-term along the whole spectrum of patient risk. Health gains could be extended by enrolling a broader group of patients with HF in disease management.

  11. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
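
    Written generically, the incremental ('delta' or 'correction') form referred to above replaces a direct solve of the large sensitivity system A x = b with an iteration on the correction, where P is an approximate left-hand-side operator such as the spatially split approximate factorization:

      \[
        P\,\Delta x^{\,n} \;=\; b - A\,x^{\,n}, \qquad
        x^{\,n+1} \;=\; x^{\,n} + \Delta x^{\,n}
      \]

    Because the true residual b - A x^n drives the update, the converged solution satisfies the original system exactly even though only the approximate operator is inverted at each step.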

  12. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    NASA Astrophysics Data System (ADS)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many other kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (< -0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static stability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity accompanying the northward progress of the ITCZ and its significant impact on the increments.

  13. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.

  14. Discriminant and Incremental Validity of Self-Concept and Academic Self-Efficacy: A Meta-Analysis

    ERIC Educational Resources Information Center

    Huang, Chiungjung

    2012-01-01

    Two studies examined the discriminant and incremental validity of self-concept and academic self-efficacy. Study 1, which meta-analysed 64 studies comprising 74 independent samples (N = 24,773), found a strong mean correlation of 0.43 between self-concept and academic self-efficacy. The domains of self-concept and self-efficacy, and the domain…

  15. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    PubMed Central

    Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim

    2016-01-01

    This paper makes a two-fold contribution for Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on the analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, as compared to CARQ in a hard underwater environment. PMID:27420061

  16. Effect of preventive zinc supplementation on linear growth in children under 5 years of age in developing countries: a meta-analysis of studies for input to the lives saved tool

    PubMed Central

    2011-01-01

    Introduction Zinc plays an important role in cellular growth, cellular differentiation and metabolism. The results of previous meta-analyses evaluating the effect of zinc supplementation on linear growth are inconsistent. We have updated and evaluated the available evidence according to Grading of Recommendations, Assessment, Development and Evaluation (GRADE) criteria and tried to explain the difference in results of the previous reviews. Methods A literature search was done on PubMed, the Cochrane Library, the IZiNCG database and WHO regional databases using different terms for zinc and linear growth (height). Data were abstracted in a standardized form. Data were analyzed in two ways, i.e., weighted mean difference (effect size) and pooled mean difference for absolute increment in length in centimeters. Random-effects models were used for these pooled estimates. We have given our recommendations for the effectiveness of zinc supplementation in the form of the absolute increment in length (cm) in the zinc-supplemented group compared to control, for input to the Lives Saved Tool (LiST). Results There were thirty-six studies assessing the effect of zinc supplementation on linear growth in children < 5 years from developing countries. In eleven of these studies, zinc was given in combination with other micronutrients (iron, vitamin A, etc.). The final effect size after pooling all the data sets (zinc ± iron, etc.) showed a significant positive effect of zinc supplementation on linear growth [effect size: 0.13 (95% CI 0.04, 0.21), random model] in the developing countries. A subgroup analysis excluding those data sets where zinc was supplemented in combination with iron showed a more pronounced effect of zinc supplementation on linear growth [weighted mean difference 0.19 (95% CI 0.08, 0.30), random model]. A subgroup analysis of studies that reported actual increase in length (cm) showed that a dose of 10 mg zinc/day for a duration of 24 weeks led to a net gain of 0.37 (±0.25) cm in the zinc-supplemented group compared to placebo. This estimate is recommended for inclusion in the Lives Saved Tool (LiST) model. Conclusions Zinc supplementation has a significant positive effect on linear growth, especially when administered alone, and should be included in national strategies to reduce stunting in children < 5 years of age in developing countries. PMID:21501440
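
    For readers who want to see the pooling step in code, the following is a generic DerSimonian-Laird random-effects sketch for mean differences; the effect sizes and standard errors are invented for illustration and are not the zinc trial data.

      # Generic DerSimonian-Laird random-effects pooling of mean differences.
      import numpy as np

      def random_effects_pool(effects, ses):
          effects, ses = np.asarray(effects, float), np.asarray(ses, float)
          w_fixed = 1.0 / ses**2
          mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
          q = np.sum(w_fixed * (effects - mu_fixed) ** 2)          # Cochran's Q
          c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
          tau2 = max(0.0, (q - (len(effects) - 1)) / c)            # between-study variance
          w_rand = 1.0 / (ses**2 + tau2)
          mu = np.sum(w_rand * effects) / np.sum(w_rand)
          se_mu = np.sqrt(1.0 / np.sum(w_rand))
          return mu, (mu - 1.96 * se_mu, mu + 1.96 * se_mu), tau2

      # Hypothetical length gains (cm) and standard errors from five trials:
      mu, ci, tau2 = random_effects_pool([0.30, 0.15, 0.45, 0.05, 0.25],
                                         [0.10, 0.12, 0.20, 0.08, 0.15])
      print(f"pooled mean difference {mu:.2f} cm, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")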

  17. Systems Engineering and Integration (SE and I)

    NASA Technical Reports Server (NTRS)

    Chevers, ED; Haley, Sam

    1990-01-01

    The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a subsequent incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; total quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; and advanced avionics laboratories and rapid prototyping. This presentation is represented by viewgraphs only.

  18. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of type I error, and two other methods (trial sequential analysis and sequential meta-analysis) control for type I and II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Multi-Reanalysis Comparison of Variability in Analysis Increment of Column-Integrated Water Vapor Associated with Madden-Julian Oscillation

    NASA Astrophysics Data System (ADS)

    Yokoi, S.

    2014-12-01

    This study conducts a comparison of three reanalysis products (JRA-55, JRA-25, and ERA-Interim) in representation of Madden-Julian Oscillation (MJO), focusing on column-integrated water vapor (CWV) that is considered as an essential variable for discussing MJO dynamics. Besides the analysis fields of CWV, which exhibit spatio-temporal distributions that are quite similar to satellite observations, CWV tendency simulated by forecast models and analysis increment calculated by data assimilation are examined. For JRA-55, it is revealed that, while its forecast model is able to simulate eastward propagation of the CWV anomaly, it tends to weaken the amplitude, and data assimilation process sustains the amplitude. The multi-reanalysis comparison of the analysis increment further reveals that this weakening bias is probably caused by excessively weak cloud-radiative feedback represented by the model. This bias in the feedback strength makes anomalous moisture supply by the vertical advection term in the CWV budget equation too insensitive to precipitation anomaly, resulting in reduction of the amplitude of CWV anomaly. ERA-Interim has a nearly opposite feature; the forecast model represents excessively strong feedback and unrealistically strengthens the amplitude, while the data assimilation weakens it. These results imply the necessity of accurate representation of the cloud-radiative feedback strength for a short-term MJO forecast, and may be evidence to support the argument that this feedback is essential for the existence of MJO. Furthermore, this study demonstrates that the multi-reanalysis comparison of the analysis increment will provide useful information for identifying model biases and, potentially, for estimating parameters that are difficult to estimate solely from observation data, such as gross moist stability.

  20. A SWOT Analysis of the Updated National HIV/AIDS Strategy for the U.S., 2015-2020.

    PubMed

    Holtgrave, David R; Greenwald, Robert

    2016-01-01

    In July 2015, President Barack Obama released an updated National HIV/AIDS Strategy (NHAS) for the United States to guide HIV efforts through the year 2020. A federal action plan to accompany the updated NHAS will be released in December 2015. In this editorial, we offer a strengths, weaknesses, opportunities and threats analysis with the aim of increasing discussion of ways to truly fulfill the promise of the updated NHAS and to address barriers that may thwart it from achieving its full potential.

  1. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms

    PubMed Central

    He, Li; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity; additionally, it analyzes the upper and lower bounds of different-to-same mis-affiliation, where fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity, where smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity; these contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm, and experiment results verified the theoretical conclusions. PMID:29123546
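
    A toy sketch of the accuracy/parallelism trade-off described above, assuming a simple centroid-based incremental scheme (not the authors' demo algorithm): a larger batch exposes more parallel work per update but assigns points against staler centroids, mimicking coarser evolving granularity.

      import numpy as np

      def incremental_assign(stream, centroids, batch):
          """Assign streaming points to the nearest centroid; centroids are only
          refreshed after every `batch` points, so `batch` plays the role of the
          evolving granularity discussed in the abstract."""
          counts = np.ones(len(centroids))
          buffer, labels = [], []
          for x in stream:
              j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
              labels.append(j)
              buffer.append((j, x))
              if len(buffer) == batch:          # coarser granularity: staler centroids
                  for j, x in buffer:
                      centroids[j] = (counts[j] * centroids[j] + x) / (counts[j] + 1)
                      counts[j] += 1
                  buffer.clear()
          return np.array(labels), centroids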

  2. Updating the Behavior Engineering Model.

    ERIC Educational Resources Information Center

    Chevalier, Roger

    2003-01-01

    Considers Thomas Gilbert's Behavior Engineering Model as a tool for systematically identifying barriers to individual and organizational performance. Includes a detailed case study and a performance aid that incorporates gap analysis, cause analysis, and force field analysis to update the original model. (Author/LRW)

  3. Predicting success of methotrexate treatment by pretreatment HCG level and 24-hour HCG increment.

    PubMed

    Levin, Gabriel; Saleh, Narjes A; Haj-Yahya, Rani; Matan, Liat S; Benshushan, Avi

    2018-04-01

    To evaluate β-human chorionic gonadotropin (β-HCG) level and its 24-hour increment as predictors of successful methotrexate treatment for ectopic pregnancy. Data were retrospectively reviewed from women with ectopic pregnancy who were treated by single-dose methotrexate (50 mg/m²) at a university hospital in Jerusalem, Israel, between January 1, 2000, and June 30, 2015. Serum β-HCG before treatment and its percentage increment in the 24 hours before treatment were compared between treatment success and failure groups. Sixty-nine women were included in the study. Single-dose methotrexate treatment was successful for 44 (63.8%) women. Both mean β-HCG level and its 24-hour increment were lower for women with successful treatment than for those with failed treatment (respectively, 1224 IU/L vs 2362 IU/L, P=0.018; and 13.5% vs 29.6%, P=0.009). Receiver operating characteristic curve analysis yielded cutoff values of 1600 IU/L and a 14% increment, with positive predictive values of 75% and 82%, respectively, for treatment success. β-HCG level and its 24-hour increment were independent predictors of treatment outcome by logistic regression (both P<0.01). A β-HCG increment of less than 14% in the 24 hours before single-dose methotrexate and a serum β-HCG of less than 1600 IU/L were found to be good predictors of treatment success. © 2017 International Federation of Gynecology and Obstetrics.
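
    A minimal sketch of how such cutoffs can be derived from a receiver operating characteristic curve, assuming scikit-learn and Youden's J as the selection rule (the abstract does not state the exact criterion used); `marker` and `success` are hypothetical arrays of the pretreatment β-HCG (or its 24-hour increment) and the treatment outcome.

      import numpy as np
      from sklearn.metrics import roc_curve

      def failure_cutoff(marker, success):
          """Cutoff on a marker (e.g. pretreatment beta-HCG or its 24-h increment)
          that best separates treatment failure from success by Youden's J."""
          fpr, tpr, thresholds = roc_curve(1 - np.asarray(success), marker)
          return thresholds[np.argmax(tpr - fpr)]

      # hypothetical usage: cutoff = failure_cutoff(hcg_level, treatment_success)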

  4. Predicting height increment of young-growth red fir in California and southern Oregon

    Treesearch

    K. Leroy Dolph

    1992-01-01

    An equation is given to estimate 10-year height increment for young-growth red fir trees in California and southern Oregon. The independent variables are the individual tree, stand, and site characteristics significantly related to a tree's height growth. Data used to develop the equation came from stem analysis of 492 trees sampled from 56 stands in the study...

  5. Survey of Attitudinal Acceptance of a Children's Incremental Dental Care Program by Parents, Teachers and School Administrators. Final Report.

    ERIC Educational Resources Information Center

    Guess, L. Lynn; And Others

    This report presents an analysis of the attitudes of parents, teachers, and school administrators to the Chattanooga Incremental Dental Care Program. This project provided dental care in the public elementary schools at specific intervals of time to specific age groups in order to establish and maintain a state of oral health. Dental services were…

  6. Investigating Forest Inventory and Analysis-collected tree-ring data from Utah as a proxy for historical climate

    Treesearch

    R. Justin DeRose; W. Shih-Yu (Simon) Wang; John D. Shaw

    2012-01-01

    Increment cores collected as part of the periodic inventory in the Intermountain West were examined for their potential to represent growth and be a proxy for climate (precipitation) over a large region (Utah). Standardized and crossdated time-series created from pinyon pine (n=249) and Douglas-fir (n=274) increment cores displayed spatiotemporal patterns in growth...

  7. Economic evaluation of second generation pneumococcal conjugate vaccines in Norway.

    PubMed

    Robberstad, Bjarne; Frostad, Carl R; Akselsen, Per E; Kværner, Kari J; Berstad, Aud K H

    2011-11-03

    A seven-valent pneumococcal conjugate vaccine (PCV7) was introduced in the Norwegian childhood immunization programme in 2006, and since then the incidence of invasive pneumococcal disease has declined substantially. Recently, two new second generation pneumococcal conjugate vaccines have become available, and an update of the economic evidence is needed. The aim of this study was to estimate incremental costs, health effects and cost-effectiveness of the pneumococcal conjugate vaccines PCV7, PCV13 and PHiD-CV in Norway. We used a Markov model to estimate costs and epidemiological burden of pneumococcal- and NTHi-related diseases (invasive pneumococcal disease (IPD), Community Acquired Pneumonia (CAP) and acute otitis media (AOM)) for a specific birth cohort. Using the most relevant evidence and assumptions for a Norwegian setting, we calculated incremental costs, health effects and cost-effectiveness for different vaccination strategies. In addition we performed sensitivity analyses for key parameters, tested key assumptions in scenario analyses and explored overall model uncertainty using probabilistic sensitivity analysis. The model predicts that both PCV13 and PHiD-CV provide more health gains at a lower cost than PCV7. Differences in health gains between the two second generation vaccines are small for invasive pneumococcal disease but larger for acute otitis media and myringotomy procedures. Consequently, PHiD-CV saves more disease treatment costs and indirect costs than PCV13. This study predicts that, compared to PCV13, PHiD-CV entails lower costs and greater benefits if the latter is measured in terms of quality-adjusted life years. PCV13 entails more life years gained than PHiD-CV, but those come at a cost of NOK 3.1 million (∼€0.4 million) per life year. The results indicate that PHiD-CV is cost-effective compared to PCV13 in the Norwegian setting. Copyright © 2011 Elsevier Ltd. All rights reserved.
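
    A minimal cohort-style Markov sketch of the kind of model described here, with entirely hypothetical states, transition probabilities, costs, and utilities; the published model's structure, cycle length, and inputs are more detailed.

      import numpy as np

      # Hypothetical 3-state cohort model (Well, Disease, Dead); yearly cycles.
      P = np.array([[0.97, 0.02, 0.01],      # transition probabilities per cycle
                    [0.00, 0.90, 0.10],
                    [0.00, 0.00, 1.00]])
      cost = np.array([0.0, 5000.0, 0.0])    # cost incurred per cycle in each state
      qaly = np.array([1.0, 0.7, 0.0])       # utility accrued per cycle in each state

      state = np.array([1.0, 0.0, 0.0])      # whole cohort starts in 'Well'
      total_cost = total_qaly = 0.0
      for t in range(60):                    # 60 one-year cycles
          disc = 1.03 ** -t                  # 3% annual discounting
          total_cost += disc * state @ cost
          total_qaly += disc * state @ qaly
          state = state @ P                  # advance the cohort one cycle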

  8. The threshold rate of oral atypical anti-psychotic adherence at which paliperidone palmitate is cost saving.

    PubMed

    Edwards, Natalie C; Muser, Erik; Doshi, Dilesh; Fastenau, John

    2012-01-01

    To identify, estimate, and compare 'real world' costs and outcomes associated with paliperidone palmitate compared with branded oral atypical anti-psychotics, and to estimate the threshold rate of oral atypical adherence at which paliperidone palmitate is cost saving. Decision analytic modeling techniques developed by Glazer and Ereshefsky have previously been used to estimate the cost-effectiveness of depot haloperidol, LAI risperidone, and, more recently, LAI olanzapine. This study used those same techniques, along with updated comparative published clinical data, to evaluate paliperidone palmitate. Adherence rates were based on strict Medication Event Monitoring System (MEMS) criteria. The evaluation was conducted from the perspective of US healthcare payers. Paliperidone palmitate patients had fewer mean annual days of relapse (8.7 days; 6.0 requiring hospitalization, 2.7 not requiring hospitalization vs 17.8 days; 12.4 requiring hospitalization, 5.4 not requiring hospitalization), and lower annual total cost ($20,995) compared to oral atypicals (mean $22,481). Because paliperidone palmitate was both more effective and less costly, it is considered economically dominant. Paliperidone palmitate saved costs when the rate of adherence of oral atypical anti-psychotics was below 44.9% using strict MEMS criteria. Sensitivity analyses showed results were robust to changes in parameter values. For patients receiving 156 mg paliperidone palmitate, the annual incremental cost was $1216 per patient (ICER = $191 per day of relapse averted). Inclusion of generic risperidone (market share 18.6%) also resulted in net incremental cost for paliperidone palmitate ($120; ICER = $13). Limitations of this evaluation include use of simplifying assumptions, data from multiple sources, and generalizability of results. Although uptake of LAIs in the US has not been as rapid as elsewhere, many thought leaders emphasize their importance in optimizing outcomes in patients with adherence problems. The findings of this analysis support the cost-effectiveness of paliperidone palmitate in these patients.

  9. Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.

    PubMed

    Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liu, Xiuping

    2017-10-06

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts to online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
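
    A minimal sketch of one way low-rank factors can be updated online when new columns arrive, using a standard append-columns SVD update; the paper's own online SVD routine may differ, and all names here are illustrative.

      import numpy as np

      def svd_append_columns(U, s, Vt, B, rank=None):
          """Update a thin SVD X ~ U @ diag(s) @ Vt when a new column block B
          arrives, so [X B] never has to be refactorized from scratch."""
          k = s.size
          C = U.T @ B                          # component of B inside the current subspace
          R = B - U @ C                        # residual outside the subspace
          Q, Rr = np.linalg.qr(R)              # orthonormal basis of the residual
          K = np.block([[np.diag(s), C],
                        [np.zeros((Q.shape[1], k)), Rr]])
          Uk, sk, Vtk = np.linalg.svd(K, full_matrices=False)
          U_new = np.hstack([U, Q]) @ Uk
          p = B.shape[1]
          Vt_pad = np.block([[Vt, np.zeros((k, p))],
                             [np.zeros((p, Vt.shape[1])), np.eye(p)]])
          Vt_new = Vtk @ Vt_pad
          if rank is not None:                 # optional truncation to the target rank
              U_new, sk, Vt_new = U_new[:, :rank], sk[:rank], Vt_new[:rank, :]
          return U_new, sk, Vt_new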

  10. Incremental and comparative health care expenditures for head and neck cancer in the United States.

    PubMed

    Dwojak, Sunshine M; Bhattacharyya, Neil

    2014-10-01

    Determine the incremental costs associated with head and neck cancer (HNCa) and compare the costs with other common cancers. Cross-sectional analysis of a healthcare expenditure database. The Medical Expenditure Panel Survey is a national survey of US households. All cases of HNCa were extracted for 2006, 2008, and 2010. The incremental expenditures associated with HNCa were determined by comparing the healthcare expenditures of individuals with HNCa to the population without cancer, controlling for age, sex, education, insurance status, marital status, geographic region, and comorbidities. Healthcare expenditures for HNCa were then compared to individuals with lung cancer and colon cancer to determine relative healthcare expenditures. An estimated 264,713 patients (annualized) with HNCa were identified. The mean annual healthcare expenditures per individual for HNCa were $23,408 ± $3,397 versus $3,860 ± $52 for those without cancer. The mean adjusted incremental cost associated with HNCa was $15,852 ± $3,297 per individual (P < .001). Within this incremental cost, there was an increased incremental outpatient services cost of $3,495 ± $1,044 (P = .001) and an increased incremental hospital inpatient cost of $6,783 ± $2,894 (P = .020) associated with HNCa. The annual healthcare expenditures per individual fell in between those for lung cancer ($25,267 ± $2,375, P = .607) and colon cancer ($16,975 ± $1,291, P = .055). Despite its lower relative incidence, HNCa is associated with a significant incremental increase in annual healthcare expenditures per individual, which is comparable to or higher than other common cancers. In aggregate, the estimated annual costs associated with HNCa are $4.20 billion. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Herpes zoster vaccine: A health economic evaluation for Switzerland.

    PubMed

    Blank, Patricia R; Ademi, Zanfina; Lu, Xiaoyan; Szucs, Thomas D; Schwenkglenks, Matthias

    2017-07-03

    Herpes zoster (HZ) or "shingles" results from a reactivation of the varicella zoster virus (VZV) acquired during primary infection (chickenpox) and surviving in the dorsal root ganglia. In about 20% of cases, a complication occurs, known as post-herpetic neuralgia (PHN). A live attenuated vaccine against VZV is available for the prevention of HZ and subsequent PHN. The present study aims to update an earlier evaluation estimating the cost-effectiveness of the HZ vaccine from a Swiss third party payer perspective. It takes into account updated vaccine prices, a different age cohort, latest clinical data and burden of illness data. A Markov model was developed to simulate the lifetime consequences of vaccinating 15% of the Swiss population aged 65-79 y. Information from sentinel data, official statistics and published literature were used. Endpoints assessed were number of HZ and PHN cases, quality-adjusted life years (QALYs), costs of hospitalizations, consultations and prescriptions. Based on a vaccine price of CHF 162, the vaccination strategy accrued additional costs of CHF 17,720,087 and gained 594 QALYs. The incremental cost-effectiveness ratio (ICER) was CHF 29,814 per QALY gained. Sensitivity analyses showed that the results were most sensitive to epidemiological inputs, utility values, discount rates, duration of vaccine efficacy, and vaccine price. Probabilistic sensitivity analyses indicated a more than 99% chance that the ICER was below 40,000 CHF per QALY. Findings were in line with existing cost-effectiveness analyses of HZ vaccination. This updated study supports the value of an HZ vaccination strategy targeting the Swiss population aged 65-79 y.

  13. Breast Cancer and Estrogen-Alone Update

    MedlinePlus

    Past Issues / Summer 2006: … hormone therapy does not increase the risk of breast cancer in postmenopausal women, according to an updated analysis …

  14. Cost-effectiveness analysis of entecavir versus lamivudine in the first-line treatment of Australian patients with chronic hepatitis B.

    PubMed

    Arnold, Elizabeth; Yuan, Yong; Iloeje, Uchenna; Cook, Greg

    2008-01-01

    Chronic hepatitis B (CHB) virus infection is a major global healthcare problem. The recent introduction of entecavir in Australia for the treatment of CHB patients in the naive treatment setting has triggered significant optimism with regards to improved clinical outcomes for CHB patients. To estimate, from an Australian healthcare perspective, the cost effectiveness of entecavir 0.5 mg/day versus lamivudine 100 mg/day in the treatment of CHB patients naive to nucleos(t)ide therapy. A cost-utility analysis to project the clinical and economic outcomes associated with CHB disease and treatment was conducted by developing two decision-tree models specific to hepatitis B e antigen-positive (HBeAg+ve) and HBeAg-ve CHB patient subsets. This analysis was constructed using the Australian payer perspective of direct costs and outcomes, with indirect medical costs and lost productivity not being included. The study population comprised a hypothetical cohort of 1000 antiviral treatment-naive CHB patients who received either entecavir 0.5 mg/day or lamivudine 100 mg/day at model entry. The population of patients used in this analysis was representative of those patients likely to receive initial antiviral therapy in clinical practice in Australia. The long-term cost effectiveness of entecavir compared with lamivudine in the first-line treatment of CHB patients was expressed as an incremental cost per life-year gained (LYG) or QALY gained. Results revealed that the availability of entecavir 0.5 mg/day as part of the Australian hepatologist's treatment armamentarium should result in significantly lower future rates of compensated cirrhosis (CC), decompensated cirrhosis (DC), and hepatocellular carcinoma (HCC) events (i.e. 54 fewer cases of CC, seven fewer cases of DC, and 20 fewer cases of HCC over the model's timeframe for HBeAg+ve CHB patients, and 69 fewer cases of CC, eight fewer cases of DC and 25 fewer cases of HCC over the model's timeframe for HBeAg-ve CHB patients). Compared with lamivudine 100 mg/day, entecavir 0.5 mg/day generated an estimated incremental cost per LYG of Australian dollars ($A, year 2006 values) 5046 and an estimated incremental cost per QALY of $A5952 in the HBeAg+ve CHB patient population, an estimated incremental cost per LYG of $A7063 and an estimated incremental cost per QALY of $A8003 in the HBeAg-ve CHB patient population, and an overall estimated incremental cost per LYG of $A5853 and an estimated incremental cost per QALY of $A6772 in the general CHB population. The availability of entecavir in Australian clinical practice should make long-term suppression of hepatitis B virus replication increasingly attainable, resulting in fewer CHB sequelae, at an acceptable financial cost.

  15. Issues and prospects for the next generation of the spatial data transfer standard (SDTS)

    USGS Publications Warehouse

    Arctur, D.; Hair, D.; Timson, G.; Martin, E.P.; Fegeas, R.

    1998-01-01

    The Spatial Data Transfer Standard (SDTS) was designed to be capable of representing virtually any data model, rather than being a prescription for a single data model. It has fallen short of this ambitious goal for a number of reasons, which this paper investigates. In addition to issues that might have been anticipated in its design, a number of new issues have arisen since its initial development. These include the need to support explicit feature definitions, incremental update, value-added extensions, and change tracking within large, national databases. It is time to consider the next stage of evolution for SDTS. This paper suggests development of an Object Profile for SDTS that would integrate concepts for a dynamic schema structure, OpenGIS interface, and CORBA IDL.

  16. A new gated x-ray detector for the Orion laser facility

    NASA Astrophysics Data System (ADS)

    Clark, David D.; Aragonez, Robert; Archuleta, Thomas; Fatherley, Valerie; Hsu, Albert; Jorgenson, Justin; Mares, Danielle; Oertel, John; Oades, Kevin; Kemshall, Paul; Thomas, Phillip; Young, Trevor; Pederson, Neal

    2012-10-01

    Gated X-Ray Detectors (GXD) are considered the work-horse target diagnostic of the laser based inertial confinement fusion (ICF) program. Recently, Los Alamos National Laboratory (LANL) has constructed three new GXDs for the Orion laser facility at the Atomic Weapons Establishment (AWE) in the United Kingdom. What sets these three new instruments apart from what has previously been constructed for the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) is: improvements in detector head microwave transmission lines, solid state embedded hard drive and updated control software, and lighter air box design and other incremental mechanical improvements. In this paper we will present the latest GXD design enhancements and sample calibration data taken on the Trident laser facility at Los Alamos National Laboratory using the newly constructed instruments.

  17. Validity of the alcohol purchase task: a meta-analysis.

    PubMed

    Kiselica, Andrew M; Webber, Troy A; Bornovalova, Marina A

    2016-05-01

    Behavioral economists assess alcohol consumption as a function of unit price. This method allows construction of demand curves and demand indices, which are thought to provide precise numerical estimates of risk for alcohol problems. One of the more commonly used behavioral economic measures is the Alcohol Purchase Task (APT). Although the APT has shown promise as a measure of risk for alcohol problems, the construct validity and incremental utility of the APT remain unclear. This paper presents a meta-analysis of the APT literature. Sixteen studies were included in the meta-analysis. Studies were gathered via searches of the PsycInfo, PubMed, Web of Science and EconLit research databases. Random-effects meta-analyses with inverse variance weighting were used to calculate summary effect sizes for each demand index-drinking outcome relationship. Moderation of these effects by drinking status (regular versus heavy drinkers) was examined. Additionally, tests of the incremental utility of the APT indices in predicting drinking problems above and beyond measuring alcohol consumption were performed. The APT indices were correlated in the expected directions with drinking outcomes, although many effects were small in size. These effects were typically not moderated by the drinking status of the samples. Additionally, the intensity metric demonstrated incremental utility in predicting alcohol use disorder symptoms beyond measuring drinking. The Alcohol Purchase Task appears to have good construct validity, but limited incremental utility in estimating risk for alcohol problems. © 2015 Society for the Study of Addiction.
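
    A minimal sketch of a random-effects pooling step with inverse-variance weights, assuming the common DerSimonian-Laird estimator of between-study variance (the abstract does not specify which estimator was used); at least two studies are assumed.

      import numpy as np

      def dersimonian_laird(effects, variances):
          """Random-effects pooled estimate with inverse-variance weights and
          the DerSimonian-Laird estimate of between-study variance tau^2."""
          y = np.asarray(effects, dtype=float)
          v = np.asarray(variances, dtype=float)
          w = 1.0 / v
          mu_fixed = np.sum(w * y) / np.sum(w)           # fixed-effect pooled mean
          Q = np.sum(w * (y - mu_fixed) ** 2)            # heterogeneity statistic
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-study variance
          w_re = 1.0 / (v + tau2)                        # random-effects weights
          mu = np.sum(w_re * y) / np.sum(w_re)
          se = np.sqrt(1.0 / np.sum(w_re))
          return mu, se, tau2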

  18. [Medical and economic evaluation of donated blood screening for hepatitis C and non-A, non-B, non-C hepatitis].

    PubMed

    Vergnon, P; Colin, C; Jullien, A M; Bory, E; Excoffier, S; Matillon, Y; Trepo, C

    1996-01-01

    The aim of this study was to evaluate the cost of the hepatitis C and non-A non-B non-C screening strategy in donated blood currently used in French transfusion centres, and to assess its effect in the blood transfusion centres according to the prevalence of the disease and the intrinsic values of the tests. This screening strategy was based on alanine aminotransferase assay and HBc and HCV antibody detection. In 1993, a survey was conducted in 26 French transfusion centres to estimate the costs of the screening strategy currently used. Average expenditure on diagnostic sets, equipment, staff and administration charges for hepatitis C and non-A non-B non-C screening were calculated. From these results, we estimated the cost of the previous strategy, which did not involve HCV antibody testing, so as to determine the incremental cost between the two strategies. We used clinical decision analysis and sensitivity analysis to estimate the incremental cost-effectiveness ratio with data gathered from the literature and to examine the impact on blood transfusion centres. Implemented for 100,000 volunteer blood donations, the incremental cost of the new strategy was FF 2,566,111 (1992) and the marginal effectiveness was 180 additional infected donations detected. The sensitivity analysis showed the major influence of infection prevalence in donated blood on the incremental cost-effectiveness ratio: the lower the prevalence, the higher the cost-effectiveness ratio per contaminated blood product avoided.

  19. Improving Reanalyses Using TRMM and SSM/I-Derived Precipitation and Total Precipitable Water Observations

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.

    1999-01-01

    Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of rainfall and total precipitable water (TPW) observations from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve these fields in reanalyses. The DAO has developed a "1+1"D procedure to assimilate 6-hr averaged rainfall and TPW into the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS. The "1+1" designation refers to one spatial dimension plus one temporal dimension. The scheme minimizes the least-square differences between the satellite-retrieved rain rates and those produced by the column model over the 6-hr analysis window. The control variables are analysis increments of moisture within the Incremental Analysis Update (IAU) framework of the GEOS DAS. This 1+1D scheme, in its generalization to four dimensions, is related to standard 4D variational assimilation but differs in its choice of the control variable. Instead of estimating the initial condition at the beginning of the assimilation cycle, it estimates the constant IAU forcing applied over a 6-hr assimilation cycle. In doing so, it imposes the forecast model as a weak constraint in a manner similar to the variational continuous assimilation techniques. We present results from an experiment in which the observed rain rate and TPW are assumed to be "perfect". They show that assimilating the TMI and SSM/I-derived surface precipitation and TPW observations improves not only the precipitation and moisture fields but also key climate parameters directly linked to convective activity such as clouds, the outgoing longwave radiation, and the large-scale circulation in the tropics. In particular, assimilating these data types reduces the state-dependent systematic errors in the assimilated products. The improved analysis also leads to a better short-range forecast, but the impact is modest compared with improvements in the time-averaged fields. These results suggest that, in the presence of biases and other errors of the forecast model, it is possible to improve the time-averaged "climate content" in the assimilated data without comparable improvements in the short-range forecast skill. Results of this experiment provide a useful benchmark for evaluating error covariance models for optimal use of these data types.
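
    The IAU idea referred to here can be sketched in a few lines: instead of inserting an analysis increment all at once, the increment is converted into a constant forcing spread over the assimilation window. The model operator and the even temporal weighting below are schematic assumptions, not the GEOS DAS implementation.

      import numpy as np

      def integrate_with_iau(model_step, x_b, increment, n_steps):
          """Integrate the model while adding the analysis increment as a constant
          forcing distributed evenly over the assimilation window (classic IAU)."""
          x = x_b.copy()                      # background state at window start
          forcing = increment / n_steps       # constant IAU tendency per model step
          for _ in range(n_steps):
              x = model_step(x) + forcing     # model tendency plus increment forcing
          return x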

  20. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind-tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small and larger scale components of the increment fields' probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the fitted distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdfs. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This novel approach provides a method of characterizing increment fields with only four pdf parameters. Also, we investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
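
    A minimal sketch of fitting a normal inverse Gaussian density to a sample of increments by maximum likelihood with SciPy; the synthetic heavy-tailed sample below stands in for measured Δu, and the Kolmogorov-Smirnov check is only one possible goodness-of-fit metric.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      du = rng.standard_t(df=5, size=5000)            # stand-in for measured increments

      a, b, loc, scale = stats.norminvgauss.fit(du)   # MLE of the four NIG parameters
      ks = stats.kstest(du, 'norminvgauss', args=(a, b, loc, scale))
      print(a, b, loc, scale, ks.statistic)           # parameters plus a GoF statistic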

  1. The Space Station decision - Incremental politics and technological choice

    NASA Technical Reports Server (NTRS)

    Mccurdy, Howard E.

    1990-01-01

    Using primary documents and interviews with participants, this book describes the events that led up to the 1984 decision that NASA should build a permanently occupied, international space station in low earth orbit. The role that civil servants in NASA played in initiating the program is highlighted, and the book follows the Space Station proposal as its advocates devised strategies to push it through the White House policy review process. The critical analysis focuses on the way in which 'incrementalism' (the tendency of policy makers to introduce incremental changes once projects are under way) operated in connection with the Space Station program. The book calls for a commitment to a long-range space policy.

  2. File Specification for GEOS-5 FP-IT (Forward Processing for Instrument Teams)

    NASA Technical Reports Server (NTRS)

    Lucchesi, R.

    2013-01-01

    The GEOS-5 FP-IT Atmospheric Data Assimilation System (GEOS-5 ADAS) uses an analysis developed jointly with NOAA's National Centers for Environmental Prediction (NCEP), which allows the Global Modeling and Assimilation Office (GMAO) to take advantage of the developments at NCEP and the Joint Center for Satellite Data Assimilation (JCSDA). The GEOS-5 AGCM uses the finite-volume dynamics (Lin, 2004) integrated with various physics packages (e.g., Bacmeister et al., 2006), under the Earth System Modeling Framework (ESMF), including the Catchment Land Surface Model (CLSM) (e.g., Koster et al., 2000). The GSI analysis is a three-dimensional variational (3DVar) analysis applied in grid-point space to facilitate the implementation of anisotropic, inhomogeneous covariances (e.g., Wu et al., 2002; Derber et al., 2003). The GSI implementation for GEOS-5 FP-IT incorporates a set of recursive filters that produce approximately Gaussian smoothing kernels and isotropic correlation functions. The GEOS-5 ADAS is documented in Rienecker et al. (2008). More recent updates to the model are presented in Molod et al. (2011). The GEOS-5 system actively assimilates roughly 2 × 10^6 observations for each analysis, including about 7.5 × 10^5 AIRS radiance data. The input stream is roughly twice this volume, but because of the large volume, the data are thinned commensurate with the analysis grid to reduce the computational burden. Data are also rejected from the analysis through quality control procedures designed to detect, for example, the presence of cloud. To minimize the spurious periodic perturbations of the analysis, GEOS-5 FP-IT uses the Incremental Analysis Update (IAU) technique developed by Bloom et al. (1996). More details of this procedure are given in Appendix A. The analysis is performed at a horizontal resolution of 0.625-degree longitude by 0.5-degree latitude and at 72 levels, extending to 0.01 hPa. All products are generated at the native resolution of the horizontal grid. The majority of data products are time-averaged, but four instantaneous products are also available. Hourly data intervals are used for two-dimensional products, while 3-hourly intervals are used for three-dimensional products. These may be on the model's native 72-layer vertical grid or at 42 pressure surfaces extending to 0.1 hPa. This document describes the gridded output files produced by the GMAO near real-time operational GEOS-5 FP-IT processing in support of the EOS instrument teams. Additional details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary.

  3. Projecting the impact of a nationwide school plain water access intervention on childhood obesity: a cost-benefit analysis.

    PubMed

    An, R; Xue, H; Wang, L; Wang, Y

    2017-09-22

    This study aimed to project the societal cost and benefit of an expansion of a water access intervention that promotes lunchtime plain water consumption by placing water dispensers in New York school cafeterias to all schools nationwide. A decision model was constructed to simulate two events under Markov chain processes - placing water dispensers at lunchtimes in school cafeterias nationwide vs. no action. The incremental cost pertained to water dispenser purchase and maintenance, whereas the incremental benefit resulted from cases of childhood overweight/obesity prevented and the corresponding lifetime direct (medical) and indirect costs saved. Based on the decision model, the estimated incremental cost of the school-based water access intervention is $18 per student, and the corresponding incremental benefit is $192, resulting in a net benefit of $174 per student. Subgroup analysis estimates the net benefit per student to be $199 and $149 among boys and girls, respectively. Nationwide adoption of the intervention would prevent 0.57 million cases of childhood overweight, resulting in a lifetime cost saving totalling $13.1 billion. The estimated total cost saved per dollar spent was $14.5. The New York school-based water access intervention, if adopted nationwide, may have a considerably favourable benefit-cost portfolio. © 2017 World Obesity Federation.

  4. Stable Isotope Analysis of Extant Lamnoid Shark Centra: A New Tool in Age Determination?

    NASA Astrophysics Data System (ADS)

    Labs, J.

    2003-12-01

    The oxygen isotopes of fourteen vertebral centra from ten extant lamnoid sharks (including Carcharodon carcharias [great white], Isurus paucus [longfin mako], and Isurus oxyrinchus [shortfin mako]) were sampled and measured along the growth axis to determine the periodicity of incremental growth represented in the centra. As part of the internal (endochondral) skeleton, shark centra are composed initially of hyaline cartilage, which then secondarily ossifies during ontogeny, forming calcified hydroxyapatite bone. The incremental growth of shark centra forms definite growth rings, with darker, denser portions being deposited during slower growth times (i.e., winter) and lighter portions being deposited during more rapid growth (i.e., summer). Thus, shark centra, whether they are extant or extinct, are characterized by clearly delineated incremental growth couplets. The problem with this general rule is that the growth of these couplets can vary with several factors, including physical environment (temperature and water depth), food availability, and stress. The challenge for paleobiological interpretations is how to interpret the periodicity of this growth. It can generally be assumed that these bands are annual, but the extent to which exceptions to the rule occur is uncertain. Stable isotopic analysis provides the potential to independently determine the periodicity of the growth increments and ultimately the ontogenetic age of an individual.

  5. Using Hand Grip Force as a Correlate of Longitudinal Acceleration Comfort for Rapid Transit Trains

    PubMed Central

    Guo, Beiyuan; Gan, Weide; Fang, Weining

    2015-01-01

    Longitudinal acceleration comfort is one of the essential metrics used to evaluate the ride comfort of a train. The aim of this study was to investigate the effectiveness of using hand grip force as a correlate of the longitudinal acceleration comfort of rapid transit trains. In this paper, a motion simulation system was set up and a two-stage experiment was designed to investigate the role of grip force in the longitudinal comfort of rapid transit trains. The results of the experiment show that the incremental grip force was linearly correlated with the longitudinal acceleration value, while the incremental grip force had no correlation with the direction of the longitudinal acceleration vector. The results also show that the effects of incremental grip force and acceleration duration on the longitudinal comfort of rapid transit trains were significant. Based on multiple regression analysis, a step function model was established to predict the longitudinal comfort of rapid transit trains using the incremental grip force and the acceleration duration. The feasibility and practicability of the model were verified by a field test. Furthermore, a comparative analysis shows that the motion simulation system and the grip force based model were valid to support laboratory studies on the longitudinal comfort of rapid transit trains. PMID:26147730

  6. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods, however, appear to be needed for future 3D applications because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
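
    A minimal sketch of the incremental ('delta' or 'correction') form for an iterative linear solve, assuming SciPy and an approximate factored operator M (for instance a lower-order or frozen Jacobian); the actual sensitivity equations and preconditioning in the paper are more involved.

      import numpy as np
      from scipy.sparse import csc_matrix
      from scipy.sparse.linalg import splu

      def solve_delta_form(A, b, M, x0, tol=1e-10, max_iter=50):
          """Each pass solves M * dx = r for a correction dx from the current
          residual r = b - A x, then updates x; only the residual uses the exact
          operator A, while the (cheaper) approximate operator M is factored once."""
          lu = splu(csc_matrix(M))
          x = np.array(x0, dtype=float)
          for _ in range(max_iter):
              r = b - A @ x                       # residual of the current iterate
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  break
              x = x + lu.solve(r)                 # correction form: update, don't overwrite
          return x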

  7. Effect of respiratory muscle training on exercise performance in healthy individuals: a systematic review and meta-analysis.

    PubMed

    Illi, Sabine K; Held, Ulrike; Frank, Irène; Spengler, Christina M

    2012-08-01

    Two distinct types of specific respiratory muscle training (RMT), i.e. respiratory muscle strength (resistive/threshold) and endurance (hyperpnoea) training, have been established to improve the endurance performance of healthy individuals. We performed a systematic review and meta-analysis in order to determine the factors that affect the change in endurance performance after RMT in healthy subjects. A computerized search was performed without language restriction in MEDLINE, EMBASE and CINAHL and references of original studies and reviews were searched for further relevant studies. RMT studies with healthy individuals assessing changes in endurance exercise performance by maximal tests (constant load, time trial, intermittent incremental, conventional [non-intermittent] incremental) were screened and abstracted by two independent investigators. A multiple linear regression model was used to identify effects of subjects' fitness, type of RMT (inspiratory or combined inspiratory/expiratory muscle strength training, respiratory muscle endurance training), type of exercise test, test duration and type of sport (rowing, running, swimming, cycling) on changes in performance after RMT. In addition, a meta-analysis was performed to determine the effect of RMT on endurance performance in those studies providing the necessary data. The multiple linear regression analysis including 46 original studies revealed that less fit subjects benefit more from RMT than highly trained athletes (6.0% per 10 mL · kg⁻¹ · min⁻¹ decrease in maximal oxygen uptake, 95% confidence interval [CI] 1.8, 10.2%; p = 0.005) and that improvements do not differ significantly between inspiratory muscle strength and respiratory muscle endurance training (p = 0.208), while combined inspiratory and expiratory muscle strength training seems to be superior in improving performance, although based on only 6 studies (+12.8% compared with inspiratory muscle strength training, 95% CI 3.6, 22.0%; p = 0.006). Furthermore, constant load tests (+16%, 95% CI 10.2, 22.9%) and intermittent incremental tests (+18.5%, 95% CI 10.8, 26.3%) detect changes in endurance performance better than conventional incremental tests (both p < 0.001) with no difference between time trials and conventional incremental tests (p = 0.286). With increasing test duration, improvements in performance are greater (+0.4% per minute test duration, 95% CI 0.1, 0.6%; p = 0.011) and the type of sport does not influence the magnitude of improvements (all p > 0.05). The meta-analysis, performed on eight controlled trials revealed a significant improvement in performance after RMT, which was detected by constant load tests, time trials and intermittent incremental tests, but not by conventional incremental tests. RMT improves endurance exercise performance in healthy individuals with greater improvements in less fit individuals and in sports of longer durations. The two most common types of RMT (inspiratory muscle strength and respiratory muscle endurance training) do not differ significantly in their effect, while combined inspiratory/expiratory strength training might be superior. Improvements are similar between different types of sports. Changes in performance can be detected by constant load tests, time trials and intermittent incremental tests only. Thus, all types of RMT can be used to improve exercise performance in healthy subjects but care must be taken regarding the test used to investigate the improvements.

  8. Sociobehavioral Factors Associated with Caries Increment: A Longitudinal Study from 24 to 36 Months Old Children in Thailand

    PubMed Central

    Peltzer, Karl; Mongkolchati, Aroonsri; Satchaiyan, Gamon; Rajchagool, Sunsanee; Pimpak, Taksin

    2014-01-01

    The aim of this study is to investigate sociobehavioral risk factors from the prenatal period until 36 months of age, and the caries increment from 24 to 36 months of age, in children in Thailand. The data utilized in this study come from the prospective cohort study of Thai children (PCTC), which followed children from the prenatal period to 36 months of age in Mueang Nan district, Northern Thailand. The total sample size recruited was 783 infants. The sample size with dental caries data was 603 and 597 at 24 months and at 36 months, respectively. The sample size with dental examinations at both assessment points (at 24 months and at 36 months) was 597. Results indicate that the caries increment was 52.9%, meaning that of the 365 caries-free children at 24 months, 193 had developed dental caries by 36 months. The prevalence of dental caries was 34.2% at 24 months (n = 206) and 68.5% at 36 months of age (n = 409). In bivariate analysis, higher education of the mother, lower household income, bottle feeding of the infant, frequent sweet candy consumption, and using rain or well water as drinking water were associated with dental caries increment, while in multivariate conditional logistic regression analysis lower household income, higher education of the mother, and using rain or well water as drinking water remained associated with dental caries increment. In conclusion, a very significant increase in caries development was observed, and oral health may be influenced by sociobehavioural risk factors. PMID:25329535

  9. Biomarker of Long-Chain n-3 Fatty Acid Intake and Breast Cancer: Accumulative Evidence from an Updated Meta-Analysis of Epidemiological Studies.

    PubMed

    Yang, Bo; Ren, Xiao L; Wang, Zhi Y; Wang, Liang; Zhao, Feng; Guo, Xiao J; Li, Duo

    2018-06-14

    We aimed to summarize the up-to-date epidemiological evidence on biomarkers of long-chain (LC) n-3 fatty acid (FA) intake in relation to breast cancer (BC). Epidemiological studies determining FA levels in biospecimens (circulating blood or adipose tissue (AT)) were identified from the PubMed, EMBASE, and Cochrane Library databases until March 2018. Multivariate-adjusted risk ratios (RRs) with 95% confidence intervals (CIs) were pooled using a random-effects model. Differences in biospecimen proportions of LC n-3 FA between BC cases and non-cases were analyzed as standardized mean differences (SMDs). Thirteen cohort and eleven case-control studies were eligible for the present meta-analysis. The estimated SMD was -0.14 (95% CI: -0.27, -0.11) for LC n-3 FA and -0.27 (95% CI: -0.42, -0.11) for the LC n-3/n-6 FA ratio. When comparing the top tertiles with the bottom baseline levels, circulating LC n-3 FA was significantly associated with a lower risk of BC (RR: 0.84, 95% CI: 0.74, 0.96), but AT LC n-3 FA was not (RR: 1.02, 95% CI: 0.70, 1.48). Significant inverse dose-response associations were observed for each 1% increment of circulating 20:5n-3 and 22:6n-3. This meta-analysis highlights that circulating LC n-3 FA, as a biomarker of intake, may be an independent predictive factor for BC, especially 20:5n-3 and 22:6n-3.

  10. A Numerical Process Control Method for Circular-Tube Hydroforming Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Kenneth I.; Nguyen, Ba Nghiep; Davies, Richard W.

    2004-03-01

    This paper describes the development of a solution control method that tracks the stresses, strains and mechanical behavior of a tube during hydroforming to estimate the proper axial feed (end-feed) and internal pressure loads through time. The analysis uses the deformation theory of plasticity and Hill's criterion to describe the plastic flow. Before yielding, the pressure and end-feed increments are estimated based on the initial tube geometry, elastic properties and yield stress. After yielding, the pressure increment is calculated based on the tube geometry at the previous solution increment and the current hoop stress increment. The end-feed increment is computed from the increment of the axial plastic strain. Limiting conditions such as column buckling (of long tubes), local axi-symmetric wrinkling of shorter tubes, and bursting due to localized wall thinning are considered. The process control method has been implemented in the Marc finite element code. Hydroforming simulations using this process control method were conducted to predict the load histories for controlled expansion of 6061-T4 aluminum tubes within a conical die shape and under free hydroforming conditions. The predicted loading paths were transferred to the hydroforming equipment to form the conical and free-formed tube shapes. The model predictions and experimental results are compared for deformed shape, strains and the extent of forming at rupture.

  11. Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer

    PubMed Central

    Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.

    2015-01-01

    Purpose Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
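
    The headline figure here is simply an incremental cost divided by an incremental QALY gain; a minimal sketch with hypothetical round numbers chosen in the spirit of the abstract:

      def icer(cost_new, cost_ref, qaly_new, qaly_ref):
          """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
          return (cost_new - cost_ref) / (qaly_new - qaly_ref)

      # hypothetical inputs: an extra $40,000 buys roughly 0.044 extra QALYs
      print(icer(60_000, 20_000, 0.514, 0.470))      # ~ $900,000 per QALY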

  12. Economic analysis: randomized placebo-controlled clinical trial of erlotinib in advanced non-small cell lung cancer.

    PubMed

    Bradbury, Penelope A; Tu, Dongsheng; Seymour, Lesley; Isogai, Pierre K; Zhu, Liting; Ng, Raymond; Mittmann, Nicole; Tsao, Ming-Sound; Evans, William K; Shepherd, Frances A; Leighl, Natasha B

    2010-03-03

    The NCIC Clinical Trials Group conducted the BR.21 trial, a randomized placebo-controlled trial of erlotinib (an epidermal growth factor receptor tyrosine kinase inhibitor) in patients with previously treated advanced non-small cell lung cancer. This trial accrued patients between August 14, 2001, and January 31, 2003, and found that overall survival and quality of life were improved in the erlotinib arm compared with the placebo arm. However, funding restrictions limit access to erlotinib in many countries. We undertook an economic analysis of erlotinib treatment in this trial and explored different molecular and clinical predictors of outcome to determine the cost-effectiveness of treating various populations with erlotinib. Resource utilization was determined from individual patient data in the BR.21 trial database. The trial recruited 731 patients (488 in the erlotinib arm and 243 in the placebo arm). Costs arising from erlotinib treatment, diagnostic tests, outpatient visits, acute hospitalization, adverse events, lung cancer-related concomitant medications, transfusions, and radiation therapy were captured. The incremental cost-effectiveness ratio was calculated as the ratio of incremental cost (in 2007 Canadian dollars) to incremental effectiveness (life-years gained). In exploratory analyses, we evaluated the benefits of treatment in selected subgroups to determine the impact on the incremental cost-effectiveness ratio. The incremental cost-effectiveness ratio for erlotinib treatment in the BR.21 trial population was $94,638 per life-year gained (95% confidence interval = $52,359 to $429,148). The major drivers of cost-effectiveness included the magnitude of survival benefit and erlotinib cost. Subgroup analyses revealed that erlotinib may be more cost-effective in never-smokers or patients with high EGFR gene copy number. With an incremental cost-effectiveness ratio of $94,638 per life-year gained, erlotinib treatment for patients with previously treated advanced non-small cell lung cancer is marginally cost-effective. The use of molecular predictors of benefit for targeted agents may help identify more or less cost-effective subgroups for treatment.

  13. Comparative analysis of age dynamics of average values of body dimensions in children from birth to 7 years.

    PubMed

    Deryabin, Vasily E; Krans, Valentina M; Fedotova, Tatiana K

    2005-07-01

    Mean values of different body dimensions in different age cohorts of children make it possible to learn a lot about their dynamic changes. Their comparative analysis, as is usually practiced, in fact leads to a simple description of changes in measurement units (mm or cm) at the average level of some body dimension during a shorter or longer period of time. To estimate comparative intensity of the growth process of different body dimensions, the authors use the analogue of Mahalanobis distance, the so-called Kullback divergence (1967), which does not demand stability of dispersion or correlation coefficients of dimensions in compared cohorts of children. Most of the dimensions, excluding skinfolds, demonstrate growth dynamics with gradually reducing increments from birth to 7 years. Body length has the highest integrative increment, leg length about 94% of body length, body mass 77%, and trunk and extremities circumferences 56%. Skinfolds have a non-monotonic pattern of accumulated standardized increments with some increase until 1-2 years of age.

  14. Tolerance of image enhancement brightness and contrast in lateral cephalometric digital radiography for Steiner analysis

    NASA Astrophysics Data System (ADS)

    Rianti, R. A.; Priaminiarti, M.; Syahraini, S. I.

    2017-08-01

    Image enhancement brightness and contrast can be adjusted on lateral cephalometric digital radiographs to improve image quality and the visibility of anatomic landmarks for measurement by Steiner analysis. The aim of this study was to determine the limit value for adjustments of image enhancement brightness and contrast in lateral cephalometric digital radiography for Steiner analysis. Image enhancement brightness and contrast were adjusted on 100 lateral cephalometric radiographs in 10-point increments (-30, -20, -10, 0, +10, +20, +30). Steiner analysis measurements were then performed by two observers. Reliability was tested by the Intraclass Correlation Coefficient (ICC), and significance was tested by ANOVA or the Kruskal-Wallis test. No significant differences were detected in lateral cephalometric analysis measurements following adjustment of the image enhancement brightness and contrast. Adjustments of image enhancement brightness and contrast within the 10-point increments tested (-30, -20, -10, 0, +10, +20, +30) do not affect the results of Steiner analysis.

  15. Sensitivity of finite helical axis parameters to temporally varying realistic motion utilizing an idealized knee model.

    PubMed

    Johnson, T S; Andriacchi, T P; Erdman, A G

    2004-01-01

    Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
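
    For readers unfamiliar with the finite helical (screw) axis, the sketch below extracts the rotation angle, axis direction, and translation along the axis from a relative rotation matrix R and translation vector t using the standard axis-angle decomposition; it is a generic illustration under that assumption, not the specific computation or point cluster processing used in the study.

    ```python
    import numpy as np

    # Generic finite helical (screw) axis extraction from a relative pose (R, t).
    # Assumes a non-degenerate rotation (angle not near 0 or 180 degrees).
    def helical_axis(R, t):
        angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        axis = axis / (2.0 * np.sin(angle))      # unit direction of the screw axis
        d_along = float(axis @ t)                # displacement along the axis
        return angle, axis, d_along

    theta = np.deg2rad(10.0)                     # toy example: 10-degree rotation about z
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([0.0, 0.0, 2.0])
    print(helical_axis(R, t))
    ```

    The division by sin(angle) makes the axis direction and along-axis displacement ill-conditioned for small rotation increments, which is consistent with the time step sensitivity reported above.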

  16. Cost-Utility Analysis of Cochlear Implantation in Australian Adults.

    PubMed

    Foteff, Chris; Kennedy, Steven; Milton, Abul Hasnat; Deger, Melike; Payk, Florian; Sanderson, Georgina

    2016-06-01

    Sequential and simultaneous bilateral cochlear implants are emerging as appropriate treatment options for Australian adults with sensory deficits in both cochleae. Current funding of Australian public hospitals does not provide for simultaneous bilateral cochlear implantation (CI) as a separate surgical procedure. Previous cost-effectiveness studies of sequential and simultaneous bilateral CI assumed that 100% of unilaterally treated patients transition to a sequential bilateral CI. This assumption does not place cochlear implantation in the context of the generally treated population. When mutually exclusive treatment options exist, such as unilateral CI, sequential bilateral CI, and simultaneous bilateral CI, the mean costs of the treated populations are weighted in the calculation of incremental cost-utility ratios. The objective was to evaluate the cost-utility of bilateral hearing aids (HAs) compared with unilateral, sequential, and simultaneous bilateral CI in Australian adults with bilateral severe to profound sensorineural hearing loss. Cost-utility analysis of secondary sources input to a Markov model. Australian health care perspective, lifetime horizon with costs and outcomes discounted 5% annually. Bilateral HAs as treatment for bilateral severe to profound sensorineural hearing loss compared with unilateral, sequential, and simultaneous bilateral CI. Incremental costs per quality adjusted life year (AUD/QALY). When compared with bilateral hearing aids, the incremental cost-utility ratio for the CI treatment population was AUD11,160/QALY. The incremental cost-utility ratio was weighted according to the number of patients treated unilaterally, sequentially, and simultaneously, as these were mutually exclusive treatment options. No peer-reviewed articles have reported the incremental analysis of cochlear implantation in a continuum of care for surgically treated populations with bilateral severe to profound sensorineural hearing loss. Unilateral, sequential, and simultaneous bilateral CI were cost-effective when compared with bilateral hearing aids. Technologies that reduce the total number of visits for a patient could introduce additional cost efficiencies into clinical practice.

  17. Cost-effectiveness of histamine receptor-2 antagonist versus proton pump inhibitor for stress ulcer prophylaxis in critically ill patients*.

    PubMed

    MacLaren, Robert; Campbell, Jon

    2014-04-01

    To examine the cost-effectiveness of using histamine receptor-2 antagonist or proton pump inhibitor for stress ulcer prophylaxis. Decision analysis model examining costs and effectiveness of using histamine receptor-2 antagonist or proton pump inhibitor for stress ulcer prophylaxis. Costs were expressed in 2012 U.S. dollars from the perspective of the institution and included drug regimens and the following outcomes: clinically significant stress-related mucosal bleed, ventilator-associated pneumonia, and Clostridium difficile infection. Effectiveness was the mortality risk associated with these outcomes and represented by survival. Costs, occurrence rates, and mortality probabilities were extracted from published data. A simulation model. A mixed adult ICU population. Histamine receptor-2 antagonist or proton pump inhibitor for 9 days of stress ulcer prophylaxis therapy. Output variables were expected costs, expected survival rates, incremental cost, and incremental survival rate. Univariate sensitivity analyses were conducted to determine the drivers of incremental cost and incremental survival. Probabilistic sensitivity analysis was conducted using second-order Monte Carlo simulation. For the base case analysis, the expected cost of providing stress ulcer prophylaxis was $6,707 with histamine receptor-2 antagonist and $7,802 with proton pump inhibitor, resulting in a cost saving of $1,095 with histamine receptor-2 antagonist. The associated mortality probabilities were 3.819% and 3.825%, respectively, resulting in an absolute survival benefit of 0.006% with histamine receptor-2 antagonist. The primary drivers of incremental cost and survival were the assumptions surrounding ventilator-associated pneumonia and bleed. The probabilities that histamine receptor-2 antagonist was less costly and provided favorable survival were 89.4% and 55.7%, respectively. A secondary analysis assuming equal rates of C. difficile infection showed a cost saving of $908 with histamine receptor-2 antagonists, but the survival benefit of 0.0167% favored proton pump inhibitors. Histamine receptor-2 antagonist therapy appears to reduce costs with survival benefit comparable to proton pump inhibitor therapy for stress ulcer prophylaxis. Ventilator-associated pneumonia and bleed are the variables most affecting these outcomes. The uncertainty in the findings justifies a prospective trial.

  18. Perceptually Guided Photo Retargeting.

    PubMed

    Xia, Yingjie; Zhang, Luming; Hong, Richang; Nie, Liqiang; Yan, Yan; Shao, Ling

    2016-04-22

    We propose perceptually guided photo retargeting, which shrinks a photo by simulating a human's process of sequentially perceiving visually/semantically important regions in a photo. In particular, we first project the local features (graphlets in this paper) onto a semantic space, wherein visual cues such as global spatial layout and rough geometric context are exploited. Thereafter, a sparsity-constrained learning algorithm is derived to select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path which simulates how a human actively perceives semantics in a photo. Furthermore, we learn the prior distribution of such active graphlet paths (AGPs) from training photos that are marked as esthetically pleasing by multiple users. The learned priors enforce the corresponding AGP of a retargeted photo to be maximally similar to those from the training photos. On top of the retargeting model, we further design an online learning scheme to incrementally update the model with new photos that are esthetically pleasing. The online update module makes the algorithm less dependent on the number and contents of the initial training data. Experimental results show that: 1) the proposed AGP is over 90% consistent with human gaze shifting path, as verified by the eye-tracking data, and 2) the retargeting algorithm outperforms its competitors significantly, as AGP is more indicative of photo esthetics than conventional saliency maps.

  19. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
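
    As a hedged illustration of the Stage 3 idea, the sketch below applies a first-order autoregressive correction to a raw forecast using the most recent known error; the coefficient and numbers are invented, and the actual ERRIS applies this in a transformed space with fitted parameters.

    ```python
    # Toy AR(1) error update in the spirit of Stage 3: shift today's raw forecast by a
    # fraction (rho) of yesterday's known forecast error. Values are illustrative only.
    def ar1_update(raw_forecast, last_obs, last_forecast, rho=0.8):
        last_error = last_obs - last_forecast
        return raw_forecast + rho * last_error

    obs_yesterday, fc_yesterday, fc_today = 120.0, 100.0, 95.0
    print(ar1_update(fc_today, obs_yesterday, fc_yesterday))  # 95 + 0.8 * 20 = 111
    ```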

  20. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state of the art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  1. Online Reinforcement Learning Using a Probability Density Estimation.

    PubMed

    Agostini, Alejandro; Celaya, Enric

    2017-01-01

    Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
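
    A minimal sketch of the general idea (not the paper's exact formula) follows: the forgetting applied to a mixture component depends on how informative the new sample is for that component relative to the local sample density. All names and the mapping from novelty to forgetting are assumptions for illustration.

    ```python
    # Illustrative density-modulated forgetting for one mixture component.
    def effective_forgetting(base_lambda, responsibility, local_density, eps=1e-6):
        # More new information relative to past density -> forget closer to base_lambda;
        # little new information -> factor near 1 (almost no forgetting).
        novelty = responsibility / (local_density + eps)
        return base_lambda ** min(novelty, 1.0)

    def update_weight(old_weight, responsibility, lam):
        # Exponentially weighted update of a component's sufficient statistic.
        return lam * old_weight + (1.0 - lam) * responsibility

    lam = effective_forgetting(0.95, responsibility=0.6, local_density=2.0)
    print(lam, update_weight(0.3, 0.6, lam))
    ```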

  2. File Specification for GEOS-5 FP (Forward Processing)

    NASA Technical Reports Server (NTRS)

    Lucchesi, R.

    2013-01-01

    The GEOS-5 FP Atmospheric Data Assimilation System (GEOS-5 ADAS) uses an analysis developed jointly with NOAA's National Centers for Environmental Prediction (NCEP), which allows the Global Modeling and Assimilation Office (GMAO) to take advantage of the developments at NCEP and the Joint Center for Satellite Data Assimilation (JCSDA). The GEOS-5 AGCM uses the finite-volume dynamics (Lin, 2004) integrated with various physics packages (e.g., Bacmeister et al., 2006), under the Earth System Modeling Framework (ESMF), including the Catchment Land Surface Model (CLSM) (e.g., Koster et al., 2000). The GSI analysis is a three-dimensional variational (3DVar) analysis applied in grid-point space to facilitate the implementation of anisotropic, inhomogeneous covariances (e.g., Wu et al., 2002; Derber et al., 2003). The GSI implementation for GEOS-5 FP incorporates a set of recursive filters that produce approximately Gaussian smoothing kernels and isotropic correlation functions. The GEOS-5 ADAS is documented in Rienecker et al. (2008). More recent updates to the model are presented in Molod et al. (2011). The GEOS-5 system actively assimilates roughly 2 × 10⁶ observations for each analysis, including about 7.5 × 10⁵ AIRS radiance data. The input stream is roughly twice this volume, but because of the large volume, the data are thinned commensurate with the analysis grid to reduce the computational burden. Data are also rejected from the analysis through quality control procedures designed to detect, for example, the presence of cloud. To minimize the spurious periodic perturbations of the analysis, GEOS-5 FP uses the Incremental Analysis Update (IAU) technique developed by Bloom et al. (1996). More details of this procedure are given in Appendix A. The assimilation is performed at a horizontal resolution of 0.3125-degree longitude by 0.25-degree latitude and at 72 levels, extending to 0.01 hPa. All products are generated at the native resolution of the horizontal grid. The majority of data products are time-averaged, but four instantaneous products are also available. Hourly data intervals are used for two-dimensional products, while 3-hourly intervals are used for three-dimensional products. These may be on the model's native 72-layer vertical grid or at 42 pressure surfaces extending to 0.1 hPa. This document describes the gridded output files produced by the GMAO near real-time operational FP, using the most recent version of the GEOS-5 assimilation system. Additional details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GMAO products page: http://gmao.gsfc.nasa.gov/products/.
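
    The IAU idea referenced here can be sketched in a few lines: rather than inserting the analysis increment in one step, a constant fraction of it is added as forcing at every model step across the assimilation window. The toy model and numbers below are placeholders, not GEOS-5 code.

    ```python
    # Toy illustration of the Incremental Analysis Update (IAU): spread the analysis
    # increment evenly over the window as a constant forcing term.
    def model_step(x, dt=1.0):
        return x + dt * (-0.1 * x)          # stand-in for the forecast model tendency

    def iau_window(x, increment, n_steps):
        forcing = increment / n_steps       # constant fraction of the increment per step
        for _ in range(n_steps):
            x = model_step(x) + forcing
        return x

    x_background, analysis_increment = 10.0, 2.0
    print(iau_window(x_background, analysis_increment, n_steps=6))
    ```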

  3. Approach for Estimating Exposures and Incremental Health ...

    EPA Pesticide Factsheets

    Approach for Estimating Exposures and Incremental Health Effects from Lead During Renovation, Repair, and Painting Activities in Public and Commercial Buildings” (Technical Approach Document). Also available for public review and comment are two supplementary documents: the detailed appendices for the Technical Approach Document and a supplementary report entitled “Developing a Concentration-Response Function for Pb Exposure and Cardiovascular Disease-Related Mortality.” Together, these documents describe an analysis for estimating exposures and incremental health effects created by renovations of public and commercial buildings (P&CBs). This analysis could be used to identify and evaluate hazards from renovation, repair, and painting activities in P&CBs. A general overview of how this analysis can be used to inform EPA’s hazard finding is described in the Framework document that was previously made available for public comment (79 FR 31072; FRL9910-44). The analysis can be used in any proposed rulemaking to estimate the reduction in deleterious health effects that would result from any proposed regulatory requirements to mitigate exposure from P&CB renovation activities. The Technical Approach Document describes in detail how the analyses under this approach have been performed and presents the results – expected changes in blood lead levels and health effects due to lead exposure from renovation activities.

  4. Meta-Analysis of Incremental Rehearsal Using Phi Coefficients to Compare Single-Case and Group Designs

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Zaslofsky, Anne F.; Kanive, Rebecca; Parker, David C.

    2012-01-01

    The current study meta-analyzed single-case design (SCD) and group research regarding incremental rehearsal (IR). We used phi to meta-analyze data from 19 IR studies. Data from the SCD studies resulted in a nonoverlap of all pairs (NAP) score of 98.9% (95% CI = 97.6-100%), which resulted in a weighted phi of 0.77 (95% CI = 0.69-0.83). The group…

  5. Supplier Relationship Management at Army Life Cycle Management Commands: Gap Analysis of Best Practices

    DTIC Science & Technology

    2012-01-01

    307–308) define kaizen as “continuous, incremental improvement of an activity to create more value with less muda.” They define muda as “any activity...approaches, kaizen events, Six Sigma, total quality management (TQM) for continuous improvement, kaikaku,6 process reengineering for discontinuous...them fix problems and develop capabilities. These efforts may include kaizen (i.e., continuous, incremental improvement) events, process mapping, work

  6. Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications.

    PubMed

    Pérez-Torres, Rafael; Torres-Huitzil, César; Galeana-Zapién, Hiram

    2016-10-13

    The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location-based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay point detection. Such a Mobile Cloud Computing approach has been successfully employed to extend the smartphone's battery lifetime by offloading computation, on the assumption that on-device stay point detection is prohibitively expensive. In this article, we propose and validate the feasibility of an alternative event-driven mechanism for stay point detection that is executed fully on-device and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, in which a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay point detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution.
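
    For illustration, a simple distance/time threshold rule in the spirit of on-device stay-point detection is sketched below; the thresholds, planar coordinates, and batch (rather than event-driven) formulation are simplifying assumptions.

    ```python
    from math import hypot

    # Illustrative threshold-based stay-point detector. Points are (x_m, y_m, t_s);
    # real GPS fixes would use a geodesic distance rather than a planar one.
    def detect_stay_points(points, dist_thresh=200.0, time_thresh=20 * 60):
        stays, i, n = [], 0, len(points)
        while i < n:
            j = i + 1
            while j < n and hypot(points[j][0] - points[i][0],
                                  points[j][1] - points[i][1]) <= dist_thresh:
                j += 1
            if points[j - 1][2] - points[i][2] >= time_thresh:
                cluster = points[i:j]
                cx = sum(p[0] for p in cluster) / len(cluster)
                cy = sum(p[1] for p in cluster) / len(cluster)
                stays.append((cx, cy, points[i][2], points[j - 1][2]))
            i = j
        return stays

    track = [(0, 0, 0), (50, 10, 600), (60, 20, 1500), (900, 900, 1800)]
    print(detect_stay_points(track))
    ```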

  7. Middle Atmospheric Transport Properties of Assimilated Datasets

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Rood, Richard

    1999-01-01

    One of the most compelling reasons for performing data assimilation in the middle atmosphere is to obtain global, balanced datasets for studies of trace gas transport and chemistry. This is a major motivation behind the Goddard Earth observation System-Data Assimilation System (GEOS-DAS). Previous studies have shown that while this and other data assimilation systems can generally obtain good estimates of the extratropical rotational velocity field, the divergent part of the dynamical field is deficient; this impacts the "residual circulation" and leads to spurious trace gas transport on seasonal and interannual timescales. These problems are impacted by the quality and the method of use of the observational data and by deficiencies in the atmospheric general circulation model. Whichever the cause at any place and time, the "solution" is to introduce non-physical forcing terms into the system (the so-called incremental analysis updates); these can directly (thermal) or indirectly (mechanical) affect the residual circulation. This paper will illustrate how the divergent circulation is affected by deficiencies in both observations and models. Theoretical considerations will be illustrated with examples from the GEOS-DAS and from simplified numerical experiments. These are designed to isolate known problems, such as the inability of models to sustain a quasi-biennial oscillation and sparse observational constraints on tropical dynamics, or radiative inconsistencies in the presence of volcanic aerosols.

  9. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
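
    A hedged sketch of the fuzzy c-means update used to relocate centroids is shown below; the data, the fuzzifier m, and the single-iteration framing are illustrative, and the full AFLC wraps ART-style competitive and vigilance control around these updates.

    ```python
    import numpy as np

    # One fuzzy c-means (FCM) iteration: recompute memberships, then centroids.
    def fcm_step(X, centers, m=2.0, eps=1e-9):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, c)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # membership matrix (n, c)
        Um = U ** m
        new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        return U, new_centers

    X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [4.9, 5.0]])
    centers = np.array([[0.0, 0.0], [5.0, 5.0]])
    U, centers = fcm_step(X, centers)
    print(centers)
    ```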

  10. Cutoff for the East Process

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Lubetzky, E.; Martinelli, F.

    2015-05-01

    The East process is a one-dimensional kinetically constrained interacting particle system, introduced in the physics literature in the early 1990s to model liquid-glass transitions. Spectral gap estimates of Aldous and Diaconis in 2002 imply that its mixing time on L sites has order L. We complement that result and show cutoff with an O(√L)-window. The main ingredient is an analysis of the front of the process (its rightmost zero in the setup where zeros facilitate updates to their right). One expects the front to advance as a biased random walk, whose normal fluctuations would imply cutoff with an O(√L)-window. The law of the process behind the front plays a crucial role: Blondel showed that it converges to an invariant measure ν, about which very little is known. Here we obtain quantitative bounds on the speed of convergence to ν, finding that it is exponentially fast. We then derive that the increments of the front behave as a stationary mixing sequence of random variables, and a Stein-method-based argument of Bolthausen (1982) implies a CLT for the location of the front, yielding the cutoff result. Finally, we supplement these results with a study of analogous kinetically constrained models on trees, again establishing cutoff, yet this time with an O(1)-window.

  11. Detecting misinformation and knowledge conflicts in relational data

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian

    2014-06-01

    Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commanders' information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is a much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against the data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains the uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In an experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data) while producing 82% recall and 90% precision in a realistic noise situation (15% of missing attributes).

  12. Implicit personality theory in evaluation of brand extensions.

    PubMed

    Flaherty, K E; Pappas, J M

    2000-06-01

    Transference, the extent to which consumers transfer their opinions of a parent brand to a new extension, is critical to the success of any brand-extension strategy. Past research has shown that transference is a complex process that varies among persons depending upon an implicit personality theory, entity versus incremental. In a laboratory experiment, ratings of attitude, perceived fit and risk, prior product involvement, and implicit personality theory (entity versus incremental) from 100 21-year-old undergraduates were analyzed to examine the influence of consumers' implicit personality theory on transference within the brand-extension context. As expected, the amount of transference differed between those espousing entity and incremental theories. "Entity theorists" were much more likely to transfer feelings associated with the parent brand to the new extension than were "incremental theorists" who did not rely on prior brand information when forming evaluations of a new extension. This effect did not occur when perceived fit between the parent brand and the extension was high.

  13. Modeling the temporal periodicity of growth increments based on harmonic functions

    PubMed Central

    Morales-Bojórquez, Enrique; González-Peláez, Sergio Scarry; Bautista-Romero, J. Jesús; Lluch-Cota, Daniel Bernardo

    2018-01-01

    Age estimation methods based on hard structures require a process of validation to confirm the periodical pattern of growth marks. Among such processes, one of the most used is the marginal increment ratio (MIR), which was stated to follow a sinusoidal cycle in a population. Despite its utility, in most cases, its implementation has lacked robust statistical analysis. Accordingly, we propose a modeling approach for the temporal periodicity of growth increments based on single and second order harmonic functions. For illustrative purposes, the MIR periodicities for two geoduck species (Panopea generosa and Panopea globosa) were modeled to identify the periodical pattern of growth increments in the shell. This model identified an annual periodicity for both species but described different temporal patterns. The proposed procedure can be broadly used to objectively define the timing of the peak, the degree of symmetry, and therefore, the synchrony of band deposition of different species on the basis of MIR data. PMID:29694381
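
    As an illustration of the harmonic-function approach, the sketch below fits a second-order harmonic (annual plus semi-annual) model to monthly marginal increment ratio values by least squares; the MIR values are invented.

    ```python
    import numpy as np

    # Second-order harmonic fit to monthly marginal increment ratio (MIR) data.
    months = np.arange(1, 13)
    mir = np.array([0.20, 0.30, 0.45, 0.60, 0.75, 0.85, 0.90, 0.80, 0.60, 0.45, 0.30, 0.22])

    t = 2 * np.pi * months / 12.0
    # Design matrix: intercept plus first and second harmonics.
    A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t),
                         np.cos(2 * t), np.sin(2 * t)])
    coef, *_ = np.linalg.lstsq(A, mir, rcond=None)
    fitted = A @ coef
    print(np.round(coef, 3))   # amplitude and phase of each harmonic follow from these
    ```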

  14. The cost effectiveness of intracytoplasmic sperm injection (ICSI).

    PubMed

    Hollingsworth, Bruce; Harris, Anthony; Mortimer, Duncan

    2007-12-01

    To estimate the incremental cost-effectiveness of ICSI, and total costs for the population of Australia. Treatment effects for three patient groups were drawn from a published systematic review and meta-analysis of trials comparing fertilisation outcomes for ICSI. Incremental costs were derived from resource-based costing of ICSI and existing practice comparators for each patient group. The incremental cost per live birth for patients unsuited to IVF is estimated at between A$8,500 and A$13,400. For the subnormal semen indication, the cost per live birth could be as low as A$3,600, but in the worst-case scenario there would just be additional incremental costs of A$600 per procedure. Multiplying out the additional costs of ICSI over the relevant target populations in Australia gives potential total financial implications of over A$31 million per annum. While there are additional benefits from the ICSI procedure, particularly for those with subnormal sperm, the additional cost for the health care system is substantial.

  15. The Potential Impact of CO2 and Air Temperature Increases on Krummholz's Transformation into Arborescent Form in the Southern Siberian Mountains

    NASA Technical Reports Server (NTRS)

    Kharuk, V. I.; Dvinskaya, M. L.; Im, S. T.; Ranson, K. J.

    2011-01-01

    Trees in the southern Siberian Mountains forest-tundra ecotone have considerably increased their radial and apical growth increments during the last few decades. This leads to the widespread vertical transformation of mat and prostrate krummholz forms of larch (Larix sibirica Ledeb) and Siberian pine (Pinus sibirica Du Tour). An analysis of the radial growth increments showed that these transformations began in the mid-1980s. Larch showed a greater resistance to the harsh alpine environment and attained a vertical growth form in areas where Siberian pine is still krummholz. Upper larch treeline is about 10 m higher than Siberian pine treeline. Observed apical and radial growth increment increases were correlated with CO2 concentration (r = 0.83-0.87), summer temperatures (r = 0.55-0.64), and "cold period" (i.e. September-May) air temperatures (r = 0.36-0.37). Positive correlation between growth increments and winter precipitation was attributed to snow cover protection for trees during wintertime.

  16. Updated recommendations: an assessment of NICE clinical guidelines

    PubMed Central

    2014-01-01

    Background Updating is important to ensure clinical guideline (CG) recommendations remain valid. However, little research has been undertaken in this field. We assessed CGs produced by the National Institute for Health and Care Excellence (NICE) to identify and describe updated recommendations and to investigate potential factors associated with updating. Also, we evaluated the reporting and presentation of recommendation changes. Methods We performed a descriptive analysis of original and updated CGs and recommendations, and an assessment of presentation formats and methods for recording information. We conducted a case-control study, defining cases as original recommendations that were updated (‘new-replaced’ recommendations), and controls as original recommendations that were considered to remain valid (‘not changed’ recommendations). We performed a comparison of main characteristics between cases and controls, and we planned a multiple regression analysis to identify potential predictive factors for updating. Results We included nine updated CGs (1,306 recommendations) and their corresponding original versions (1,106 recommendations). Updated CGs included 812 (62%) recommendations ‘not reviewed’, 368 (28.1%) ‘new’ recommendations, 104 (7.9%) ‘amended’ recommendations, and 25 (1.9%) recommendations reviewed but unchanged. The presentation formats used to indicate the changes in recommendations varied widely across CGs. Changes in ‘amended’, ‘deleted’, and ‘new-replaced’ recommendations (n = 296) were reported infrequently, mostly in appendices. These changes were recorded in 167 (56.4%) recommendations; and were explained in 81 (27.4%) recommendations. We retrieved a total of 7.1% (n = 78) case recommendations (‘new-replaced’) and 2.4% (n = 27) control recommendations (‘not changed’) in original CGs. The updates were mainly from ‘Fertility CG’, about ‘gynaecology, pregnancy and birth’ topic, and ‘treatment’ or ‘prevention’ purposes. We did not perform the multiple regression analysis as originally planned due to the small sample of recommendations retrieved. Conclusion Our study is the first to describe and assess updated CGs and recommendations from a national guideline program. Our results highlight the pressing need to standardise the reporting and presentation of updated recommendations and the research gap about the optimal way to present updates to guideline users. Furthermore, there is a need to investigate updating predictive factors. PMID:24919856

  17. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
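
    The incremental (delta or correction) form can be illustrated on a small linear system: each iteration solves an approximate system for the correction driven by the current residual. The diagonal preconditioner below is a stand-in for the spatially split approximate factorization, and the matrix is a toy example.

    ```python
    import numpy as np

    # Delta-form iteration for A x = b: solve an approximate system M dx = (b - A x),
    # then update x by the correction dx. Here M is simply the diagonal of A.
    def delta_form_solve(A, b, n_iter=50):
        x = np.zeros_like(b)
        M = np.diag(np.diag(A))                 # crude approximate operator
        for _ in range(n_iter):
            residual = b - A @ x                # right-hand side of the delta form
            dx = np.linalg.solve(M, residual)   # correction from the approximate system
            x = x + dx
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(delta_form_solve(A, b), np.linalg.solve(A, b))
    ```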

  18. Summary Analysis: Hanford Site Composite Analysis Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, W. E.; Lehman, L. L.

    2017-06-05

    The Hanford Site’s currently maintained Composite Analysis, originally completed in 1998, requires an update. A previous update effort was undertaken by the U.S. Department of Energy (DOE) in 2001-2005, but was ended before completion to allow the Tank Closure & Waste Management Environmental Impact Statement (TC&WM EIS) (DOE/EIS-0391) to be prepared without potential for conflicting sitewide models. This EIS was issued in 2012, and the deferral was ended with guidance in memorandum “Modeling to Support Regulatory Decision Making at Hanford” (Williams, 2012) provided with the aim of ensuring subsequent modeling is consistent with the EIS.

  19. Updated (BP3) Technical and Economic Feasibility Study - Electrochemical Membrane for Carbon Dioxide Capture and Power Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghezel-Ayagh, Hossein

    This topical report summarizes the results of an updated Technical & Economic Feasibility Study (T&EFS) which was conducted in Budget Period 3 of the project to evaluate the performance and cost of the Electrochemical Membrane (ECM)-based CO2 capture system. The ECM technology is derived from commercially available inorganic membranes; the same used in FuelCell Energy’s commercial fuel cell power plants and sold under the trade name Direct FuelCell® (DFC®). The ECM stacks are utilized in the Combined Electric Power (generation) And Carbon dioxide Separation (CEPACS) systems which can be deployed as add-ons to conventional power plants (Pulverized Coal, Combined Cycle, etc.) or industrial facilities to simultaneously produce power while capturing >90% of the CO2 from the flue gas. In this study, an ECM-based CEPACS plant was designed to capture and compress >90% of the CO2 (for sequestration or beneficial use) from the flue gas of a reference 550 MW (nominal, net AC) Pulverized Coal (PC) Rankine Cycle (Subcritical steam) power plant. ECM performance was updated based on bench scale ECM stack test results. The system process simulations were performed to generate the CEPACS plant performance estimates. The performance assessment included estimation of the parasitic power consumption for CO2 capture and compression, and the efficiency impact on the PC plant. While the ECM-based CEPACS system for the 550 MW PC plant captures 90% of CO2 from the flue gas, it generates additional (net AC) power after compensating for the auxiliary power requirements of CO2 capture and compression. An equipment list, ECM stacks packaging design, and CEPACS plant layout were developed to facilitate the economic analysis. Vendor quotes were also solicited. The economic feasibility study included estimation of CEPACS plant capital cost, cost of electricity (COE) analyses and estimation of cost per ton of CO2 captured. The incremental COE for the ECM-based CO2 capture is expected to meet U.S. DOE’s target of 35%. This study has indicated that CEPACS systems offer significant benefits with respect to cost, performance, water consumption and emissions to environment. The realization of these benefits will provide a single solution to carbon dioxide capture in addition to meeting the increasing demand for electricity.

  1. Cost-effectiveness of available treatment options for patients suffering from severe COPD in the UK: a fully incremental analysis.

    PubMed

    Hertel, Nadine; Kotchie, Robert W; Samyshkin, Yevgeniy; Radford, Matthew; Humphreys, Samantha; Jameson, Kevin

    2012-01-01

    Frequent exacerbations which are both costly and potentially life-threatening are a major concern to patients with chronic obstructive pulmonary disease (COPD), despite the availability of several treatment options. This study aimed to assess the lifetime costs and outcomes associated with alternative treatment regimens for patients with severe COPD in the UK setting. A Markov cohort model was developed to predict lifetime costs, outcomes, and cost-effectiveness of various combinations of a long-acting muscarinic antagonist (LAMA), a long-acting beta agonist (LABA), an inhaled corticosteroid (ICS), and roflumilast in a fully incremental analysis. Patients willing and able to take ICS, and those refusing or intolerant to ICS were analyzed separately. Efficacy was expressed as relative rate ratios of COPD exacerbation associated with alternative treatment regimens, taken from a mixed treatment comparison. The analysis was conducted from the UK National Health Service (NHS) perspective. Parameter uncertainty was explored using one-way and probabilistic sensitivity analysis. Based on the results of the fully incremental analysis a cost-effectiveness frontier was determined, indicating those treatment regimens which represent the most cost-effective use of NHS resources. For ICS-tolerant patients the cost-effectiveness frontier suggested LAMA as initial treatment. Where patients continue to exacerbate and additional therapy is required, LAMA + LABA/ICS can be a cost-effective option, followed by LAMA + LABA/ICS + roflumilast (incremental cost-effectiveness ratio [ICER] versus LAMA + LABA/ICS: £16,566 per quality-adjusted life-year [QALY] gained). The ICER in ICS-intolerant patients, comparing LAMA + LABA + roflumilast versus LAMA + LABA, was £13,764/QALY gained. The relative rate ratio of exacerbations was identified as the primary driver of cost-effectiveness. The treatment algorithm recommended in UK clinical practice represents a cost-effective approach for the management of COPD. The addition of roflumilast to the standard of care regimens is a clinical and cost-effective treatment option for patients with severe COPD, who continue to exacerbate despite existing bronchodilator therapy.
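
    A toy Markov cohort sketch of the mechanics described above (health states, yearly cycles, accumulated costs and QALYs) follows; the states, transition probabilities, costs, and utilities are invented and far simpler than the published model, and discounting is omitted.

    ```python
    import numpy as np

    # Toy Markov cohort model: three states (stable, exacerbation, dead), yearly cycles.
    P = np.array([[0.80, 0.15, 0.05],    # from stable
                  [0.60, 0.25, 0.15],    # from exacerbation
                  [0.00, 0.00, 1.00]])   # dead is absorbing
    state_cost = np.array([1_000.0, 6_000.0, 0.0])   # cost per cycle in each state
    state_utility = np.array([0.75, 0.50, 0.0])      # QALY weight per cycle

    cohort = np.array([1.0, 0.0, 0.0])               # everyone starts in the stable state
    total_cost = total_qaly = 0.0
    for _ in range(40):                              # lifetime horizon, no discounting
        total_cost += cohort @ state_cost
        total_qaly += cohort @ state_utility
        cohort = cohort @ P
    print(round(total_cost), round(total_qaly, 2))
    ```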

  2. An indirect comparison and cost per responder analysis of adalimumab, methotrexate and apremilast in the treatment of methotrexate-naïve patients with psoriatic arthritis.

    PubMed

    Betts, Keith A; Griffith, Jenny; Friedman, Alan; Zhou, Zheng-Yi; Signorovitch, James E; Ganguli, Arijit

    2016-01-01

    Apremilast was recently approved for the treatment of active psoriatic arthritis (PsA). However, no studies compare apremilast with methotrexate or biologic therapies, so its relative comparative efficacy remains unknown. This study compared the response rates and incremental costs per responder associated with methotrexate, apremilast, and biologics for the treatment of active PsA. A systematic literature review was performed to identify phase 3 randomized controlled clinical trials of approved biologics, methotrexate, and apremilast in the methotrexate-naïve PsA population. Using Bayesian methods, a network meta-analysis was conducted to indirectly compare rates of achieving a ≥20% improvement in American College of Rheumatology component scores (ACR20). The number needed to treat (NNT) and the incremental costs per ACR20 responder (2014 US$) relative to placebo were estimated for each of the therapies. Three trials (MIPA for methotrexate, PALACE-4 for apremilast, and ADEPT for adalimumab) met all inclusion criteria. The NNTs relative to placebo were 2.63 for adalimumab, 6.69 for apremilast, and 8.31 for methotrexate. Among methotrexate-naïve PsA patients, the 16 week incremental costs per ACR20 responder were $3622 for methotrexate, $26,316 for adalimumab, and $45,808 for apremilast. The incremental costs per ACR20 responder were $222,488 for apremilast vs. methotrexate. Among methotrexate-naive PsA patients, adalimumab was found to have the lowest NNT for one additional ACR20 response and methotrexate was found to have the lowest incremental costs per ACR20 responder. There was no statistical evidence of greater efficacy for apremilast vs. methotrexate. A head-to-head trial between apremilast and methotrexate is recommended to confirm this finding.
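
    The NNT and incremental cost-per-responder arithmetic can be sketched as below; the response rates and costs are hypothetical, not those of the cited trials.

    ```python
    # Number needed to treat (NNT) and incremental cost per responder, with made-up inputs.
    def nnt(response_rate_tx, response_rate_placebo):
        return 1.0 / (response_rate_tx - response_rate_placebo)

    def cost_per_responder(cost_tx, cost_placebo, rate_tx, rate_placebo):
        return (cost_tx - cost_placebo) / (rate_tx - rate_placebo)

    # Hypothetical 16-week example: 58% vs. 20% ACR20 response, $10,000 vs. $0 drug cost.
    print(nnt(0.58, 0.20))                          # ~2.6 patients per extra responder
    print(cost_per_responder(10_000, 0, 0.58, 0.20))
    ```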

  3. Cost-effectiveness analysis of low-molecular-weight heparin versus aspirin thromboprophylaxis in patients newly diagnosed with multiple myeloma.

    PubMed

    Chalayer, Emilie; Bourmaud, Aurélie; Tinquaut, Fabien; Chauvin, Franck; Tardy, Bernard

    2016-09-01

    The aim of this study was to assess the cost-effectiveness of low-molecular-weight heparin versus aspirin as primary thromboprophylaxis throughout chemotherapy for newly diagnosed multiple myeloma patients treated with protocols including thalidomide, from the perspective of French health care providers. We used a modeling approach combining data from the only randomized trial evaluating the efficacy of the two treatments with secondary sources for costs and utility values. We performed a decision-tree analysis, and our base case was a hypothetical cohort of 10,000 patients. A bootstrap resampling technique was used. The incremental cost-effectiveness ratio was calculated using estimated quality-adjusted life years as the efficacy outcome. Incremental costs and effectiveness were estimated for each strategy and the incremental cost-effectiveness ratio was calculated. One-way sensitivity analyses were performed. The number of quality-adjusted life years was estimated to be 0.300 with aspirin and 0.299 with heparin; the estimated gain with aspirin was therefore approximately one day. Over 6 months, the mean total cost was €1,518 (SD = 601) per patient in the heparin arm and €273 (SD = 1,019) in the aspirin arm. This resulted in an incremental cost of €1,245 per patient treated with heparin. The incremental cost-effectiveness ratio for the aspirin versus heparin strategy was calculated to be -€687,398 (95% CI, -€13,457,369 to -€225,385). Aspirin rather than heparin thromboprophylaxis, during the first six months of chemotherapy for myeloma, is associated with significant cost savings per patient and also with an unexpected slight increase in quality of life.

  4. Fast Fourier Transform Spectral Analysis Program

    NASA Technical Reports Server (NTRS)

    Daniel, J. A., Jr.; Graves, M. L.; Hovey, N. M.

    1969-01-01

    Fast Fourier Transform Spectral Analysis Program is used in frequency spectrum analysis of postflight, space vehicle telemetered trajectory data. This computer program with a digital algorithm can calculate power spectrum rms amplitudes and cross spectrum of sampled parameters at even time increments.
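
    A minimal modern equivalent of the described computation, estimating a power spectrum and per-bin rms amplitude from evenly sampled data with the FFT, is sketched below; the synthetic signal and sampling rate are invented.

    ```python
    import numpy as np

    # Power-spectrum estimate from evenly sampled data via the FFT (illustrative only).
    fs = 100.0                                     # samples per second (even time increments)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    power = np.abs(X) ** 2 / t.size                # unnormalized periodogram
    rms_amplitude = np.sqrt(2.0 * power / t.size)  # rough per-bin rms amplitude

    peak = np.argmax(power[1:]) + 1                # skip the DC bin
    print(freqs[peak], rms_amplitude[peak])        # peak should be near 5 Hz
    ```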

  5. Effect of comprehensive cardiac telerehabilitation on one-year cardiovascular rehospitalization rate, medical costs and quality of life: A cost-effectiveness analysis.

    PubMed

    Frederix, Ines; Hansen, Dominique; Coninx, Karin; Vandervoort, Pieter; Vandijck, Dominique; Hens, Niel; Van Craenenbroeck, Emeline; Van Driessche, Niels; Dendale, Paul

    2016-05-01

    Notwithstanding the cardiovascular disease epidemic, current budgetary constraints do not allow for budget expansion of conventional cardiac rehabilitation programmes. Consequently, there is an increasing need for cost-effectiveness studies of alternative strategies such as telerehabilitation. The present study evaluated the cost-effectiveness of a comprehensive cardiac telerehabilitation programme. This multi-centre randomized controlled trial comprised 140 cardiac rehabilitation patients, randomized (1:1) to a 24-week telerehabilitation programme in addition to conventional cardiac rehabilitation (intervention group) or to conventional cardiac rehabilitation alone (control group). The incremental cost-effectiveness ratio was calculated based on intervention and health care costs (incremental cost), and the differential incremental quality adjusted life years (QALYs) gained. The total average cost per patient was significantly lower in the intervention group (€2156 ± €126) than in the control group (€2720 ± €276) (p = 0.01) with an overall incremental cost of €-564.40. Dividing this incremental cost by the baseline adjusted differential incremental QALYs (0.026 QALYs) yielded an incremental cost-effectiveness ratio of €-21,707/QALY. The number of days lost due to cardiovascular rehospitalizations in the intervention group (0.33 ± 0.15) was significantly lower than in the control group (0.79 ± 0.20) (p = 0.037). This paper shows the addition of cardiac telerehabilitation to conventional centre-based cardiac rehabilitation to be more effective and efficient than centre-based cardiac rehabilitation alone. These results are useful for policy makers charged with deciding how limited health care resources should best be allocated in the era of exploding need. © The European Society of Cardiology 2015.

  6. Comparison of Multifrequency Bioelectrical Impedance vs. Dual-Energy X-ray Absorptiometry for Assessing Body Composition Changes After Participation in a 10-Week Resistance Training Program.

    PubMed

    Schoenfeld, Brad J; Nickerson, Brett S; Wilborn, Colin D; Urbina, Stacie L; Hayward, Sara B; Krieger, James; Aragon, Alan A; Tinsley, Grant M

    2018-06-20

    Schoenfeld, BJ, Nickerson, BS, Wilborn, CD, Urbina, SL, Hayward, SB, Krieger, J, Aragon, AA, and Tinsley, G. Comparison of multifrequency bioelectrical impedance vs. dual-energy x-ray absorptiometry for assessing body composition changes after participation in a 10-week resistance training program. J Strength Cond Res XX(X): 000-000, 2018-The purpose of this study was to assess the ability of multifrequency bioelectrical impedance analysis (MF-BIA) to determine alterations in total and segmental body composition across a 10-week resistance training (RT) program in comparison with the criterion reference dual-energy X-ray absorptiometry (DXA). Twenty-one young male volunteers (mean ± SD; age = 22.9 ± 3.0 years; height = 175.5 ± 5.9 cm; body mass = 82.9 ± 13.6 kg; body mass index = 26.9 ± 3.6) performed an RT program that included exercises for all major muscle groups. Body composition was assessed using both methods before and after the intervention; change scores were determined by subtracting pre-test values from post-test values for percent body fat (Δ%BF), fat mass (ΔFM), and fat-free mass (ΔFFM). Mean changes were not significantly different when comparing MF-BIA with DXA for Δ%BF (-1.05 vs. -1.28%), ΔFM (-1.13 vs. -1.19 kg), and FFM (0.10 vs. 0.37 kg, respectively). Both methods showed strong agreement for Δ%BF (r = 0.75; standard error of the estimate [SEE] = 1.15%), ΔFM (r = 0.84; SEE 1.0 kg), and ΔFFM (r = 0.71; SEE of 1.5 kg). The 2 methods were poor predictors of each other in regards to changes in segmental measurements. Our data indicate that MF-BIA is an acceptable alternative for tracking changes in FM and FFM during a combined diet and exercise program in young, athletic men, but segmental lean mass measurements must be interpreted with circumspection.

  7. Do other-reports of counterproductive work behavior provide an incremental contribution over self-reports? A meta-analytic comparison.

    PubMed

    Berry, Christopher M; Carpenter, Nichelle C; Barratt, Clare L

    2012-05-01

    Much of the recent research on counterproductive work behaviors (CWBs) has used multi-item self-report measures of CWB. Because of concerns over self-report measurement, there have been recent calls to collect ratings of employees' CWB from their supervisors or coworkers (i.e., other-raters) as alternatives or supplements to self-ratings. However, little is still known about the degree to which other-ratings of CWB capture unique and valid incremental variance beyond self-report CWB. The present meta-analysis investigates a number of key issues regarding the incremental contribution of other-reports of CWB. First, self- and other-ratings of CWB were moderately to strongly correlated with each other. Second, with some notable exceptions, self- and other-report CWB exhibited very similar patterns and magnitudes of relationships with a set of common correlates. Third, self-raters reported engaging in more CWB than other-raters reported them engaging in, suggesting other-ratings capture a narrower subset of CWBs. Fourth, other-report CWB generally accounted for little incremental variance in the common correlates beyond self-report CWB. Although many have viewed self-reports of CWB with skepticism, the results of this meta-analysis support their use in most CWB research as a viable alternative to other-reports. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  8. Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.

    PubMed

    Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim

    2017-12-12

    In this work, we present a new pair natural orbital (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10⁻⁷ to 10⁻⁸ for reactions and 10⁻⁸ for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall times (i.e., factors >10²) due to the parallelization of the increment calculations, as well as in total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and provides access to larger systems through the separation of the full computation into several small increments.

  9. Learning other agents' preferences in multiagent negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bui, H.H.; Kieronska, D.; Venkatesh, S.

    In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This might prevent the agents from making coordinated choices, purely due to their ignorance of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents' preferences via past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better coordinated decisions. Combining both communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply not desirable (e.g., to preserve individual privacy).

  10. Incremental triangulation by way of edge swapping and local optimization

    NASA Technical Reports Server (NTRS)

    Wiltberger, N. Lyn

    1994-01-01

    This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user to specify either the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criterion value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
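    A toy Python sketch of the heap-driven refinement loop described above: the cell with the largest refinement measure is repeatedly popped, refined, and its children pushed back until a tolerance or a cell-count target is reached. Here a cell is reduced to its measure value and refinement simply halves it; the real code would insert a Steiner point and re-swap edges, so this shows only the control flow, not the HARLEY implementation:

      import heapq
      import itertools

      def heap_refine(initial_measures, tolerance=None, max_cells=None):
          counter = itertools.count()                       # tie-breaker so the heap never compares cells
          heap = [(-m, next(counter)) for m in initial_measures]
          heapq.heapify(heap)
          while heap:
              worst = -heap[0][0]                           # largest refinement measure in the mesh
              if tolerance is not None and worst <= tolerance:
                  break
              if max_cells is not None and len(heap) >= max_cells:
                  break
              heapq.heappop(heap)                           # remove the worst cell ...
              for child in (worst / 2.0, worst / 2.0):      # ... and stand in for splitting it
                  heapq.heappush(heap, (-child, next(counter)))
          return sorted(-m for m, _ in heap)

      print(heap_refine([8.0, 3.0, 1.0], tolerance=2.0))    # refine until every measure <= 2.0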

  11. Analysis of Tube Free Hydroforming using an Inverse Approach with FLD-based Adjustment of Process Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.

    2003-04-01

    This paper employs an inverse approach (IA) formulation for the analysis of tubes under free hydroforming conditions. The IA formulation is derived from that of Guo et al. established for flat sheet hydroforming analysis using constant strain triangular membrane elements. At first, an incremental analysis of free hydroforming for a hot-dip galvanized (HG/Z140) DP600 tube is performed using the finite element Marc code. The deformed geometry obtained at the last converged increment is then used as the final configuration in the inverse analysis. This comparative study allows us to assess the predictive capability of the inverse analysis. The results will be compared with the experimental values determined by Asnafi and Skogsgardh. After that, a procedure based on a forming limit diagram (FLD) is proposed to adjust process parameters such as the axial feed and internal pressure. Finally, the adjustment process is illustrated through a re-analysis of the same tube using the inverse approach.

  12. Complex Physiological Response of Norway Spruce to Atmospheric Pollution - Decreased Carbon Isotope Discrimination and Unchanged Tree Biomass Increment.

    PubMed

    Čada, Vojtěch; Šantrůčková, Hana; Šantrůček, Jiří; Kubištová, Lenka; Seedre, Meelis; Svoboda, Miroslav

    2016-01-01

    Atmospheric pollution critically affects forest ecosystems around the world by directly impacting the assimilation apparatus of trees and indirectly by altering soil conditions, which subsequently also leads to changes in carbon cycling. To evaluate the extent of the physiological effect of moderate level sulfate and reactive nitrogen acidic deposition, we performed a retrospective dendrochronological analysis of several physiological parameters derived from periodic measurements of carbon stable isotope composition ((13)C discrimination, intercellular CO2 concentration and intrinsic water use efficiency) and annual diameter increments (tree biomass increment, its inter-annual variability and correlation with temperature, cloud cover, precipitation and Palmer drought severity index). The analysis was performed in two mountain Norway spruce (Picea abies) stands of the Bohemian Forest (Czech Republic, central Europe), where moderate levels of pollution peaked in the 1970s and 1980s and no evident impact on tree growth or link to mortality has been reported. The significant influence of pollution on trees was expressed most sensitively by a 1.88‰ reduction of carbon isotope discrimination (Δ(13)C). The effects of atmospheric pollution interacted with increasing atmospheric CO2 concentration and temperature. As a result, we observed no change in intercellular CO2 concentrations (Ci), an abrupt increase in water use efficiency (iWUE) and no change in biomass increment, which could also partly result from changes in carbon partitioning (e.g., from below- to above-ground). The biomass increment was significantly related to Δ(13)C on an individual tree level, but the relationship was lost during the pollution period. We suggest that this was caused by a shift from the dominant influence of the photosynthetic rate to stomatal conductance on Δ(13)C during the pollution period. Using biomass increment-climate correlation analyses, we did not identify any clear pollution-related change in water stress or photosynthetic limitation (since biomass increment did not become more sensitive to drought/precipitation or temperature/cloud cover, respectively). Therefore, we conclude that the direct effect of moderate pollution on stomatal conductance was likely the main driver of the observed physiological changes. This mechanism probably caused weakening of the spruce trees and increased sensitivity to other stressors.

  13. Complex Physiological Response of Norway Spruce to Atmospheric Pollution – Decreased Carbon Isotope Discrimination and Unchanged Tree Biomass Increment

    PubMed Central

    Čada, Vojtěch; Šantrůčková, Hana; Šantrůček, Jiří; Kubištová, Lenka; Seedre, Meelis; Svoboda, Miroslav

    2016-01-01

    Atmospheric pollution critically affects forest ecosystems around the world by directly impacting the assimilation apparatus of trees and indirectly by altering soil conditions, which subsequently also leads to changes in carbon cycling. To evaluate the extent of the physiological effect of moderate level sulfate and reactive nitrogen acidic deposition, we performed a retrospective dendrochronological analysis of several physiological parameters derived from periodic measurements of carbon stable isotope composition (13C discrimination, intercellular CO2 concentration and intrinsic water use efficiency) and annual diameter increments (tree biomass increment, its inter-annual variability and correlation with temperature, cloud cover, precipitation and Palmer drought severity index). The analysis was performed in two mountain Norway spruce (Picea abies) stands of the Bohemian Forest (Czech Republic, central Europe), where moderate levels of pollution peaked in the 1970s and 1980s and no evident impact on tree growth or link to mortality has been reported. The significant influence of pollution on trees was expressed most sensitively by a 1.88‰ reduction of carbon isotope discrimination (Δ13C). The effects of atmospheric pollution interacted with increasing atmospheric CO2 concentration and temperature. As a result, we observed no change in intercellular CO2 concentrations (Ci), an abrupt increase in water use efficiency (iWUE) and no change in biomass increment, which could also partly result from changes in carbon partitioning (e.g., from below- to above-ground). The biomass increment was significantly related to Δ13C on an individual tree level, but the relationship was lost during the pollution period. We suggest that this was caused by a shift from the dominant influence of the photosynthetic rate to stomatal conductance on Δ13C during the pollution period. Using biomass increment-climate correlation analyses, we did not identify any clear pollution-related change in water stress or photosynthetic limitation (since biomass increment did not become more sensitive to drought/precipitation or temperature/cloud cover, respectively). Therefore, we conclude that the direct effect of moderate pollution on stomatal conductance was likely the main driver of the observed physiological changes. This mechanism probably caused weakening of the spruce trees and increased sensitivity to other stressors. PMID:27375659

  14. Vibration analysis of resistance spot welding joint for dissimilar plate structure (mild steel 1010 and stainless steel 304)

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Alawi, D. A. J.

    2017-09-01

    Resistance spot welding (RSW) is a proficient joining method commonly used for sheet metal and is one of the oldest spot welding processes used in industry, especially in the automotive sector. RSW involves the application of heat and pressure over a controlled time when joining two or more metal sheets at a localized area, and is claimed to be the most efficient welding process in metal fabrication. The purpose of this project is to perform model updating of an RSW plate structure joining mild steel 1010 and stainless steel 304. In order to do the updating, normal mode finite element analysis (FEA) and experimental modal analysis (EMA) have been carried out. Results show that the discrepancies in natural frequency between FEA and EMA are below 10%. Sensitivity-based model updating is evaluated in order to determine which parameters are influential in this structural dynamic modification. Young's modulus and density of both materials are identified as significant parameters for model updating. In conclusion, after performing model updating, the total average error of the dissimilar RSW plate model is improved significantly.

  15. Relating annual increments of the endangered Blanding's turtle plastron growth to climate

    PubMed Central

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-01-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration. PMID:24963390

  16. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    PubMed

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.

  17. Clinical and cost-effectiveness analysis of early detection of patients at nutrition risk during their hospital stay through the new screening method CIPA: a study protocol.

    PubMed

    Suárez-Llanos, José Pablo; Benítez-Brito, Néstor; Vallejo-Torres, Laura; Delgado-Brito, Irina; Rosat-Rodrigo, Adriá; Hernández-Carballo, Carolina; Ramallo-Fariña, Yolanda; Pereyra-García-Castro, Francisca; Carlos-Romero, Juan; Felipe-Pérez, Nieves; García-Niebla, Jennifer; Calderón-Ledezma, Eduardo Mauricio; González-Melián, Teresa de Jesús; Llorente-Gómez de Segura, Ignacio; Barrera-Gómez, Manuel Ángel

    2017-04-20

    Malnutrition is highly prevalent in hospitalized patients and results in a worsened clinical course as well as increased length of stay, mortality, and costs. Therefore, simple nutrition screening systems, such as CIPA (control of food intake, protein, anthropometry), may be implemented to facilitate the patient's recovery process. The aim of this study is to evaluate the effectiveness and cost-effectiveness of implementing such a screening tool in a tertiary hospital, in view of the lack of similar published studies on any hospital nutrition screening system. The present study is carried out as an open, controlled, randomized study on patients admitted to the Internal Medicine and the General and Digestive Surgery wards; the patients were randomized to either a control or an intervention group (n = 824; 412 patients in each of the two study arms). The control group underwent usual inpatient clinical care, while the intervention group was evaluated with the CIPA screening tool for early detection of malnutrition and treated accordingly. CIPA nutrition screening was performed upon hospital admission and classified positive when at least one of the following parameters was met: 72 h food intake control < 50%, serum albumin < 3 g/dL, body mass index < 18.5 kg/m² (or mid-upper arm circumference ≤ 22.5 cm). In this case, the physician decided whether or not to provide nutrition support. The following variables will be evaluated: hospital length of stay (primary endpoint), mortality, 3-month readmission, and in-hospital complications. Likewise, the quality-of-life questionnaire EQ-5D-5L is being collected for all patients at hospital admission, discharge, and 3 months post-discharge. Analysis of cost-effectiveness will be performed by measuring effectiveness in terms of quality-adjusted life years (QALYs). The cost per patient will be established by identifying health care resource utilization; cost-effectiveness will be determined through the incremental cost-effectiveness ratio (ICER). We will calculate the incremental cost per QALY gained with respect to the intervention. This ongoing trial aims to evaluate the cost-effectiveness of implementing the malnutrition screening tool CIPA in a tertiary hospital. ClinicalTrials.gov (NCT02721706). First received: March 1, 2016; last updated: April 8, 2017; last verified: April 2017.
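    A minimal Python sketch of the CIPA positivity rule as stated in the protocol above (meeting any one criterion classifies the screen as positive); the parameter names are illustrative:

      def cipa_positive(intake_72h_pct, albumin_g_dl, bmi_kg_m2=None, muac_cm=None):
          """Return True if at least one CIPA criterion is met."""
          low_intake = intake_72h_pct < 50                      # 72 h food intake control < 50%
          low_albumin = albumin_g_dl < 3.0                      # serum albumin < 3 g/dL
          if bmi_kg_m2 is not None:
              low_anthropometry = bmi_kg_m2 < 18.5              # BMI < 18.5 kg/m^2
          else:                                                 # MUAC fallback when BMI is unavailable
              low_anthropometry = muac_cm is not None and muac_cm <= 22.5
          return low_intake or low_albumin or low_anthropometry

      # Adequate intake and albumin, but low BMI -> screen positive
      print(cipa_positive(intake_72h_pct=80, albumin_g_dl=3.4, bmi_kg_m2=17.9))   # True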

  18. 76 FR 1514 - Adoption of Updated EDGAR Filer Manual

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-11

    ... Filer Manual and the rule amendments is January 11, 2011. In accordance with the APA,\\8\\ we find that...-29547] Adoption of Updated EDGAR Filer Manual AGENCY: Securities and Exchange Commission. ACTION: Final... Electronic Data Gathering, Analysis, and Retrieval System (EDGAR) Filer Manual to reflect updates to the...

  19. 78 FR 29616 - Adoption of Updated EDGAR Filer Manual

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ... updated Filer Manual and the rule amendments is May 21, 2013. In accordance with the APA,\\7\\ we find that...-30515] Adoption of Updated EDGAR Filer Manual AGENCY: Securities and Exchange Commission. ACTION: Final... Electronic Data Gathering, Analysis, and Retrieval System (EDGAR) Filer Manual and related rules to reflect...

  20. NREL Updates Baseline Cost and Performance Data for Electricity Generation

    Science.gov Websites

    News Release: NREL Updates Baseline Cost and Performance Data for Electricity Generation Technologies. The update covers the electricity generation technology cost and performance data used to support and inform electric sector analysis.

  1. Continuously updated network meta-analysis and statistical monitoring for timely decision-making

    PubMed Central

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Egger, Matthias; Salanti, Georgia

    2016-01-01

    Pairwise and network meta-analysis (NMA) are traditionally used retrospectively to assess existing evidence. However, the current evidence often undergoes several updates as new studies become available. In each update, recommendations about the conclusiveness of the evidence and the need for further studies must be made. In the context of prospective meta-analysis, future studies are planned as part of the accumulation of the evidence. In this setting, multiple testing issues need to be taken into account when the meta-analysis results are interpreted. We extend ideas of sequential monitoring of meta-analysis to provide a methodological framework for updating NMAs. Based on the z-score for each network estimate (the ratio of effect size to its standard error) and the respective information gained after each study enters the NMA, we construct efficacy and futility stopping boundaries. An NMA treatment effect is considered conclusive when it crosses an appended stopping boundary. The methods are illustrated using a recently published NMA, where we show that evidence about a particular comparison can become conclusive via indirect evidence even if no further trials address this comparison. PMID:27587588
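    A minimal Python sketch of the monitoring logic described above: the z-score of a network estimate (effect size divided by its standard error) is checked against efficacy and futility boundaries. The boundary values below are placeholders; in practice they would come from an alpha-spending function evaluated at the accumulated information fraction:

      def assess_comparison(effect, standard_error, efficacy_bound, futility_bound):
          z = effect / standard_error            # z-score of the combined (direct + indirect) NMA estimate
          if abs(z) >= efficacy_bound:
              return "conclusive: efficacy boundary crossed"
          if abs(z) <= futility_bound:
              return "conclusive: futility boundary crossed"
          return "inconclusive: keep updating the NMA"

      print(assess_comparison(effect=-0.35, standard_error=0.12,
                              efficacy_bound=2.8, futility_bound=0.5))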

  2. Citation Discovery Tools for Conducting Adaptive Meta-analyses to Update Systematic Reviews.

    PubMed

    Bae, Jong-Myon; Kim, Eun Hee

    2016-03-01

    The systematic review (SR) is a research methodology that aims to synthesize related evidence. Updating previously conducted SRs is necessary when new evidence has been produced, but no consensus has yet emerged on the appropriate update methodology. The authors have developed a new SR update method called 'adaptive meta-analysis' (AMA) using the 'cited by', 'similar articles', and 'related articles' citation discovery tools in the PubMed and Scopus databases. This study evaluates the usefulness of these citation discovery tools for updating SRs. Lists were constructed by applying the citation discovery tools in the two databases to the articles analyzed by a published SR. The degree of overlap between the lists and distribution of excluded results were evaluated. The articles ultimately selected for the SR update meta-analysis were found in the lists obtained from the 'cited by' and 'similar' tools in PubMed. Most of the selected articles appeared in both the 'cited by' lists in Scopus and PubMed. The Scopus 'related' tool did not identify the appropriate articles. The AMA, which involves using both citation discovery tools in PubMed, and optionally, the 'related' tool in Scopus, was found to be useful for updating an SR.
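    A hedged Python sketch of pulling the 'cited by' and 'similar articles' candidate lists for one index article via the NCBI E-utilities, assuming Biopython's Entrez wrapper and the standard linknames (pubmed_pubmed_citedin for citing papers, pubmed_pubmed for similar articles); this is only an illustration, not the authors' AMA tooling, and it requires network access plus a real contact email:

      from Bio import Entrez

      Entrez.email = "reviewer@example.org"     # placeholder; NCBI requires a contact address

      def discover(pmid):
          candidates = set()
          for linkname in ("pubmed_pubmed_citedin", "pubmed_pubmed"):
              handle = Entrez.elink(dbfrom="pubmed", db="pubmed", id=pmid, linkname=linkname)
              record = Entrez.read(handle)
              handle.close()
              for linksetdb in record[0].get("LinkSetDb", []):
                  candidates.update(link["Id"] for link in linksetdb["Link"])
          return candidates

      # The union over all articles included in the original SR gives the screening list
      # for the updated (adaptive) meta-analysis.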

  3. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

    The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study of the 2006 version was conducted, as well as a comparison of the 2006 version to the existing 1983 CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  4. 10 CFR 72.70 - Safety analysis report updating.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Safety analysis report updating. 72.70 Section 72.70 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE Records...

  5. 78 FR 65634 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-01

    ... Updated Market Power Analysis of the Black Hills Corporation Public Utilities for the Northwest Region..., LLC submits the Triennial Market Power Update Analysis for Markets in the Northwest Region pursuant to...: Black Hills Power, Inc., Cheyenne Light Fuel & Power Company, Black Hills/Colorado Electric Utility Co...

  6. Numerical Analysis of Laminated, Orthotropic Composite Structures

    DTIC Science & Technology

    1975-11-01

    the meridian plane. In the first model, a nine degree-of-freedom, straight-sided, triangular element was used. In this element, the three... E = 13.79 GPa, ν_ns = 0.25, G_ns = 4.82 GPa, ν_nt = 0.25, G_nt = 4.82 GPa, ν_st = 0.45, G_st = 1.379 GPa ... (means zero values of axial acceleration, and angular acceleration and velocity for each load increment) NLINC (number of load increments with time

  7. Public Key Infrastructure (PKI) Increment 2 Root Cause Analysis (RCA) for Performance Assessments and Root Cause Analyses (PARCA)

    DTIC Science & Technology

    2015-05-01

    for issuing this critical change: (1) inability to achieve PKI Increment 2 Full Deployment Decision (FDD) within five years of program initiation...March 1, 2014 deadline), and (2) delay of over one year in the original FDD estimate provided to the Congress (1 March 2014 deadline). The proximate...to support a 1 March 2014 FDD." The Director, Performance Assessments and Root Cause Analyses (PARCA), asked the Institute for Defense Analyses

  8. Analysis of Marine Corps Renewable Energy Planning to Meet Installation Energy Security Requirements

    DTIC Science & Technology

    2013-12-03

    experimentation eventually gives way to the era of ferment. A few new technologies break through in the industry and are applied to a growing number of niche...experimentation and ferment eventually give way to an era of incremental change, where the industry down-selects to the most successful and efficient...clearly in the later part of the era of incremental change. Most renewables, however, are only just now moving into the second phase, the era of ferment

  9. Warfighter Information Network - Tactical (WIN-T) Increment 2: Second Follow-on Operational Test and Evaluation

    DTIC Science & Technology

    2015-05-01

    The HNW line-of-sight network is mounted on a 10-meter telescoping mast located just aft of the TCN's cab. The flat plate Range Throughput Extension Kit... Acronyms: TAC – Tactical Command Post; ATH – At-the-Halt; PoP – Point of Presence; SNE – Soldier Network Extension; NOSC – Network Operations & Security... Survivability/Lethality Analysis Directorate (ARL/SLAD) conducted a Cooperative Vulnerability and Penetration Assessment on WIN-T Increment 2. The Army

  10. Posaconazole vs fluconazole or itraconazole for prevention of invasive fungal diseases in patients with acute myeloid leukemia or myelodysplastic syndrome: a cost-effectiveness analysis in an Asian teaching hospital.

    PubMed

    Chan, Thomas S Y; Marcella, Stephen W; Gill, Harinder; Hwang, Yu-Yan; Kwong, Yok-Lam

    2016-01-01

    Posaconazole is superior to fluconazole/itraconazole in preventing invasive fungal diseases (IFDs) in neutropenic patients. Whether the higher cost of posaconazole is offset by decreases in IFDs in a given institute requires cost-effectiveness analysis encompassing the spectrum of IFDs and socioeconomic factors specific to that geographic area. This study performed a cost-effectiveness analysis of posaconazole prophylaxis for IFDs in an Asian teaching hospital, employing decision modeling and data on IFDs and medication costs specific to the institute, in neutropenic patients with acute myeloid leukemia (AML) or myelodysplastic syndrome (MDS). In the cost-effectiveness analysis, the higher cost of posaconazole was partially offset by a reduction in the cost of treating IFDs that were prevented, resulting in an incremental cost of 125,954 Hong Kong dollars (16,148 USD) per IFD avoided. Over a lifetime horizon, assuming the same case fatality rate of IFDs in both groups, use of posaconazole results in 0.07 discounted life years saved. This corresponds to an incremental cost of 116,023 HKD (14,875 USD) per life year saved. This incremental cost per life year saved with posaconazole prophylaxis fulfilled the World Health Organization-defined threshold for cost-effectiveness. Posaconazole prophylaxis was cost-effective in Hong Kong.

  11. Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2014-01-01

    Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio that represents the additional cost per one unit of outcome gained by a new strategy has served as the most widely accepted methodology in the CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of available data. Specifically, we propose that mean and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics as complementary measures for informed decision making. PMID:24605979
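    A small Python sketch of the complementary summary measures proposed above: mean- and median-based incremental ratios reported alongside per-strategy average cost-effectiveness ratios, computed here on illustrative per-patient arrays:

      import numpy as np

      def summarize(cost_new, eff_new, cost_old, eff_old):
          icer_mean = (np.mean(cost_new) - np.mean(cost_old)) / (np.mean(eff_new) - np.mean(eff_old))
          icer_median = (np.median(cost_new) - np.median(cost_old)) / (np.median(eff_new) - np.median(eff_old))
          acer_new = np.mean(cost_new) / np.mean(eff_new)      # average cost-effectiveness ratios
          acer_old = np.mean(cost_old) / np.mean(eff_old)
          return {"ICER_mean": icer_mean, "ICER_median": icer_median,
                  "ACER_new": acer_new, "ACER_old": acer_old}

      rng = np.random.default_rng(0)
      print(summarize(cost_new=rng.lognormal(8.0, 0.5, 200), eff_new=rng.normal(0.70, 0.10, 200),
                      cost_old=rng.lognormal(7.8, 0.5, 200), eff_old=rng.normal(0.65, 0.10, 200)))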

  12. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in this type of spatiotemporal data stream. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
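    One possible IPCA-based detector, sketched in Python to illustrate the general idea rather than the specific methods compared in the paper: the principal subspace is updated batch by batch with scikit-learn's IncrementalPCA, and points with an unusually large reconstruction error relative to the errors seen so far are flagged:

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      def stream_outliers(batches, n_components=3, threshold=3.0):
          ipca = IncrementalPCA(n_components=n_components)
          errors_seen = []
          for batch in batches:                              # batch: (n_samples, n_features) array
              ipca.partial_fit(batch)                        # update the principal subspace
              recon = ipca.inverse_transform(ipca.transform(batch))
              err = np.linalg.norm(batch - recon, axis=1)    # per-point reconstruction error
              if errors_seen:
                  mu, sigma = np.mean(errors_seen), np.std(errors_seen) + 1e-12
                  yield np.where(err > mu + threshold * sigma)[0]
              else:
                  yield np.array([], dtype=int)              # no history yet for the first batch
              errors_seen.extend(err.tolist())

      rng = np.random.default_rng(1)
      batches = [rng.normal(size=(100, 8)) for _ in range(5)]
      batches[3][0] += 15.0                                  # inject an obvious outlier
      for i, idx in enumerate(stream_outliers(batches)):
          print(f"batch {i}: flagged rows {idx}")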

  13. Update of the NEXT Ion Thruster Service Life Assessment with Post-Test Correlation to the Long Duration Test

    NASA Technical Reports Server (NTRS)

    Yim, John T.; Soulas, George C.; Shastry, Rohit; Choi, Maria; Mackey, Jonathan A.; Sarver-Verhey, Timothy R.

    2017-01-01

    The service life assessment for NASA's Evolutionary Xenon Thruster is updated to incorporate the results from the successful, voluntary early completion of the 51,184 hour long duration test, which demonstrated 918 kg of total xenon throughput. The results of the numerous post-test investigations, including destructive interrogations, have been assessed against all of the critical known and suspected failure mechanisms to update the life and throughput expectations for each major component. Analysis results for two of the most acute failure mechanisms, namely pit-and-groove erosion and aperture enlargement of the accelerator grid, are not updated in this work but will be published at a future time after analysis completion.

  14. An Update on Modifications to Water Treatment Plant Model

    EPA Science Inventory

    The Water Treatment Plant (WTP) model is an EPA tool for informing regulatory options. WTP has several versions: WTP2.2 can support regulatory analysis, while an updated version (WTP3.0) will allow plant-specific analysis (WTP-ccam) and thus help meet plant-specific treatment objectives...

  15. Updated Meta-Analysis of Learner Control within Educational Technology

    ERIC Educational Resources Information Center

    Karich, Abbey C.; Burns, Matthew K.; Maki, Kathrin E.

    2014-01-01

    Giving a student control over their learning has theoretical and intuitive appeal, but its effects are neither powerful nor consistent in the empirical literature base. This meta-analysis updated previous meta-analytic research by Niemiec, Sikorski, and Walberg by studying the overall effectiveness of providing learner control within educational…

  16. Re: errors in the NOF meta-analysis of calcium and vitamin D supplements

    USDA-ARS?s Scientific Manuscript database

    The manuscript entitled "Calcium plus vitamin D supplementation and risk of fractures: an updated meta-analysis from the National Osteoporosis Foundation" sought to update a former AHRQ evidence report. The study was commissioned by the NOF to inform the organization, since a significant controversy...

  17. Kentucky, 2007 forest inventory and analysis factsheet

    Treesearch

    Christopher M. Oswalt; Christopher R. King; Tony G. Johnson

    2010-01-01

    This science update provides an overview of the forest resource attributes of Kentucky. The overview is based on an annual inventory conducted by the Forest Inventory and Analysis (FIA) Program at the Southern Research Station of the USDA Forest Service. The inventory, along with Web-posted supplemental tables, will be updated annually.

  18. Can Aerosol Direct Radiative Effects Account for Analysis Increments of Temperature in the Tropical Atlantic?

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Alpert, Pinhas

    2016-01-01

    In the late 1990s, prior to the launch of the Terra satellite, atmospheric general circulation models (GCMs) did not include aerosol processes because aerosols were not properly monitored on a global scale and their spatial distributions were not known well enough for incorporation into operational GCMs. At the time of the first GEOS Reanalysis (Schubert et al. 1993), long time series of analysis increments (the corrections to the atmospheric state by all available meteorological observations) became readily available, enabling detailed analysis of the GEOS-1 errors on a global scale. Such analysis revealed that temperature biases were particularly pronounced in the Tropical Atlantic region, with patterns depicting a remarkable similarity to dust plumes emanating from the African continent, as evidenced by TOMS aerosol index maps. Yoram Kaufman was instrumental in encouraging us to pursue this issue further, resulting in the study reported in Alpert et al. (1998), where we attempted to assess aerosol forcing by studying the errors of the GEOS-1 GCM without aerosol physics within a data assimilation system. Based on this analysis, Alpert et al. (1998) put forward that dust aerosols are an important source of inaccuracies in numerical weather-prediction models in the Tropical Atlantic region, although a direct verification of this hypothesis was not possible at the time. Nearly 20 years later, numerical prediction models have increased in resolution and complexity of physical parameterizations, including the representation of aerosols and their interactions with the circulation. Moreover, with the advent of NASA's EOS program and subsequent satellites, atmospheric aerosols are now monitored globally on a routine basis, and their assimilation in global models is becoming well established. In this talk we will reexamine the Alpert et al. (1998) hypothesis using the most recent version of the GEOS-5 Data Assimilation System with assimilation of aerosols. We will explicitly calculate the impact of aerosols on the temperature analysis increments in the tropical Atlantic and assess the extent to which the inclusion of atmospheric aerosols has reduced these increments.

  19. Forests of Louisiana, 2014

    Treesearch

    S.N. Oswalt

    2017-01-01

    This resource update provides an overview of forest resources in Louisiana based on an inventory conducted by the U.S. Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station. Estimates are based on field data collected using the FIA annualized sample design and are updated yearly. The estimates presented in this update are for the...

  20. Forests of Kentucky, 2014

    Treesearch

    Thomas Brandeis; Andy Hartsell; KaDonna Randolph; Sonja Oswalt; Consuelo Brandeis

    2016-01-01

    This resource update provides an overview of forest resources in Kentucky based on an inventory conducted by the U.S. Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station. Estimates are based on field data collected using the FIA annualized sample design and are updated yearly. The estimates presented in this update are...

  1. Development and Psychometric Analysis of a Nurses’ Attitudes and Skills Safety Scale: Initial Results

    PubMed Central

    Armstrong, Gail E.; Dietrich, Mary; Norman, Linda; Barnsteiner, Jane; Mion, Lorraine

    2016-01-01

    Health care organizations have incorporated updated safety principles in the analysis of errors and in norms and standards. Yet no research exists that assesses bedside nurses’ perceived skills or attitudes toward updated safety concepts. The aims of this study were to develop a scale assessing nurses’ perceived skills and attitudes toward updated safety concepts, determine content validity, and examine internal consistency of the scale and subscales. Understanding nurses’ perceived skills and attitudes about safety concepts can be used in targeting strategies to enhance their safety practices. PMID:27479518

  2. Development and Psychometric Analysis of a Nurses' Attitudes and Skills Safety Scale: Initial Results.

    PubMed

    Armstrong, Gail E; Dietrich, Mary; Norman, Linda; Barnsteiner, Jane; Mion, Lorraine

    Health care organizations have incorporated updated safety principles in the analysis of errors and in norms and standards. Yet no research exists that assesses bedside nurses' perceived skills or attitudes toward updated safety concepts. The aims of this study were to develop a scale assessing nurses' perceived skills and attitudes toward updated safety concepts, determine content validity, and examine internal consistency of the scale and subscales. Understanding nurses' perceived skills and attitudes about safety concepts can be used in targeting strategies to enhance their safety practices.

  3. Comparing cost-effectiveness of X-Stop with minimally invasive decompression in lumbar spinal stenosis: a randomized controlled trial.

    PubMed

    Lønne, Greger; Johnsen, Lars Gunnar; Aas, Eline; Lydersen, Stian; Andresen, Hege; Rønning, Roar; Nygaard, Øystein P

    2015-04-15

    Randomized clinical trial with 2-year follow-up. To compare the cost-effectiveness of X-Stop with minimally invasive decompression in patients with symptomatic lumbar spinal stenosis. Lumbar spinal stenosis is the most common indication for operative treatment in the elderly. Although surgery is more costly than nonoperative treatment, health outcomes over more than 2 years were shown to be significantly better. Surgical treatment with minimally invasive decompression is widely used. X-Stop has been introduced as another minimally invasive technique showing good results compared with nonoperative treatment. We enrolled 96 patients aged 50 to 85 years, with symptoms of neurogenic intermittent claudication within 250-m walking distance and 1- or 2-level lumbar spinal stenosis, randomized to either minimally invasive decompression or X-Stop. Quality-adjusted life-years were based on EuroQol EQ-5D. The hospital unit costs were estimated by means of the top-down approach. Each cost unit was converted into a monetary value by dividing the overall cost by the number of cost units produced. The analysis of costs and health outcomes is presented by the incremental cost-effectiveness ratio. The study was terminated after a midway interim analysis because of a significantly higher reoperation rate in the X-Stop group (33%). The incremental cost for X-Stop compared with minimally invasive decompression was €2832 (95% confidence interval: 1886-3778), whereas the incremental health gain was 0.11 quality-adjusted life-years (95% confidence interval: -0.01 to 0.23). Based on the incremental cost and effect, the incremental cost-effectiveness ratio was €25,700. The majority of the bootstrap samples fell in the northeast corner of the cost-effectiveness plane, giving a 50% likelihood that X-Stop is cost-effective at the extra cost of €25,700 (incremental cost-effectiveness ratio) per quality-adjusted life-year. The significantly higher cost of X-Stop is mainly due to implant cost and the significantly higher reoperation rate. Level of evidence: 2.

  4. International Space Station Increment-2 Quick Look Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric

    2001-01-01

    The objective of this quick look report is to disseminate the International Space Station (ISS) Increment-2 reduced gravity environment preliminary analysis in a timely manner to the microgravity scientific community. This report is a quick look at the processed acceleration data collected by the Microgravity Acceleration Measurement System (MAMS) during the period of May 3 to June 8, 2001. The report is by no means an exhaustive examination of all the relevant activities, which occurred during the time span mentioned above for two reasons. First, the time span being considered in this report is rather short since the MAMS was not active throughout the time span being considered to allow a detailed characterization. Second, as the name of the report implied, it is a quick look at the acceleration data. Consequently, a more comprehensive report, the ISS Increment-2 report, will be published following the conclusion of the Increment-2 tour of duty. NASA sponsors the MAMS and the Space Acceleration Microgravity System (SAMS) to support microgravity science experiments, which require microgravity acceleration measurements. On April 19, 2001, both the MAMS and the SAMS units were launched on STS-100 from the Kennedy Space Center for installation on the ISS. The MAMS unit was flown to the station in support of science experiments requiring quasisteady acceleration data measurements, while the SAMS unit was flown to support experiments requiring vibratory acceleration data measurement. Both acceleration systems are also used in support of the vehicle microgravity requirements verification. The ISS reduced gravity environment analysis presented in this report uses mostly the MAMS acceleration data measurements (the Increment-2 report will cover both systems). The MAMS has two sensors. The MAMS Orbital Acceleration Research Experiment Sensor Subsystem, which is a low frequency range sensor (up to 1 Hz), is used to characterize the quasi-steady environment for payloads and vehicle. The MAMS High Resolution Acceleration Package is used to characterize the ISS vibratory environment up to 100 Hz. This quick look report presents some selected quasi-steady and vibratory activities recorded by the MAMS during the ongoing ISS Increment-2 tour of duty.

  5. Progression-free survival/time to progression as a potential surrogate for overall survival in HR+, HER2- metastatic breast cancer.

    PubMed

    Forsythe, Anna; Chandiwana, David; Barth, Janina; Thabane, Marroon; Baeck, Johan; Tremblay, Gabriel

    2018-01-01

    Several recent randomized controlled trials (RCTs) in hormone receptor-positive (HR+), human epidermal growth factor receptor 2-negative (HER2-) metastatic breast cancer (MBC) have demonstrated significant improvements in progression-free survival (PFS); however, few have reported improvement in overall survival (OS). The surrogacy of PFS or time to progression (TTP) for OS has not been formally investigated in HR+, HER2- MBC. A systematic literature review of RCTs in HR+, HER2- MBC was conducted to identify studies that reported both median PFS/TTP and OS. The correlation between PFS/TTP and OS was evaluated using Pearson's product-moment correlation and Spearman's rank correlation. Subgroup analyses were performed to explore possible reasons for heterogeneity. Errors-in-variables weighted least squares regression (LSR) was used to model incremental OS months as a function of incremental PFS/TTP months. An exploratory analysis investigated the impact of three covariates (chemotherapy vs hormonal/targeted therapy, PFS vs TTP, and first-line therapy vs second-line therapy or greater) on OS prediction. The lower 95% prediction band was used to determine the minimum incremental PFS/TTP months required to predict OS benefit (surrogate threshold effect [STE]). Forty studies were identified. There was a statistically significant correlation between median PFS/TTP and OS (Pearson = 0.741, P = 0.000; Spearman = 0.650, P = 0.000). These results proved consistent for chemotherapy and hormonal/targeted therapy. Univariate LSR analysis yielded an R² of 0.354, with 1 incremental PFS/TTP month corresponding to 1.13 incremental OS months. Controlling for the type of treatment (chemotherapy vs hormonal/targeted therapy), line of therapy (first vs subsequent), and progression measure (PFS vs TTP) led to an improved R² of 0.569, with 1 PFS/TTP month corresponding to 0.78 OS months. The STE for OS benefit was 5-6 months of incremental PFS/TTP. We demonstrated a significant association between PFS/TTP and OS, which may justify the use of PFS/TTP as a surrogate for OS benefit in HR+, HER2- MBC.
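    A simplified Python sketch of regressing incremental OS months on incremental PFS/TTP months across trials with weights (here, notional sample sizes); this uses ordinary weighted least squares via statsmodels rather than the errors-in-variables estimator applied in the study, and all numbers are illustrative:

      import numpy as np
      import statsmodels.api as sm

      delta_pfs = np.array([1.8, 2.5, 4.1, 5.6, 7.3, 9.0])   # incremental PFS/TTP months per trial
      delta_os = np.array([1.0, 2.2, 3.5, 4.1, 6.8, 7.9])    # incremental OS months per trial
      weights = np.array([120, 300, 250, 90, 400, 180])      # e.g. trial sample sizes

      X = sm.add_constant(delta_pfs)
      fit = sm.WLS(delta_os, X, weights=weights).fit()
      print(fit.params)        # intercept and slope: OS months gained per PFS/TTP month
      print(fit.rsquared)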

  6. An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul

    2013-12-01

    The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules such as, Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update have been described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment. ORMS can support in the decision making process of operators and managers in Nuclear Power Plants.

  7. Tooth brushing frequency and risk of new carious lesions.

    PubMed

    Holmes, Richard D

    2016-12-01

    Data sources: Medline, Embase, CINAHL and the Cochrane databases. Study selection: Two reviewers selected studies; case-control, prospective cohort, retrospective cohort and experimental trials evaluating the effect of toothbrushing frequency on the incidence or increment of new carious lesions were considered. Data extraction and synthesis: Two reviewers undertook data abstraction independently using pre-piloted forms. Study quality was assessed using a quality assessment tool for quantitative studies developed by the Effective Public Health Practice Project (EPHPP). Meta-analysis of caries outcomes was carried out using RevMan, and meta-regressions were undertaken to assess the influence of sample size, follow-up period, caries diagnosis level and study methodological quality. Results: Thirty-three studies were included, of which 13 were considered to be methodologically strong, 14 moderate and six weak. Twenty-five studies contributed to the quantitative analysis. Compared with frequent brushers, self-reported infrequent brushers demonstrated a higher incidence of carious lesions, OR = 1.50 (95% CI: 1.34-1.69). The odds of having carious lesions differed little when subgroup analysis was conducted to compare the incidence between ≥2 times/d vs <2 times/d, OR = 1.45 (95% CI: 1.21-1.74), and ≥1 time/d vs <1 time/d brushers, OR = 1.56 (95% CI: 1.37-1.78). Brushing <2 times/day was associated with a significantly greater increment of carious lesions compared with brushing ≥2 times/day, standardised mean difference [SMD] = 0.34 (95% CI: 0.18-0.49). Overall, infrequent brushing was associated with an increment of carious lesions, SMD = 0.28 (95% CI: 0.13-0.44). Meta-analysis conducted with the type of dentition as subgroups found the effect of infrequent brushing on incidence and increment of carious lesions was higher in the deciduous, OR = 1.75 (95% CI: 1.49-2.06), than the permanent dentition, OR = 1.39 (95% CI: 1.29-1.49). Meta-regression indicated that none of the included variables influenced the effect estimate. Conclusions: Individuals who state that they brush their teeth infrequently are at greater risk for the incidence or increment of new carious lesions than those brushing more frequently. The effect is more pronounced in the deciduous than in the permanent dentition. A few studies indicate that this effect is independent of the presence of fluoride in toothpaste.

  8. Coffee Drinking and the Risk of Endometrial Cancer: An Updated Meta-Analysis of Observational Studies.

    PubMed

    Lukic, Marko; Guha, Neela; Licaj, Idlir; van den Brandt, Piet A; Stayner, Leslie Thomas; Tavani, Alessandra; Weiderpass, Elisabete

    2018-01-01

    Several compounds contained in coffee have been found to suppress carcinogenesis in experimental studies. We conducted a dose-response meta-analysis to assess the impact of coffee consumption on the risk of endometrial cancer. We searched the MEDLINE and EMBASE databases for studies published up to August 2016. Using random effects models, we estimated summary relative risks (RR) for cohort studies and odds ratios (OR) for case-control studies with 95% confidence intervals (CI). Dose-response analyses were conducted by using generalized least square trend estimation. We identified 12 cohort studies and 8 case-control studies eligible for inclusion, contributing 11,663 and 2,746 endometrial cancer cases, respectively. The summary RR for the highest compared with the lowest coffee intake was 0.74 (95% CI: 0.68-0.81; p-heterogeneity = 0.09, I² = 32%). The corresponding summary RR among cohort studies was 0.78 (95% CI: 0.71-0.85; p-heterogeneity = 0.14, I² = 31.9%) and 0.63 (95% CI: 0.53-0.76; p-heterogeneity = 0.57, I² = 0%) for case-control studies. A one-cup increment per day was associated with a 3% risk reduction (95% CI: 2-4%) in cohort studies and 12% (95% CI: 5-18%) in case-control studies. After pooling the results from 5 cohort studies, the association remained significant only in women with a body mass index over 30 (RR = 0.71, 95% CI: 0.61-0.81). The results from our meta-analysis strengthen the evidence of a protective effect of coffee consumption on the risk of endometrial cancer and further suggest that increased coffee intake might be particularly beneficial for women with obesity.

  9. Cost-effectiveness of vaccination against pneumococcal bacteremia among elderly people.

    PubMed

    Sisk, J E; Moskowitz, A J; Whang, W; Lin, J D; Fedson, D S; McBean, A M; Plouffe, J F; Cetron, M S; Butler, J C

    Clinical, epidemiologic, and policy considerations support updating the cost-effectiveness of pneumococcal vaccination for elderly people and targeting the evaluation only to prevention of pneumococcal bacteremia. To assess the implications for medical costs and health effects of vaccination against pneumococcal bacteremia in elderly people. Cost-effectiveness analysis of pneumococcal vaccination compared with no vaccination, from a societal perspective. The elderly population aged 65 years and older in the United States in 3 geographic areas: metropolitan Atlanta, Ga; Franklin County, Ohio; and Monroe County, New York. Incremental medical costs and health effects, expressed in quality-adjusted life-years per person vaccinated. Vaccination was cost saving, i.e., it both reduced medical expenses and improved health, for all age groups and geographic areas analyzed in the base case. For people aged 65 years and older, vaccination saved $8.27 and gained 1.21 quality-adjusted days of life per person vaccinated. Vaccination of the 23 million elderly people unvaccinated in 1993 would have gained about 78,000 years of healthy life and saved $194 million. In univariate sensitivity analysis, the results remained cost saving except for doubling vaccination costs, including future medical costs of survivors, and lowering vaccination effectiveness. With assumptions most unfavorable to vaccination, cost per quality-adjusted life-year ranged from $35,822 for ages 65 to 74 years to $598,487 for ages 85 years and older. In probabilistic sensitivity analysis, probability intervals were more narrow, with less than 5% probability that the ratio for ages 85 years and older would exceed $100,000. Pneumococcal vaccination saves costs in the prevention of bacteremia alone and is greatly underused among the elderly population, on both health and economic grounds. These results support recent recommendations of the Advisory Committee on Immunization Practices and public and private efforts under way to improve vaccination rates.

  10. Circulating levels of C-reactive protein, interleukin-6 and tumor necrosis factor-α and risk of colorectal adenomas: a meta-analysis.

    PubMed

    Zhang, Xiaoqian; Liu, Shanglong; Zhou, Yanbing

    2016-09-27

    Results from publications on inflammatory markers of C-reactive protein (CRP), interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) and the risk of colorectal adenomas are not consistent. A meta-analysis was conducted to explore the above-mentioned associations. Relevant studies were identified by a search of Embase, Medline and PubMed through February 2016. A random effects model was adopted to combine study-specific odds ratios (OR) and 95% confidence intervals (95% CI). Between-study heterogeneity and publication bias were assessed. Dose-response relationships were assessed by restricted cubic splines. Nineteen observational studies were included. For the highest vs. lowest levels, results from this meta-analysis did not support an association between circulating levels of CRP [OR (95% CI): 1.15 (0.94-1.40)], IL-6 [1.17 (0.94-1.46)] and TNF-α [0.99 (0.75-1.31)] and the risk of colorectal adenomas, respectively. The findings were supported by sensitivity analysis and subgroup analysis. In the dose-response analysis, the risk of colorectal adenomas increased by 2% [1.02 (0.97-1.08)] for each 1 mg/L increment in circulating CRP levels, 9% [1.09 (0.91-1.31)] for each 1 ng/L increment in circulating IL-6 levels, and 6% [1.06 (0.93-1.21)] for each 1 pg/mL increment in circulating TNF-α levels. Moderate between-study heterogeneity was found. No evidence of publication bias was found. Circulating levels of CRP, IL-6 and TNF-α might not be useful biomarkers for identifying colorectal adenomas.

  11. Improved Design of Tunnel Supports : Volume 3 : Finite Element Analysis of the Peachtree Center Station in Atlanta

    DOT National Transportation Integrated Search

    1980-06-01

    Volume 3 contains the application of the three-dimensional (3-D) finite element program, Automatic Dynamic Incremental Nonlinear Analysis (ADINA), which was designed to replace the traditional 2-D plane strain analysis, to a specific location. The lo...

  12. Cost-Effectiveness of Haemorrhoidal Artery Ligation versus Rubber Band Ligation for the Treatment of Grade II-III Haemorrhoids: Analysis Using Evidence from the HubBLe Trial.

    PubMed

    Alshreef, Abualbishr; Wailoo, Allan J; Brown, Steven R; Tiernan, James P; Watson, Angus J M; Biggs, Katie; Bradburn, Mike; Hind, Daniel

    2017-09-01

    Haemorrhoids are a common condition, with nearly 30,000 procedures carried out in England in 2014/15, and result in a significant quality-of-life burden to patients and a financial burden to the healthcare system. This study examined the cost effectiveness of haemorrhoidal artery ligation (HAL) compared with rubber band ligation (RBL) in the treatment of grade II-III haemorrhoids. This analysis used data from the HubBLe study, a multicentre, open-label, parallel group, randomised controlled trial conducted in 17 acute UK hospitals between September 2012 and August 2015. A full economic evaluation, including long-term cost effectiveness, was conducted from the UK National Health Service (NHS) perspective. Main outcomes included healthcare costs, quality-adjusted life-years (QALYs) and recurrence. Cost-effectiveness results were presented in terms of incremental cost per QALY gained and cost per recurrence avoided. Extrapolation analysis for 3 years beyond the trial follow-up, two subgroup analyses (by grade of haemorrhoids and recurrence following RBL at baseline), and various sensitivity analyses were undertaken. In the primary base-case within-trial analysis, the incremental total mean cost per patient for HAL compared with RBL was £1027 (95% confidence interval [CI] £782-£1272, p < 0.001). The incremental QALYs were 0.01 QALYs (95% CI -0.02 to 0.04, p = 0.49). This generated an incremental cost-effectiveness ratio (ICER) of £104,427 per QALY. In the extrapolation analysis, the estimated probabilistic ICER was £21,798 per QALY. Results from all subgroup and sensitivity analyses did not materially change the base-case result. Under all assessed scenarios, the HAL procedure was not cost effective compared with RBL for the treatment of grade II-III haemorrhoids at a cost-effectiveness threshold of £20,000 per QALY; therefore, economically, its use in the NHS should be questioned.
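
    For readers unfamiliar with the metric, an incremental cost-effectiveness ratio is simply the incremental cost divided by the incremental QALYs. The following minimal sketch reproduces the rough arithmetic behind the within-trial figure quoted above; the unrounded QALY difference is an assumed value chosen only to approximate the reported ratio:

      def icer(delta_cost, delta_qaly):
          """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
          return delta_cost / delta_qaly

      delta_cost = 1027.0    # GBP, HAL minus RBL (within-trial mean)
      delta_qaly = 0.00983   # assumed unrounded value; reported only as ~0.01 QALYs
      ratio = icer(delta_cost, delta_qaly)
      print(f"ICER = {ratio:,.0f} GBP per QALY")   # roughly the reported 104,427
      print("cost effective at 20,000/QALY" if ratio <= 20000 else "not cost effective at 20,000/QALY")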

  13. Economic analysis of human papillomavirus triage, repeat cytology, and immediate colposcopy in management of women with minor cytological abnormalities in Sweden.

    PubMed

    Ostensson, Ellinor; Fröberg, Maria; Hjerpe, Anders; Zethraeus, Niklas; Andersson, Sonia

    2010-10-01

    To assess the cost-effectiveness of using human papillomavirus testing (HPV triage) in the management of women with minor cytological abnormalities in Sweden. An economic analysis based on a clinical trial, complemented with data from published meta-analyses on the accuracy of HPV triage. The study takes the perspective of the Swedish healthcare system. The Swedish population-based cervical cancer screening program. A decision analytic model was constructed to evaluate the cost-effectiveness of HPV triage compared to repeat cytology and immediate colposcopy with biopsy, stratifying by index cytology (ASCUS = atypical squamous cells of undetermined significance, and LSIL = low-grade squamous intraepithelial lesion) and age (23-60 years, <30 years and ≥30 years). Costs, incremental cost, incremental effectiveness and incremental cost per additional high-grade lesion (CIN2+) detected. For women with ASCUS ≥30 years, HPV triage is the least costly alternative, whereas immediate colposcopy with biopsy provides the most effective option at an incremental cost-effectiveness ratio (ICER) of SEK 2,056 per additional case of CIN2+ detected. For LSIL (all age groups) and ASCUS (23-60 years and <30 years), HPV triage is dominated by immediate colposcopy and biopsy. Model results were sensitive to HPV test cost changes. With improved HPV testing techniques at lower costs, HPV triage can become a cost-effective alternative for follow-up of minor cytological abnormalities. Today, immediate colposcopy with biopsy is a cost-effective alternative compared to HPV triage and repeat cytology.

  14. Histomorphometric analysis of knee synovial membrane in dogs undergoing leg lengthening by classic Ilizarov method and rapid automatic distraction.

    PubMed

    Stupina, Tatyana A; Shchudlo, Mikhail M; Shchudlo, Natalia A; Stepanov, Mikhail A

    2013-10-01

    This pilot study aimed to test the theory that different lengthening methods affect the microscopic structure of the knee joint synovium in diverse ways. This was a descriptive and analytical cross-sectional study of synovium carried out in two experimental models of canine leg lengthening using the Ilizarov fixator. Group 1 (n = 6) underwent the manual gradual distraction most commonly used in clinical practice, at one millimetre/day divided into four equal increments of 0.25 mm each. Group 2 (n = 7) underwent automatic distraction at an increased rate of three millimetres/day divided into 120 increments of 0.025 mm each. At the end of distraction and after fixator removal, the animals were euthanised. Intact dogs served as controls. The thickness of the synovial lining layer, the number of synovial cell rows, and the numerical density of subsynovial microvessels were assessed in digital images by semiautomated computerised analysis of semi-thin sections stained with toluidine blue and methylene blue-basic fuchsin. Synovitis manifestation was compared using a grading scale. The vascular and nerve changes in the subsynovial layer were also compared. Group 1 developed marked synovitis, synovial hypervascularisation, and degeneration of the nerve fibres in subsynovial nerves with a tendency to regeneration. Group 2 had a moderate to mild degree of synovitis with progressive degenerative changes in subsynovial vessels and nerves. Both methods are unfavourable for the state of the joint synovium, but modify it in different ways.

  15. Number needed to treat and costs per responder among biologic treatments for moderate-to-severe psoriasis: a network meta-analysis.

    PubMed

    Armstrong, April W; Betts, Keith A; Signorovitch, James E; Sundaram, Murali; Li, Junlong; Ganguli, Arijit X; Wu, Eric Q

    2018-04-23

    The clinical benefits of biologic therapies for moderate-to-severe psoriasis are well established, but wide variations exist in patient response. To determine the number needed to treat (NNT) to achieve a 75% and 90% reduction in the Psoriasis Area and Severity Index (PASI-75/90) with FDA-approved agents and evaluate the incremental cost per PASI-75 or PASI-90 responder. The relative probabilities of achieving PASI-75 and PASI-90, as well as NNTs, were estimated using a network meta-analysis. Costs (2017 USD) included drug acquisition and administration. The incremental cost per PASI-75 or PASI-90 responder for each treatment was estimated for the clinical trial period, and annually. Compared with supportive care, the NNT to achieve PASI-75 was 1.18 for ixekizumab, 1.29 for secukinumab 300 mg, 1.37 for infliximab, 1.48 for adalimumab, 1.53 for secukinumab 150 mg, 1.58 for ustekinumab, 2.25 for etanercept, and 3.71 for apremilast. The one-year incremental cost per PASI-75 responder relative to supportive care was $59,830 for infliximab, $88,775 for secukinumab 300 mg, $91,837 for adalimumab, $95,898 for ixekizumab, $97,363 for ustekinumab, $105,131 for secukinumab 150 mg, $129,665 for apremilast, and $159,328 for etanercept. Results were similar for PASI-90. The NNT and incremental cost per responder are meaningful ways to assess comparative effectiveness and cost effectiveness among psoriasis treatments.
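
    The two headline quantities can be illustrated with a short sketch: the NNT relative to supportive care is the reciprocal of the difference in response probabilities, and (when the comparator has negligible drug cost) the incremental cost per responder is the annual drug cost scaled by that NNT. The probabilities and cost below are hypothetical placeholders, not the trial-derived inputs of the analysis:

      def nnt(p_treatment, p_control):
          """Number needed to treat for one additional PASI-75 responder."""
          return 1.0 / (p_treatment - p_control)

      def cost_per_responder(annual_drug_cost, p_treatment, p_control):
          """Incremental cost per additional responder, assuming ~zero comparator drug cost."""
          return annual_drug_cost * nnt(p_treatment, p_control)

      p_drug, p_supportive_care, annual_cost = 0.85, 0.05, 45000.0   # hypothetical inputs
      print(f"NNT = {nnt(p_drug, p_supportive_care):.2f}")
      print(f"cost per PASI-75 responder = ${cost_per_responder(annual_cost, p_drug, p_supportive_care):,.0f}")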

  16. Use of high-frequency peak in spectral analysis of heart rate increment to improve screening of obstructive sleep apnoea.

    PubMed

    Poupard, Laurent; Court-Fortune, Isabelle; Pichot, Vincent; Chouchou, Florian; Barthélémy, Jean-Claude; Roche, Frédéric

    2011-12-01

    Several studies have correlated the ratio of the very low frequency power spectral density of heart rate increment (%VLFI) with obstructive sleep apnoea syndrome (OSAS). However, patients with impaired heart rate variability may exhibit large variations in the heart rate increment (HRI) spectral pattern, which alters the screening accuracy of the method. To overcome this limitation, the present study uses the high-frequency increment (HFI) peak in the HRI spectrum, which corresponds to the respiratory influence on RR variations over the frequency range 0.2 to 0.4 Hz. We evaluated 288 consecutive patients referred for snoring, observed nocturnal breathing cessation and/or daytime sleepiness. Patients were classified as OSAS if their apnoea plus hypopnoea index (AHI) during polysomnography exceeded 15 events per hour. Synchronized electrocardiogram Holter monitoring allowed HRI analysis. Using a %VLFI threshold >2.4% for identifying the presence of OSAS, sensitivity for OSAS was 74.9%, specificity 51%, positive predictive value 54.9% and negative predictive value 71.7% (33 false negative subjects). Using thresholds of %VLFI >2.4% and HFI peak position >0.4 Hz, negative predictive value increased to 78.2% while maintaining specificity at 50.6%. Among 11 subjects with %VLFI <2.4% and HFI peak >0.4 Hz, nine demonstrated moderate to severe OSAS (AHI >30). HFI represents a minimal physiological criterion for applying %VLFI by ensuring that heart rate variations are band-limited in frequency.
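
    For illustration, a minimal sketch of locating a high-frequency peak in a heart rate increment spectrum is shown below; the even resampling, Welch estimation, and search band are simplifying assumptions for this example and do not reproduce the exact preprocessing used in the study:

      import numpy as np
      from scipy.signal import welch

      def hf_increment_peak(rr_ms, fs=4.0):
          """Return the frequency (Hz) of the dominant peak in the HRI spectrum near the HF band."""
          t = np.cumsum(rr_ms) / 1000.0                     # beat times in seconds
          grid = np.arange(t[0], t[-1], 1.0 / fs)           # even time grid for resampling
          rr_even = np.interp(grid, t, rr_ms)               # resampled RR series
          hri = np.diff(rr_even)                            # increment series
          f, pxx = welch(hri, fs=fs, nperseg=min(1024, len(hri)))
          band = (f >= 0.15) & (f <= 0.5)                   # search window around 0.2-0.4 Hz
          return f[band][np.argmax(pxx[band])]

      # A peak position above 0.4 Hz was used in the study as an additional criterion
      # alongside the %VLFI > 2.4% threshold.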

  17. Assessment of Rainfall-induced Landslide Potential and Spatial Distribution

    NASA Astrophysics Data System (ADS)

    Chen, Yie-Ruey; Tsai, Kuang-Jung; Chen, Jing-Wen; Chiang, Jie-Lun; Hsieh, Shun-Chieh; Chue, Yung-Sheng

    2016-04-01

    Recently, due to global climate change, rainfall in Taiwan is mostly of short duration but high intensity. Because of Taiwan's steep terrain, rainfall-induced landslides often occur and lead to human casualties and property losses. Taiwan's government has invested huge reconstruction funds in the affected areas. However, after rehabilitation they still face the risk of secondary sediment disasters. Therefore, this study assesses rainfall-induced (secondary) landslide potential and spatial distribution in a watershed of Southern Taiwan under extreme climate change. The study areas in this research are Baolai and Jianshan villages in the watershed of the Laonongxi River Basin in Southern Taiwan. This study focused on the 3 years after Typhoon Morakot (2009 to 2011). During this period, the study area experienced six heavy rainfall events, including five typhoons and one heavy rainfall event. Genetic adaptive neural networks, texture analysis, and GIS were used to interpret satellite images, obtain surface information and hazard log data, and analyze land-use change. A multivariate hazards evaluation method was applied to quantitatively analyze the weights of various natural environmental and slope development hazard factors. Furthermore, this study established a slope landslide potential assessment model and depicted a slope landslide potential diagram using the GIS platform. The interaction between (secondary) landslide mechanism, scale, and location was analyzed using association analysis of historical landslide data and regional environmental characteristics. The results of image classification before and after the six heavy rainfall events show that the values of the coefficient of agreement are at a medium-high level. By the multivariate hazards evaluation method, geology and the effective accumulative rainfall (EAR) are the most important factors. Slope, distance from fault, aspect, land disturbance, and elevation are factors of secondary importance. Under different rainfall, the greater the average EAR, the greater the number of landslide occurrences and the area increments. The determination coefficients of the trend lines relating the average EAR to the number and area of landslide increments are 0.83 and 0.92, respectively. The relations between landslide potential level, degree of land disturbance, and the ratio of the number and area of landslide increments corresponding to the six heavy rainfall events are positive, and the determination coefficients of the trend lines are 0.82 and 0.72, respectively. The relation between the average EAR and the area of landslide increment corresponding to the five heavy rainfall events (excluding Morakot) is positive, and the determination coefficient of the trend line is 0.98. Furthermore, the relation between the area increment of secondary landslides and the average EAR or the slope disturbance is positive. Under the same slope disturbance, the greater the EAR, the greater the area increment of secondary landslides. Conversely, under the same EAR, the greater the slope disturbance, the greater the area increment of secondary landslides. The results of this analysis can serve as a reference for the government in subsequent countermeasures for slope sediment disaster-sensitive areas, to reduce the number of casualties and the social cost of post-disaster recovery.

  18. An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.

    PubMed

    Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha

    2017-02-01

    Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM), called evolving type-2 ELM (eT2ELM), is proposed in this paper to cope with these four issues. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of an online certainty-based active learning method, which renders eT2ELM a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as the cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of the Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated on 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, in which the eT2ELM demonstrates the most encouraging results, delivering reliable predictions while sustaining low complexity.

  19. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
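
    A minimal sketch of the strengthening/weakening idea is given below: each rule carries a numeric strength that is nudged up when its advice appears in the correct diagnosis set C and nudged down when it fired but its advice was wrong. The rule representation and the step sizes are illustrative assumptions, not the paper's exact formulation:

      def update_rule_strengths(rule_strengths, advice_by_rule, correct_diagnoses,
                                reward=0.1, penalty=0.1):
          """rule_strengths: rule_id -> strength in [0, 1]; advice_by_rule: rule_id -> advice given."""
          updated = dict(rule_strengths)
          for rule_id, strength in rule_strengths.items():
              if advice_by_rule[rule_id] in correct_diagnoses:
                  updated[rule_id] = min(1.0, strength + reward)   # strengthening
              else:
                  updated[rule_id] = max(0.0, strength - penalty)  # weakening
          return updated

      rules = {"r1": 0.6, "r2": 0.5}
      advice = {"r1": "pump_failure", "r2": "sensor_drift"}
      print(update_rule_strengths(rules, advice, correct_diagnoses={"pump_failure"}))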

  20. 75 FR 30386 - Sunshine Act Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-01

    ... updates on Voting System Testing and Certification programs, including UOCAVA Pilot Program Voting Systems... Voting Systems Threat Analysis. The Board will receive updates on research and studies, including draft...

  1. Simple and Flexible Self-Reproducing Structures in Asynchronous Cellular Automata and Their Dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Lee, Jia; Yang, Rui-Long; Zhu, Qing-Sheng

    2013-03-01

    Self-reproduction on asynchronous cellular automata (ACAs) has attracted wide attention due to the evident artifacts induced by synchronous updating. Asynchronous updating, which allows cells to undergo transitions independently at random times, might be more compatible with the natural processes occurring at micro-scale, but the other side of the coin is an increase in the complexity of an ACA in order to accomplish stable self-reproduction. This paper proposes a novel model of self-timed cellular automata (STCAs), a special type of ACAs, where unsheathed loops are able to duplicate themselves reliably in parallel. The removal of the sheath not only allows various loops with more flexible and compact structures to replicate themselves, but also reduces the number of cell states of the STCA compared with the previous model adopting sheathed loops [Y. Takada, T. Isokawa, F. Peper and N. Matsui, Physica D 227, 26 (2007)]. The lack of a sheath, on the other hand, often tends to cause much more complicated interactions among loops, when all of them struggle independently to stretch out their constructing arms at the same time. In particular, such intense collisions may even cause the emergence of a mess of twisted constructing arms in the cellular space. By using a simple and natural method, our self-reproducing loops (SRLs) are able to retract their arms successively, thereby disentangling from the mess successfully.

  2. Counting motifs in dynamic networks.

    PubMed

    Mukherjee, Kingshuk; Hasan, Md Mahmudul; Boucher, Christina; Kahveci, Tamer

    2018-04-11

    A network motif is a sub-network that occurs frequently in a given network. Detection of such motifs is important since they uncover functions and local properties of the given biological network. Finding motifs is, however, a computationally challenging task as it requires solving the costly subgraph isomorphism problem. Moreover, the topology of biological networks changes over time. These changing networks are called dynamic biological networks. As the network evolves, the frequency of each motif in the network also changes. Computing the frequency of a given motif from scratch in a dynamic network as the network topology evolves is infeasible, particularly for large and fast evolving networks. In this article, we design and develop a scalable method for counting the number of motifs in a dynamic biological network. Our method incrementally updates the frequency of each motif as the underlying network's topology evolves. Our experiments demonstrate that our method can update the frequency of each motif orders of magnitude faster than counting the motif embeddings every time the network changes. If the network evolves more frequently, the margin with which our method outperforms the existing static methods increases. We evaluated our method extensively using synthetic and real datasets, and show that our method is highly accurate (≥96%) and that it can be scaled to large dense networks. The results on real data demonstrate the utility of our method in revealing interesting insights on the evolution of biological processes.
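
    The incremental idea can be illustrated for the simplest motif, the triangle: when an edge (u, v) is inserted or deleted, only triangles through both u and v can change, so the count is adjusted from their common neighbourhood rather than recomputed from scratch. This is a deliberately simplified sketch and does not reproduce the paper's general motif-counting method:

      from collections import defaultdict

      class DynamicTriangleCounter:
          def __init__(self):
              self.adj = defaultdict(set)
              self.triangles = 0

          def add_edge(self, u, v):
              if v in self.adj[u] or u == v:
                  return
              self.triangles += len(self.adj[u] & self.adj[v])  # triangles closed by (u, v)
              self.adj[u].add(v)
              self.adj[v].add(u)

          def remove_edge(self, u, v):
              if v not in self.adj[u]:
                  return
              self.adj[u].discard(v)
              self.adj[v].discard(u)
              self.triangles -= len(self.adj[u] & self.adj[v])  # triangles that relied on (u, v)

      g = DynamicTriangleCounter()
      for e in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]:
          g.add_edge(*e)
      print(g.triangles)  # 1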

  3. Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.

    2018-04-01

    In this paper, a finite element (FE) joint modelling technique for predicting the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. An I cross-section dissimilar flat plate made from two series of aluminium alloy, AA7075 and AA6061, joined by TIG welding is used. In order to find the optimum representation of the TIG-welded dissimilar plate, FE models with three types of joint modelling were considered in this study: bar element (CBAR), beam element, and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. The modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the weld model for the TIG joints because of its accurate prediction of mode shapes and because it contains an updating parameter for weld modelling, in contrast to the other joint models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model was improved significantly.

  4. Mississippi, 2010 forest inventory and analysis factsheet

    Treesearch

    S.N. Oswalt; J. Bentley

    2011-01-01

    This science update provides an overview of forest resources in Mississippi based on an inventory conducted by the U.S. Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station in cooperation with the Mississippi Forestry Commission. This update compares data from the periodic 2006 survey (field dates 2005...

  5. Global foot-and-mouth disease research update and gap analysis: 3 - vaccines

    USDA-ARS?s Scientific Manuscript database

    In 2014, the Global Foot-and-mouth disease Research Alliance (GFRA) conducted a gap analysis of FMD research. In this paper, we report updated findings in the field of FMD vaccine research. This paper consists of the following four sections: 1) Research priorities identified in the 2010 GFRA gap ana...

  6. Global foot-and-mouth disease research update and gap analysis: 5 - biotherapeutics and disinfectants

    USDA-ARS?s Scientific Manuscript database

    In 2014, the Global Foot-and-mouth disease Research Alliance (GFRA) conducted a gap analysis of FMD research. This work has been updated and reported in a series of papers with the focus of this article being (i) biotherapeutics and (ii) disinfectants, including environmental contamination. The paper ...

  7. Global foot-and-mouth disease research update and gap analysis: 4 - diagnostics

    USDA-ARS?s Scientific Manuscript database

    In 2014, the Global Foot-And-Mouth Disease Research Alliance (GFRA) conducted a gap analysis of FMD research. Published as a series of seven papers, in this paper, we report updated findings in the field of diagnostics. The paper consists of the following four sections: 1. Research priorities identi...

  8. Global foot-and-mouth disease research update and gap analysis: 6 - immunology

    USDA-ARS?s Scientific Manuscript database

    In 2014, the Global Foot-and-mouth disease Research Alliance (GFRA) conducted a gap analysis of FMD research. This has been updated with findings reported in a series of papers. Here we present findings for FMD immunology research. The paper consists of the following four sections: 1. Research prior...

  9. Updated analysis of NN elastic scattering to 3 GeV

    NASA Astrophysics Data System (ADS)

    Arndt, R. A.; Briscoe, W. J.; Strakovsky, I. I.; Workman, R. L.

    2007-08-01

    A partial-wave analysis of NN elastic scattering data has been updated to include a number of recent measurements. Experiments carried out at the Cooler Synchrotron (COSY) by the EDDA Collaboration have had a significant impact above 1 GeV. Results are discussed in terms of the partial-wave and direct-reconstruction amplitudes.

  10. Comparison of three annual inventory designs, a periodic design, and a midcycle design

    Treesearch

    Stanford L. Arner

    2000-01-01

    Three annual inventory designs, a periodic design, and a periodic measurement with midcycle update design are compared using a population created from 14,754 remeasured Forest Inventory and Analysis plots. Two of the annual designs and the midcycle update design allow updating of plots using sampling with partial replacement procedures. Individual year and moving...

  11. Background Information Document for Updating AP42 Section 2.4 for Estimating Emissions from Municipal Solid Waste Landfills

    EPA Science Inventory

    This revised draft document was prepared for U.S. EPA's Office of Research and Development, and describes the data analysis undertaken to update the Municipal Solid Waste (MSW) Landfill section of AP-42. This 2008 update includes the addition of data from 62 landfill gas emission...

  12. The use of remote sensing for updating extensive forest inventories

    Treesearch

    John F. Kelly

    1990-01-01

    The Forest Inventory and Analysis unit of the USDA Forest Service Southern Forest Experiment Station (SO-FIA) has the research task of devising an inventory updating system that can be used to provide reliable estimates of forest area, volume, growth, and removals at the State level. These updated inventories must be accomplished within current budgetary restraints....

  13. A Longitudinal Analysis of the Reid List of First Programming Languages

    ERIC Educational Resources Information Center

    Siegfried, Robert M.; Siegfried, Jason P.; Alexandro, Gina

    2016-01-01

    Throughout the 1990s, Richard Reid of Michigan State University maintained a list showing the first programming language used in introductory programming courses taken by computer science and information systems majors; it was updated for several years afterwards with the most recent update done in 2011. This is a follow-up to that last update of…

  14. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
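
    As a generic illustration of measuring an overall scaling exponent (not the specific Joseph, Noah, or Moses estimators developed in the paper), the sketch below fits how the ensemble spread of a simulated process grows with time, std[X(t)] ~ t^H, for ordinary diffusion where H should be close to 1/2:

      import numpy as np

      rng = np.random.default_rng(0)
      n_real, n_steps = 2000, 1024
      increments = rng.normal(size=(n_real, n_steps))   # stationary, uncorrelated increments
      paths = np.cumsum(increments, axis=1)             # ordinary diffusion

      times = np.arange(1, n_steps + 1)
      spread = paths.std(axis=0)                        # ensemble spread at each time
      H, _ = np.polyfit(np.log(times[10:]), np.log(spread[10:]), 1)
      print(f"estimated scaling exponent H = {H:.2f}")  # expected to be near 0.5 here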

  15. Incremental online learning in high dimensions.

    PubMed

    Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan

    2005-12-01

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
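
    A minimal sketch of the incremental, locally weighted flavour of this family of methods is shown below for a single receptive field: each incoming sample is weighted by a Gaussian kernel centred on the field and folded into weighted sufficient statistics, so the local linear model is updated without storing past data. The sketch omits the partial-least-squares projections, multiple receptive fields, and distance-metric adaptation of the full LWPR algorithm:

      import numpy as np

      class LocalLinearModel:
          def __init__(self, centre, width, dim):
              self.centre = np.asarray(centre, dtype=float)
              self.width = width
              self.A = np.eye(dim + 1) * 1e-6   # regularised weighted X^T W X (bias included)
              self.b = np.zeros(dim + 1)        # weighted X^T W y

          def _features(self, x):
              return np.append(x, 1.0)          # linear model with a bias term

          def update(self, x, y):
              w = np.exp(-0.5 * np.sum((x - self.centre) ** 2) / self.width ** 2)
              z = self._features(x)
              self.A += w * np.outer(z, z)
              self.b += w * z * y

          def predict(self, x):
              beta = np.linalg.solve(self.A, self.b)
              return self._features(x) @ beta

      model = LocalLinearModel(centre=[0.0], width=0.5, dim=1)
      for xi in np.linspace(-1, 1, 200):
          model.update(np.array([xi]), np.sin(xi))      # stream of (x, y) samples
      print(model.predict(np.array([0.1])))             # close to sin(0.1) near the centre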

  16. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  17. Red and processed meat consumption and the risk of lung cancer: a dose-response meta-analysis of 33 published studies

    PubMed Central

    Xue, Xiu-Juan; Gao, Qing; Qiao, Jian-Hong; Zhang, Jie; Xu, Cui-Ping; Liu, Ju

    2014-01-01

    This meta-analysis was conducted to summarize the published studies on the association between red/processed meat consumption and the risk of lung cancer. Five databases were systematically reviewed, and a random-effects model was used to pool the study results and to assess dose-response relationships. Six cohort studies and twenty-eight case-control studies were included in this meta-analysis. The pooled Risk Ratios (RR) for total red meat and processed meat were 1.44 (95% CI, 1.29-1.61) and 1.23 (95% CI, 1.10-1.37), respectively. Dose-response analysis revealed that for every increment of 120 grams of red meat per day the risk of lung cancer increases by 35%, and for every increment of 50 grams of processed meat per day the risk of lung cancer increases by 20%. The present dose-response meta-analysis suggested that both red and processed meat consumption have a positive effect on lung cancer risk. PMID:25035778

  18. Preventing Childhood Caries

    PubMed Central

    Albino, J.; Tiwari, T.

    2016-01-01

    The etiology of dental caries reflects a complex interplay of biochemical, microbial, genetic, social and physical environmental, and health-influencing behavioral factors. This review updates the literature on the efficacy of behavioral approaches to caries prevention for children up to 18 y of age. Included were studies of behavioral interventions implemented at individual, family, and community levels that assessed results in terms of reductions in caries increments. Only those reports published since 2011 were considered. Outcomes were variable, although motivational interviewing, which involves individuals in decisions about oral health within the context of their respective life circumstances, proved effective in 3 of 4 reported studies, and more definitive trials are underway. Recommendations for future research include examinations of the cost-effectiveness of interventions, as well as work focused on understanding the mechanisms underlying oral health behavior change and variables that may mediate or moderate responses to interventions. PMID:26438210

  19. [Microbiological verification of a self control plan for a hospital food service].

    PubMed

    Torre, I; Pennino, F; Crispino, M

    2006-01-01

    During the past years, there has been an increase in food-related infectious diseases. In order to avoid microbiological food contamination, adherence to good manufacturing practices is required through control measures of food safety practices. Updated national and European regulations underline the need to apply the HACCP system, overcoming the old concept of sample control on the end product. This work shows the results of microbiological controls made along the whole production chain. Measurements were made using biomolecular techniques (PFGE) in order to assess the management of the microbiological risk in the self-control plan applied to a hospital food service in Naples. The use of PFGE applied to some potentially pathogenic gram-negative micro-organisms highlights the circulation, continued over time, of these micro-organisms within the cooking area. In addition, cross contamination between several sample matrices was detected.

  20. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0- and L1-regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
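
    For context, the sketch below shows a FISTA-style accelerated proximal gradient solver for a plain L1-regularised least-squares problem, the solver family this kind of tracker builds on; the combined L0+L1 model and the tracking-specific machinery of the paper are not reproduced:

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def fista_l1(D, y, lam, n_iter=200):
          """min_x 0.5*||D x - y||^2 + lam*||x||_1 via accelerated proximal gradient."""
          L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
          x = np.zeros(D.shape[1]); z = x.copy(); t = 1.0
          for _ in range(n_iter):
              grad = D.T @ (D @ z - y)
              x_new = soft_threshold(z - grad / L, lam / L)
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + ((t - 1) / t_new) * (x_new - x)
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(1)
      D = rng.normal(size=(50, 100))                    # dictionary of basis templates
      x_true = np.zeros(100); x_true[[3, 40]] = [1.5, -2.0]
      y = D @ x_true + 0.01 * rng.normal(size=50)
      x_hat = fista_l1(D, y, lam=0.1)
      print(np.nonzero(np.abs(x_hat) > 0.1)[0])         # expected to recover indices 3 and 40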

  1. Tropopause sharpening by data assimilation

    NASA Astrophysics Data System (ADS)

    Pilch Kedzierski, R.; Neef, L.; Matthes, K.

    2016-08-01

    Data assimilation was recently suggested to smooth out the sharp gradients that characterize the tropopause inversion layer (TIL) in systems that did not assimilate TIL-resolving observations. We investigate whether this effect is present in the ERA-Interim reanalysis and the European Centre for Medium-Range Weather Forecasts (ECMWF) operational forecast system (which assimilate high-resolution observations) by analyzing the 4D-Var increments and how the TIL is represented in their data assimilation systems. For comparison, we also diagnose the TIL from high-resolution GPS radio occultation temperature profiles from the COSMIC satellite mission, degraded to the same vertical resolution as ERA-Interim and ECMWF operational analyses. Our results show that more recent reanalysis and forecast systems improve the representation of the TIL, updating the earlier hypothesis. However, the TIL in ERA-Interim and ECMWF operational analyses is still weaker and farther away from the tropopause than GPS radio occultation observations of the same vertical resolution.

  2. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  3. A Bookmarking Service for Organizing and Sharing URLs

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.

    1997-01-01

    Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.

  4. A Modified Mean Gray Wolf Optimization Approach for Benchmark and Biomedical Problems.

    PubMed

    Singh, Narinder; Singh, S B

    2017-01-01

    A modified variant of the gray wolf optimization algorithm, namely the mean gray wolf optimization algorithm, has been developed by modifying the position update (encircling behavior) equations of the gray wolf optimization algorithm. The proposed variant has been tested on 23 well-known standard benchmark test functions (unimodal, multimodal, and fixed-dimension multimodal), and the performance of the modified variant has been compared with particle swarm optimization and gray wolf optimization. The proposed algorithm has also been applied to the classification of 5 data sets to check the feasibility of the modified variant. The results obtained are compared with many other meta-heuristic approaches, i.e., gray wolf optimization, particle swarm optimization, population-based incremental learning, ant colony optimization, etc. The results show that the modified variant is able to find the best solutions in terms of a high level of classification accuracy and improved local-optima avoidance.
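
    For reference, the baseline gray wolf position update that such variants modify can be sketched as follows; this shows only the standard encircling/attack equations (A = 2a*r1 - a, C = 2*r2, with the new position averaged over the alpha, beta, and delta leaders) and does not reproduce the paper's "mean" modification:

      import numpy as np

      def gwo_minimise(f, dim, n_wolves=20, n_iter=200, lower=-10.0, upper=10.0, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lower, upper, size=(n_wolves, dim))
          for it in range(n_iter):
              order = np.argsort([f(x) for x in X])
              alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
              a = 2.0 * (1.0 - it / n_iter)               # coefficient decreasing from 2 to 0
              new_X = np.empty_like(X)
              for i, x in enumerate(X):
                  candidates = []
                  for leader in (alpha, beta, delta):
                      A = a * (2 * rng.random(dim) - 1)   # A = 2a*r1 - a
                      C = 2 * rng.random(dim)             # C = 2*r2
                      D = np.abs(C * leader - x)
                      candidates.append(leader - A * D)
                  new_X[i] = np.mean(candidates, axis=0)  # X(t+1) = (X1 + X2 + X3) / 3
              X = np.clip(new_X, lower, upper)
          return min(X, key=f)

      print(gwo_minimise(lambda x: float(np.sum(x ** 2)), dim=3))   # near the origin for a sphere function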

  5. Constructing Temporally Extended Actions through Incremental Community Detection

    PubMed Central

    Li, Ge

    2018-01-01

    Hierarchical reinforcement learning works on temporally extended actions or skills to facilitate learning. How to automatically form such abstraction is challenging, and many efforts tackle this issue in the options framework. While various approaches exist to construct options from different perspectives, few of them concentrate on options' adaptability during learning. This paper presents an algorithm to create options and enhance their quality online. Both aspects operate on detected communities of the learning environment's state transition graph. We first construct options from initial samples as the basis of online learning. Then a rule-based community revision algorithm is proposed to update graph partitions, based on which existing options can be continuously tuned. Experimental results in two problems indicate that options from initial samples may perform poorly in more complex environments, and our presented strategy can effectively improve options and get better results compared with flat reinforcement learning. PMID:29849543

  6. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis to the existing KSC RRA database version 1983. Assessments to the Space Shuttle vehicle ascent profile characteristics were performed by JSC/Ascent Flight Design Division to determine impacts of the updated model to the vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the update model are presented.

  7. In Search of Social Translucence: An Audit Log Analysis of Handoff Documentation Views and Updates.

    PubMed

    Jiang, Silis Y; Hum, R Stanley; Vawdrey, David; Mamykina, Lena

    2015-01-01

    Communication and information sharing are critical parts of teamwork in the hospital; however, achieving open and fluid communication can be challenging. Finding specific patient information within documentation can be difficult. Recent studies on handoff documentation tools show that resident handoff notes are increasingly used as an alternative information source by non-physician clinicians. Previous findings also show that residents have become aware of this unintended use. This study investigated the alignment of resident note updating patterns and team note viewing patterns based on usage log data of handoff notes. Qualitative interviews with clinicians were used to triangulate findings based on the log analysis. The study found that notes that were frequently updated were viewed significantly more frequently than notes updated less often (p < 2.2 × 10^-16). Almost 44% of all notes had aligned frequency of views and updates. The considerable percentage (56%) of mismatched note utilization suggests an opportunity for improvement.

  8. A triangular thin shell finite element: Nonlinear analysis. [structural analysis

    NASA Technical Reports Server (NTRS)

    Thomas, G. R.; Gallagher, R. H.

    1975-01-01

    Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
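
    The incremental-iterative idea described above can be sketched on a single-degree-of-freedom nonlinear "spring": the load is applied in increments via the tangent stiffness, followed by one Newton-Raphson correction per increment. The spring law below is a hypothetical example and has nothing to do with the shell element formulation of the paper:

      def internal_force(u):            # hypothetical hardening spring: f = k1*u + k3*u**3
          return 100.0 * u + 50.0 * u ** 3

      def tangent_stiffness(u):
          return 100.0 + 150.0 * u ** 2

      def solve_incremental(total_load, n_increments=20):
          u, load = 0.0, 0.0
          dP = total_load / n_increments
          for _ in range(n_increments):
              load += dP
              u += dP / tangent_stiffness(u)             # tangent-stiffness (predictor) step
              residual = load - internal_force(u)
              u += residual / tangent_stiffness(u)       # one Newton-Raphson correction
          return u

      print(solve_incremental(total_load=500.0))         # displacement under the final load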

  9. Forest sector and primary forest products industry contributions to the economies of the southern states: 2011 update

    Treesearch

    Consuelo Brandeis; Donald G. Hodges

    2015-01-01

    The analysis in this article provides an update on the southern forest sector economic activity after the downturn experienced in 2008–2009. The analysis was conducted using Impact Analysis for Planning (IMPLAN) software and data sets for 2009 and 2011 and results from the USDA Forest Service Timber Products Output latest survey of primary wood processing mills....

  10. agriGO v2.0: a GO analysis toolkit for the agricultural community, 2017 update

    PubMed Central

    Tian, Tian; Liu, Yue; Yan, Hengyu; You, Qi; Yi, Xin; Du, Zhou

    2017-01-01

    Abstract The agriGO platform, which has been serving the scientific community for >10 years, specifically focuses on gene ontology (GO) enrichment analyses of plant and agricultural species. We continuously maintain and update the databases and accommodate the various requests of our global users. Here, we present our updated agriGO that has a largely expanded number of supporting species (394) and datatypes (865). In addition, a larger number of species have been classified into groups covering crops, vegetables, fish, birds and insects closely related to the agricultural community. We further improved the computational efficiency, including the batch analysis and P-value distribution (PVD), and the user-friendliness of the web pages. More visualization features were added to the platform, including SEACOMPARE (cross comparison of singular enrichment analysis), direct acyclic graph (DAG) and Scatter Plots, which can be merged by choosing any significant GO term. The updated platform agriGO v2.0 is now publicly accessible at http://systemsbiology.cau.edu.cn/agriGOv2/. PMID:28472432

  11. Clinical effectiveness and cost-effectiveness of beta-interferon and glatiramer acetate for treating multiple sclerosis: systematic review and economic evaluation.

    PubMed

    Melendez-Torres, G J; Auguste, Peter; Armoiry, Xavier; Maheswaran, Hendramoorthy; Court, Rachel; Madan, Jason; Kan, Alan; Lin, Stephanie; Counsell, Carl; Patterson, Jacoby; Rodrigues, Jeremy; Ciccarelli, Olga; Fraser, Hannah; Clarke, Aileen

    2017-09-01

    At the time of publication of the most recent National Institute for Health and Care Excellence (NICE) guidance [technology appraisal (TA) 32] in 2002 on beta-interferon (IFN-β) and glatiramer acetate (GA) for multiple sclerosis, there was insufficient evidence of their clinical effectiveness and cost-effectiveness. To undertake (1) systematic reviews of the clinical effectiveness and cost-effectiveness of IFN-β and GA in relapsing-remitting multiple sclerosis (RRMS), secondary progressive multiple sclerosis (SPMS) and clinically isolated syndrome (CIS) compared with best supportive care (BSC) and each other, investigating annualised relapse rate (ARR) and time to disability progression confirmed at 3 months and 6 months and (2) cost-effectiveness assessments of disease-modifying therapies (DMTs) for CIS and RRMS compared with BSC and each other. Searches were undertaken in January and February 2016 in databases including The Cochrane Library, MEDLINE and the Science Citation Index. We limited some database searches to specific start dates based on previous, relevant systematic reviews. Two reviewers screened titles and abstracts with recourse to a third when needed. The Cochrane tool and the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) and Philips checklists were used for appraisal. Narrative synthesis and, when possible, random-effects meta-analysis and network meta-analysis (NMA) were performed. Cost-effectiveness analysis used published literature, findings from the Department of Health's risk-sharing scheme (RSS) and expert opinion. A de novo economic model was built for CIS. The base case used updated RSS data, a NHS and Personal Social Services perspective, a 50-year time horizon, 2014/15 prices and a discount rate of 3.5%. Outcomes are reported as incremental cost-effectiveness ratios (ICERs). We undertook probabilistic sensitivity analysis. In total, 6420 publications were identified, of which 63 relating to 35 randomised controlled trials (RCTs) were included. In total, 86% had a high risk of bias. There was very little difference between drugs in reducing moderate or severe relapse rates in RRMS. All were beneficial compared with BSC, giving a pooled rate ratio of 0.65 [95% confidence interval (CI) 0.56 to 0.76] for ARR and a hazard ratio of 0.70 (95% CI, 0.55 to 0.87) for time to disability progression confirmed at 3 months. NMA suggested that 20 mg of GA given subcutaneously had the highest probability of being the best at reducing ARR. Three separate cost-effectiveness searches identified > 2500 publications, with 26 included studies informing the narrative synthesis and model inputs. In the base case using a modified RSS the mean incremental cost was £31,900 for pooled DMTs compared with BSC and the mean incremental quality-adjusted life-years (QALYs) were 0.943, giving an ICER of £33,800 per QALY gained for people with RRMS. In probabilistic sensitivity analysis the ICER was £34,000 per QALY gained. In sensitivity analysis, using the assessment group inputs gave an ICER of £12,800 per QALY gained for pooled DMTs compared with BSC. Pegylated IFN-β-1 (125 µg) was the most cost-effective option of the individual DMTs compared with BSC (ICER £7000 per QALY gained); GA (20 mg) was the most cost-effective treatment for CIS (ICER £16,500 per QALY gained). 
Although we built a de novo model for CIS that incorporated evidence from our systematic review of clinical effectiveness, our findings relied on a population diagnosed with CIS before implementation of the revised 2010 McDonald criteria. DMTs were clinically effective for RRMS and CIS but cost-effective only for CIS. Both RCT evidence and RSS data are at high risk of bias. Research priorities include comparative studies with longer follow-up and systematic review and meta-synthesis of qualitative studies. This study is registered as PROSPERO CRD42016043278. The National Institute for Health Research Health Technology Assessment programme.

  12. Serological testing versus other strategies for diagnosis of active tuberculosis in India: a cost-effectiveness analysis.

    PubMed

    Dowdy, David W; Steingart, Karen R; Pai, Madhukar

    2011-08-01

    Undiagnosed and misdiagnosed tuberculosis (TB) drives the epidemic in India. Serological (antibody detection) TB tests are not recommended by any agency, but widely used in many countries, including the Indian private sector. The cost and impact of using serology compared with other diagnostic techniques is unknown. Taking a patient cohort conservatively equal to the annual number of serological tests done in India (1.5 million adults suspected of having active TB), we used decision analysis to estimate costs and effectiveness of sputum smear microscopy (US$3.62 for two smears), microscopy plus automated liquid culture (mycobacterium growth indicator tube [MGIT], US$20/test), and serological testing (anda-tb ELISA, US$20/test). Data on test accuracy and costs were obtained from published literature. We adopted the perspective of the Indian TB control sector and an analysis frame of 1 year. Our primary outcome was the incremental cost per disability-adjusted life year (DALY) averted. We performed one-way sensitivity analysis on all model parameters, with multiway sensitivity analysis on variables to which the model was most sensitive. If used instead of sputum microscopy, serology generated an estimated 14,000 more TB diagnoses, but also 121,000 more false-positive diagnoses, 102,000 fewer DALYs averted, and 32,000 more secondary TB cases than microscopy, at approximately four times the incremental cost (US$47.5 million versus US$11.9 million). When added to high-quality sputum smears, MGIT culture was estimated to avert 130,000 incremental DALYs at an incremental cost of US$213 per DALY averted. Serology was dominated by (i.e., more costly and less effective than) MGIT culture and remained less economically favorable than sputum smear or TB culture in one-way and multiway sensitivity analyses. In India, sputum smear microscopy remains the most cost-effective diagnostic test available for active TB; efforts to increase access to quality-assured microscopy should take priority. In areas where high-quality microscopy exists and resources are sufficient, MGIT culture is more cost-effective than serology as an additional diagnostic test for TB. These data informed a recently published World Health Organization policy statement against serological tests.

  13. Machine-learning-based calving prediction from activity, lying, and ruminating behaviors in dairy cattle.

    PubMed

    Borchers, M R; Chang, Y M; Proudfoot, K L; Wadsworth, B A; Stone, A E; Bewley, J M

    2017-07-01

    The objective of this study was to use automated activity, lying, and rumination monitors to characterize prepartum behavior and predict calving in dairy cattle. Data were collected from 20 primiparous and 33 multiparous Holstein dairy cattle from September 2011 to May 2013 at the University of Kentucky Coldstream Dairy. The HR Tag (SCR Engineers Ltd., Netanya, Israel) automatically collected neck activity and rumination data in 2-h increments. The IceQube (IceRobotics Ltd., South Queensferry, United Kingdom) automatically collected number of steps, lying time, standing time, number of transitions from standing to lying (lying bouts), and total motion, summed in 15-min increments. IceQube data were summed in 2-h increments to match HR Tag data. All behavioral data were collected for 14 d before the predicted calving date. Retrospective data analysis was performed using mixed linear models to examine behavioral changes by day in the 14 d before calving. Bihourly behavioral differences from baseline values over the 14 d before calving were also evaluated using mixed linear models. Changes in daily rumination time, total motion, lying time, and lying bouts occurred in the 14 d before calving. In the bihourly analysis, extreme values for all behaviors occurred in the final 24 h, indicating that the monitored behaviors may be useful in calving prediction. To determine whether technologies were useful at predicting calving, random forest, linear discriminant analysis, and neural network machine-learning techniques were constructed and implemented using R version 3.1.0 (R Foundation for Statistical Computing, Vienna, Austria). These methods were used on variables from each technology and all combined variables from both technologies. A neural network analysis that combined variables from both technologies at the daily level yielded 100.0% sensitivity and 86.8% specificity. A neural network analysis that combined variables from both technologies in bihourly increments was used to identify 2-h periods in the 8 h before calving with 82.8% sensitivity and 80.4% specificity. Changes in behavior and machine-learning alerts indicate that commercially marketed behavioral monitors may have calving prediction potential. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    NASA Astrophysics Data System (ADS)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which can correctly predict both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
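
    A minimal sketch of the kind of weighted objective function described above, combining a relative natural-frequency residual with a strain modal assurance criterion (MAC) residual; the exact weighting, normalization, and residual definitions used in the paper may differ, and the optimizer (left to the caller here) would be the hybrid genetic/pattern-search algorithm adjusting the uncertain model parameters until this objective is minimized.

    ```python
    import numpy as np

    def strain_mac(phi_exp, phi_ana):
        """Modal assurance criterion between experimental and analytical strain mode shapes."""
        num = np.abs(phi_exp @ phi_ana) ** 2
        return num / ((phi_exp @ phi_exp) * (phi_ana @ phi_ana))

    def updating_objective(freq_exp, freq_ana, modes_exp, modes_ana, w_freq=0.5, w_mac=0.5):
        """Weighted sum of relative frequency residuals and (1 - strain MAC) residuals."""
        freq_res = np.sum(((np.asarray(freq_ana) - np.asarray(freq_exp)) / np.asarray(freq_exp)) ** 2)
        mac_res = sum(1.0 - strain_mac(pe, pa) for pe, pa in zip(modes_exp, modes_ana))
        return w_freq * freq_res + w_mac * mac_res
    ```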

  15. Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications

    PubMed Central

    Pérez-Torres, Rafael; Torres-Huitzil, César; Galeana-Zapién, Hiram

    2016-01-01

    The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. Such a Mobile Cloud Computing approach has been successfully employed to extend the smartphone's battery lifetime by trading on-device computation for communication with the cloud, under the assumption that on-device stay points detection is prohibitive. In this article, we propose and validate the feasibility of having an alternative event-driven mechanism for stay points detection that is executed fully on-device, and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, where a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution. PMID:27754388
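
    As a rough, self-contained illustration of stay-point detection (not the event-driven middleware described in the paper), the sketch below applies the common distance/duration thresholding to a batch of GPS fixes; the 200 m and 20 min thresholds and the data layout are assumptions.

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(p, q):
        """Great-circle distance in metres between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6_371_000 * 2 * asin(sqrt(a))

    def stay_points(fixes, dist_m=200, min_dur_s=20 * 60):
        """fixes: list of (timestamp_s, lat, lon). Yields ((lat, lon), arrival, departure)."""
        i = 0
        while i < len(fixes):
            j = i + 1
            while j < len(fixes) and haversine_m(fixes[i][1:], fixes[j][1:]) <= dist_m:
                j += 1
            if fixes[j - 1][0] - fixes[i][0] >= min_dur_s:
                pts = fixes[i:j]
                lat = sum(p[1] for p in pts) / len(pts)
                lon = sum(p[2] for p in pts) / len(pts)
                yield (lat, lon), fixes[i][0], fixes[j - 1][0]
                i = j
            else:
                i += 1
    ```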

  16. Oral administration of kefiran exerts a bifidogenic effect on BALB/c mice intestinal microbiota.

    PubMed

    Hamet, M F; Medrano, M; Pérez, P F; Abraham, A G

    2016-01-01

    The activity of kefiran, the exopolysaccharide present in kefir grains, was evaluated on intestinal bacterial populations in BALB/c mice. Animals were orally administered kefiran, and Eubacteria, lactobacilli and bifidobacteria populations were monitored in faeces of mice at days 0, 2, 7, 14 and 21. Profiles obtained by Denaturing Gradient Gel Electrophoresis (DGGE) with primers for Eubacteria were compared by principal component analysis, and clearly defined clusters, correlating with the time of kefiran consumption, were obtained. Furthermore, profile analysis of PCR products amplified with specific oligonucleotides for bifidobacteria showed an increment in the number of DGGE bands in the groups administered with kefiran. Fluorescent In Situ Hybridisation (FISH) with specific probes for bifidobacteria showed an increment of this population in faeces, in accordance with the DGGE results. The bifidobacteria population was also studied in distal colon content after 0, 2 and 7 days of kefiran administration. Analysis of PCR products by DGGE with Eubacteria primers showed an increment in the number and intensity of bands with high GC content in mice administered with kefiran. Sequencing of DGGE bands confirmed that bifidobacteria were one of the bacterial populations modified by kefiran administration. DGGE profiles of PCR amplicons obtained by using Bifidobacterium or Lactobacillus specific primers confirmed that kefiran administration enhances bifidobacteria; however, no changes were observed in Lactobacillus populations. The results of the analysis of bifidobacteria populations assessed at different sampling sites in a murine model support the use of this exopolysaccharide as a bifidogenic functional ingredient.

  17. Risk factors for mortality in patients undergoing hemodialysis: A systematic review and meta-analysis.

    PubMed

    Ma, Lijie; Zhao, Sumei

    2017-07-01

    No consensus exists regarding the factors influencing mortality in patients undergoing hemodialysis (HD). This meta-analysis aimed to evaluate the impact of various patient characteristics on the risk of mortality in such patients. PubMed, Embase, and Cochrane Central were searched for studies evaluating the risk factors for mortality in patients undergoing HD. The factors included age, gender, diabetes mellitus (DM), body mass index (BMI), previous cardiovascular disease (CVD), HD duration, hemoglobin, albumin, white blood cell, C-reactive protein (CRP), parathyroid hormone, total iron binding capacity (TIBC), iron, ln ferritin, adiponectin, apolipoprotein A1 (ApoA1), ApoA2, ApoA3, high-density lipoprotein (HDL), total cholesterol, hemoglobin A1c (HbA1c), serum phosphate, troponin T (TnT), and B-type natriuretic peptide (BNP). Relative risks with 95% confidence intervals were derived. Data were synthesized using the random-effects model. Age (per 1-year increment), DM, previous CVD, CRP (higher versus lower), ln ferritin, adiponectin (per 10.0 μg/mL increment), HbA1c (higher versus lower), TnT, and BNP were associated with an increased risk of all-cause mortality. BMI (per 1 kg/m2 increment), hemoglobin (per 1 g/dL increment), albumin (higher versus lower), TIBC, iron, ApoA2, and ApoA3 were associated with reduced risk of all-cause mortality. Age (per 1-year increment), gender (women versus men), DM, previous CVD, HD duration, ln ferritin, HDL, and HbA1c (higher versus lower) significantly increased the risk of cardiac death. Albumin (higher versus lower), TIBC, and ApoA2 had a beneficial impact on the risk of cardiac death. Multiple markers and factors influence the risk of mortality and cardiac death in patients undergoing HD. Copyright © 2017 Elsevier B.V. All rights reserved.
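
    For readers unfamiliar with the random-effects synthesis used above, here is a minimal DerSimonian-Laird pooling sketch for log relative risks; the three "studies" below are placeholders, not data from this review.

    ```python
    import numpy as np

    def dersimonian_laird(log_rr, var):
        """Pool log relative risks under a random-effects model (DerSimonian-Laird).
        Returns the pooled RR with its 95% confidence interval."""
        log_rr, var = np.asarray(log_rr), np.asarray(var)
        w = 1.0 / var                                   # fixed-effect weights
        mu_fe = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - mu_fe) ** 2)           # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)    # between-study variance
        w_re = 1.0 / (var + tau2)
        mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return np.exp(mu_re), np.exp(mu_re - 1.96 * se), np.exp(mu_re + 1.96 * se)

    # Placeholder studies: pooled RR and 95% CI
    print(dersimonian_laird([0.10, 0.25, 0.05], [0.02, 0.03, 0.015]))
    ```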

  18. Cross-correlation of instantaneous phase increments in pressure-flow fluctuations: Applications to cerebral autoregulation

    NASA Astrophysics Data System (ADS)

    Chen, Zhi; Hu, Kun; Stanley, H. Eugene; Novak, Vera; Ivanov, Plamen Ch.

    2006-03-01

    We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasisteady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady in response to changes in peripheral BP.
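
    A minimal sketch of the phase-increment cross-correlation recipe outlined above, using SciPy's Hilbert transform on synthetic oscillatory signals; the windowing, resampling to heartbeat intervals, and the surrogate data are simplifications and assumptions, not the authors' processing chain.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_increments(x):
        """Instantaneous phase increments of a signal via the analytic signal."""
        phase = np.unwrap(np.angle(hilbert(x - np.mean(x))))
        return np.diff(phase)

    def xcorr(a, b, max_lag):
        """Normalized cross-correlation of two equal-length series for lags 0..max_lag."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        n = len(a)
        return [float(np.dot(a[:n - k], b[k:]) / (n - k)) for k in range(max_lag + 1)]

    t = np.linspace(0, 60, 6000)                       # synthetic ~1 Hz "beat" signals
    bp = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
    bfv = np.sin(2 * np.pi * 1.0 * t + 0.3) + 0.1 * np.random.randn(t.size)
    print(xcorr(phase_increments(bp), phase_increments(bfv), max_lag=5)[:3])
    ```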

  19. Cross-correlation of instantaneous phase increments in pressure-flow fluctuations: applications to cerebral autoregulation.

    PubMed

    Chen, Zhi; Hu, Kun; Stanley, H Eugene; Novak, Vera; Ivanov, Plamen Ch

    2006-03-01

    We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasisteady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady in response to changes in peripheral BP.

  20. International Space Station Increment-2 Microgravity Environment Summary Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy

    2002-01-01

    This summary report presents the results of some of the processed acceleration data, collected aboard the International Space Station during the period of May to August 2001, the Increment-2 phase of the station. Two accelerometer systems were used to measure the acceleration levels during activities that took place during the Increment-2 segment. However, not all of the activities were analyzed for this report due to time constraints, lack of precise information regarding some payload operations and other station activities. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments, which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System unit was flown to the station in support of science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit was flown to support experiments requiring vibratory acceleration measurement. Both acceleration systems are also used in support of vehicle microgravity requirements verification. The International Space Station Increment-2 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems: 1) The Microgravity Acceleration Measurement System, which consists of two sensors: the Orbital Acceleration Research Experiment Sensor Subsystem, a low frequency range sensor (up to 1 Hz), is used to characterize the quasi-steady environment for payloads and the vehicle, and the High Resolution Accelerometer Package, which is used to characterize the vibratory environment up to 100 Hz. 2) The Space Acceleration Measurement System, which is a high frequency sensor, measures vibratory acceleration data in the range of 0.01 to 300 Hz. This summary report presents analysis of some selected quasi-steady and vibratory activities measured by these accelerometers during Increment-2 from May to August 20, 2001.

  1. Automated Image Analysis Corrosion Working Group Update: February 1, 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These slides summarize the automated image analysis corrosion working group update. The overall goals were to automate the detection and quantification of features in images (faster, more accurate), to establish how to do this (obtain data, analyze data), and to focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).

  2. 78 FR 47336 - Privacy Act of 1974; Computer Matching Program Between the Department of Housing and Urban...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-05

    ... provides an updated cost/benefit analysis providing an assessment of the benefits attained by HUD through... the scope of the existing computer matching program to now include the updated cost/ benefit analysis... change, and find a continued favorable examination of benefit/cost results; and (2) All parties certify...

  3. The Recreational Fee Demonstration Program on the national forests: an updated analysis of public attitudes and beliefs, 1996-2001.

    Treesearch

    David N. Bengston; David P. Fan

    2002-01-01

    Analyzes trends in favorable and unfavorable attitudes toward the Recreational Fee Demonstration Program (RFDP) in the national forests, updating an earlier study using computer content analysis of the public debate. About 65 percent of the attitudes toward the RFDP were favorable, comparable to the findings of survey research.

  4. Dell Hymes and the New Language Policy Studies: Update from an Underdeveloped Country

    ERIC Educational Resources Information Center

    McCarty, Teresa L.; Collins, James; Hopson, Rodney K.

    2011-01-01

    This essay updates Dell Hymes's "Report from an Underdeveloped Country" (the USA), positioning our analysis in the New Language Policy Studies. Taking up Hymes's call for comparative, critical studies of language use, we examine three cases, organizing our analysis around Hymes's questions: What "counts" as a language, a language problem, and…

  5. Louisiana, 2010 forest inventory and analysis factsheet

    Treesearch

    Sonja N. Oswalt; Tony G. Johnson

    2012-01-01

    This science update provides an overview of forest resources in Louisiana based on an inventory conducted by the U.S. Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station. This update compares data from the 2005 periodic and 2010 annualized data. The 2010 annualized data consists of 70 percent data...

  6. Global foot-and-mouth disease research update and gap analysis: 7 - pathogenesis and molecular biology

    USDA-ARS?s Scientific Manuscript database

    In 2014, the GFRA (Global Foot-and-mouth disease Research Alliance) conducted a gap analysis of FMD (Foot-and-Mouth Disease) research. This work has been updated and reported in a series of papers, in this article we report findings in the fields of 1) pathogenesis and 2) molecular biology. The arti...

  7. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    PubMed

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known if fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of a maximal incremental test. However, while the untrained group showed an acute Tosm rise (increment of 12.33 mmol/L), the trained group maintained a stable Tosm after physical effort (increment of 1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Response of a Circular Tunnel Through Rock to a Harmonic Rayleigh Wave

    NASA Astrophysics Data System (ADS)

    Kung, Chien-Lun; Wang, Tai-Tien; Chen, Cheng-Hsun; Huang, Tsan-Hwei

    2018-02-01

    A factor that combines tunnel depth and incident wavelength has been numerically determined to dominate the seismic responses of a tunnel in rocks that are subjected to harmonic P- and S-waves. This study applies the dynamic finite element method to investigate the seismic response of shallow overburden tunnels. Seismically induced stress increments in the lining of a circular tunnel that is subjected to an incident harmonic R-wave are examined. The incident R-wave is specified using the dominant frequency of the acceleration history of the 1999 Chi-Chi earthquake, measured near a site where two case tunnels at particularly shallow depth were damaged. An analysis reveals that the normalized seismically induced axial, shear and flexural stress increments in the lining of a tunnel reach their respective peaks at the depth ratio h/λ = 0.15, where the ground motion that is generated by the incident R-wave has its maximum. The tunnel radius has a stronger effect on seismically induced stress increments than does tunnel depth. A greater tunnel radius yields higher normalized seismically induced axial stress increments and lower normalized seismically induced shear and flexural stress increments. The inertia of the thin overburden layer above the tunnel impedes the propagation of the wave and affects the motion of the ground around the tunnel. With an extremely shallow overburden, such an effect can change the envelope of the normalized seismically induced stress increments from one with a symmetric four-petal pattern into one with a non-symmetric three-petal pattern. The simulated results may partially elucidate the spatial distributions of cracks that were observed in the lining of the case tunnels.

  9. Experimental and numerical study on optimization of the single point incremental forming of AINSI 304L stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Saidi, B.; Giraud-Moreau, L.; Cherouat, A.; Nasri, R.

    2017-09-01

    AISI 304L stainless steel sheets are commonly formed into a variety of shapes for applications in the industrial, architectural, transportation and automobile fields; the material is also used for the manufacturing of denture bases. In the field of dentistry, there is a need for personalized devices that are custom made for the patient. The single point incremental forming process is highly promising in this area for the manufacturing of denture bases. The single point incremental forming process (ISF) is an emerging process based on the use of a spherical tool, which is moved along a CNC-controlled tool path. One of the major advantages of this process is the ability to program several punch trajectories on the same machine in order to obtain different shapes. Several applications of this process exist in the medical field for the manufacturing of personalized titanium prostheses (cranial plate, knee prosthesis...) due to the need to customize the product to each patient. The objective of this paper is to study the incremental forming of AISI 304L stainless steel sheets for future applications in the dentistry field. During the incremental forming process, considerable forces can occur. The control of the forming force is particularly important to ensure the safe use of the CNC milling machine and to preserve the tooling and machinery. In this paper, the effect of four different process parameters on the maximum force is studied. The proposed approach consists in using an experimental design based on experimental results. An analysis of variance (ANOVA) was conducted to find the input parameters that minimize the maximum forming force. A numerical simulation of the incremental forming process is performed with the optimal input process parameters. Numerical results are compared with the experimental ones.

  10. Regulatory Impact Analysis (RIA) for the Proposed Revisions to the Sulfur Dioxide National Ambient Air Quality Standards (NAAQS)

    EPA Pesticide Factsheets

    This Regulatory Impact Analysis (RIA) provides estimates of the incremental costs and monetized human health benefits of attaining a revised short‐term Sulfur Dioxide (SO2) NAAQS within the current monitoring network.

  11. Objective analysis of observational data from the FGGE observing systems

    NASA Technical Reports Server (NTRS)

    Baker, W.; Edelmann, D.; Iredell, M.; Han, D.; Jakkempudi, S.

    1981-01-01

    An objective analysis procedure for updating the GLAS second and fourth order general atmospheric circulation models using observational data from the first GARP global experiment is described. The objective analysis procedure is based on a successive corrections method and the model is updated in a data assimilation cycle. Preparation of the observational data for analysis and the objective analysis scheme are described. The organization of the program and description of the required data sets are presented. The program logic and detailed descriptions of each subroutine are given.

  12. Space transfer concepts and analysis for exploration missions

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The progress and results are summarized for mission/system requirements database; mission analysis; GN and C (Guidance, Navigation, and Control), aeroheating, Mars landing; radiation protection; aerobrake mass analysis; Shuttle-Z, TMIS (Trans-Mars Injection Stage); Long Duration Habitat Trade Study; evolutionary lunar and Mars options; NTR (Nuclear Thermal Rocket); NEP (Nuclear Electric Propulsion) update; SEP (Solar Electric Propulsion) update; orbital and space-based requirements; technology; piloted rover; programmatic task; and evolutionary and innovative architecture.

  13. Global Deployment Analysis System Algorithm Description (With Updates)

    DTIC Science & Technology

    1998-09-01

    Global Deployment Analysis System Algorithm Description (with Updates). By Noetics, Inc., for U.S. Army Concepts Analysis Agency Contract... This Algorithm Description for the Global Deployment Analysis System (GDAS) was prepared by Noetics ... support for Paradox Runtime will be provided by the GDAS developers, CAA and Noetics Inc., and not by Borland International. GDAS for Windows has

  14. Spatial-Temporal Analysis of Openstreetmap Data after Natural Disasters: a Case Study of Haiti Under Hurricane Matthew

    NASA Astrophysics Data System (ADS)

    Xu, J.; Li, L.; Zhou, Q.

    2017-09-01

    Volunteered geographic information (VGI) has been widely adopted as an alternative to authoritative geographic information in disaster management, considering its up-to-date data. OpenStreetMap, in particular, is now aiming at crisis mapping for humanitarian purposes. This paper illustrates that the natural disaster played an essential role in updating OpenStreetMap data after Haiti was hit by Hurricane Matthew in October 2016. A spatial-temporal analysis of updated OSM data was conducted in this paper. The correlation of features was also studied to figure out whether the updates were coincidental or the result of the hurricane. The spatial pattern matched the damaged areas and the temporal changes fitted the time when the disaster occurred. High correlation values between features were recorded when the hurricane occurred, suggesting that the updates in the data were led by the hurricane.

  15. Cost-effectiveness analysis of two treatment strategies for chronic hepatitis C before and after access to direct-acting antivirals in Spain.

    PubMed

    Turnes, Juan; Domínguez-Hernández, Raquel; Casado, Miguel Ángel

    To evaluate the cost-effectiveness of a strategy based on direct-acting antivirals (DAAs) following the marketing of simeprevir and sofosbuvir (post-DAA) versus a pre-direct-acting antiviral strategy (pre-DAA) in patients with chronic hepatitis C, from the perspective of the Spanish National Health System. A decision tree combined with a Markov model was used to estimate the direct health costs (€, 2016) and health outcomes (quality-adjusted life years, QALYs) throughout the patient's life, with an annual discount rate of 3%. The sustained virological response, percentage of patients treated or not treated in each strategy, clinical characteristics of the patients, annual likelihood of transition, costs of treating and managing the disease, and utilities were obtained from the literature. The cost-effectiveness analysis was expressed as an incremental cost-effectiveness ratio (incremental cost per QALY gained). A deterministic sensitivity analysis and a probabilistic sensitivity analysis were performed. The post-DAA strategy showed higher health costs per patient (€30,944 vs. €23,707) than the pre-DAA strategy. However, it was associated with an increase of QALYs gained (15.79 vs. 12.83), showing an incremental cost-effectiveness ratio of €2,439 per QALY. The deterministic sensitivity analysis and the probabilistic sensitivity analysis showed the robustness of the results, with the post-DAA strategy being cost-effective in 99% of cases compared to the pre-DAA strategy. Compared to the pre-DAA strategy, the post-DAA strategy is efficient for the treatment of chronic hepatitis C in Spain, resulting in a much lower cost per QALY than the efficiency threshold used in Spain (€30,000 per QALY). Copyright © 2017 Elsevier España, S.L.U., AEEH y AEG. All rights reserved.
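
    A minimal sketch of a three-state Markov cohort evaluation of the kind described above (annual cycles, 3% discounting, QALYs from state utilities); all transition probabilities, costs, and utilities below are placeholders, not the Spanish model's inputs.

    ```python
    import numpy as np

    def markov_cohort(trans, cost, utility, years=40, discount=0.03):
        """Cohort simulation over annual cycles.
        Returns (total discounted cost, total discounted QALYs) per patient."""
        state = np.array([1.0, 0.0, 0.0])           # [responding, progressed, dead]
        total_cost = total_qaly = 0.0
        for year in range(years):
            d = 1.0 / (1.0 + discount) ** year
            total_cost += d * float(state @ cost)
            total_qaly += d * float(state @ utility)
            state = state @ trans                    # advance the cohort one cycle
        return total_cost, total_qaly

    # Placeholder strategies: post-DAA (higher response, higher drug cost) vs pre-DAA.
    trans_post = np.array([[0.98, 0.01, 0.01], [0.0, 0.88, 0.12], [0.0, 0.0, 1.0]])
    trans_pre = np.array([[0.93, 0.05, 0.02], [0.0, 0.88, 0.12], [0.0, 0.0, 1.0]])
    c_post, q_post = markov_cohort(trans_post, np.array([900.0, 7000.0, 0.0]), np.array([0.85, 0.60, 0.0]))
    c_pre, q_pre = markov_cohort(trans_pre, np.array([400.0, 9000.0, 0.0]), np.array([0.80, 0.55, 0.0]))
    print("ICER (EUR per QALY):", round((c_post - c_pre) / (q_post - q_pre)))
    ```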

  16. Performance analysis of ‘Perturb and Observe’ and ‘Incremental Conductance’ MPPT algorithms for PV system

    NASA Astrophysics Data System (ADS)

    Lodhi, Ehtisham; Lodhi, Zeeshan; Noman Shafqat, Rana; Chen, Fieda

    2017-07-01

    Photovoltaic (PV) systems usually employ maximum power point tracking (MPPT) techniques to increase their efficiency. The performance of a PV system can be improved by operating it at its maximum power point, so that maximal power is delivered to the load. The efficiency of a PV system usually depends upon irradiance, temperature and array architecture. The PV array shows a non-linear V-I curve, and the maximum power point on the V-P curve also varies with changing environmental conditions. MPPT methods ensure that a PV module is regulated at the reference voltage and that the maximum output power is fully used. This paper presents an analysis of two widely employed MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance (INC). Their performance is evaluated and compared through theoretical analysis and digital simulation on the basis of response time and efficiency under varying irradiance and temperature conditions using Matlab/Simulink.
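
    A minimal sketch of the Perturb and Observe (P&O) hill-climbing logic compared in this paper; the fixed step size, the sampling loop, and the toy current-voltage curve are assumptions. The Incremental Conductance (INC) method differs mainly in using the condition dI/dV = -I/V at the maximum power point instead of comparing successive power samples.

    ```python
    def perturb_and_observe(read_vi, v0=12.0, step=0.5, iterations=200):
        """Classic P&O MPPT: perturb the operating voltage and keep the direction
        that increased the extracted power, otherwise reverse it."""
        v_ref = v0
        v, i = read_vi(v_ref)
        p_prev = v * i
        direction = +1
        for _ in range(iterations):
            v_ref += direction * step
            v, i = read_vi(v_ref)
            p = v * i
            if p < p_prev:
                direction = -direction    # power fell: perturb the other way
            p_prev = p
        return v_ref

    def toy_panel(v_ref):
        """Illustrative I-V curve: current collapses near an open-circuit voltage of ~21 V."""
        i = max(0.0, 5.0 * (1.0 - (v_ref / 21.0) ** 10))
        return v_ref, i

    # The returned reference voltage oscillates around the maximum power point (~16.5 V here).
    print(perturb_and_observe(toy_panel))
    ```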

  17. On nonstationarity and antipersistency in global temperature series

    NASA Astrophysics Data System (ADS)

    Kärner, O.

    2002-10-01

    Statistical analysis is carried out for satellite-based global daily tropospheric and stratospheric temperature anomaly and solar irradiance data sets. Behavior of the series appears to be nonstationary with stationary daily increments. Estimating long-range dependence between the increments reveals a remarkable difference between the two temperature series. Global average tropospheric temperature anomaly behaves similarly to the solar irradiance anomaly. Their daily increments show antipersistency for scales longer than 2 months. The property points at a cumulative negative feedback in the Earth climate system governing the tropospheric variability during the last 22 years. The result emphasizes a dominating role of the solar irradiance variability in variations of the tropospheric temperature and gives no support to the theory of anthropogenic climate change. The global average stratospheric temperature anomaly proceeds like a 1-dim random walk at least up to 11 years, allowing good presentation by means of the autoregressive integrated moving average (ARIMA) models for monthly series.

  18. Thermal elastoplastic structural analysis of non-metallic thermal protection systems

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Yagawa, G.

    1972-01-01

    An incremental theory and numerical procedure to analyze a three-dimensional thermoelastoplastic structure subjected to high temperature, surface heat flux, and volume heat supply as well as mechanical loadings are presented. Heat conduction equations and equilibrium equations are derived by assuming a specific form of incremental free energy, entropy, stresses and heat flux together with the first and second laws of thermodynamics, von Mises yield criteria and Prandtl-Reuss flow rule. The finite element discretization using the linear isotropic three-dimensional element for the space domain and a difference operator corresponding to a linear variation of temperature within a small time increment for the time domain lead to systematic solutions of temperature distribution and displacement and stress fields. Various boundary conditions such as insulated surfaces and convection through uninsulated surface can be easily treated. To demonstrate effectiveness of the present formulation a number of example problems are presented.

  19. Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition with a clinical sample.

    PubMed

    Nelson, Jason M; Canivez, Gary L; Watkins, Marley W

    2013-06-01

    Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  20. The political contradictions of incremental innovation: lessons from pharmaceutical patent examination in Brazil.

    PubMed

    Shadlen, Kenneth C

    2011-01-01

    Neodevelopmental patent regimes aim to facilitate local actors’ access to knowledge and also encourage incremental innovations. The case of pharmaceutical patent examination in Brazil illustrates political contradictions between these objectives. Brazil’s patent law includes the Ministry of Health in the examination of pharmaceutical patent applications. Though widely celebrated as a health-oriented policy, the Brazilian experience has become fraught with tensions and subject to decreasing levels of both stability and enforcement. I show how one pillar of the neodevelopmental regime, the array of initiatives to encourage incremental innovations, has fostered the acquisition of innovative capabilities in the Brazilian pharmaceutical sector, and how these new capabilities have altered actors’ policy preferences and thus contributed to the erosion of the coalition in support of the other pillar of the neodevelopmental regime, the health-oriented approach to examining pharmaceutical patents. The analysis of capability-derived preference formation points to an endogenous process of coalitional change.

  1. CRITICAL THERMAL INCREMENTS FOR RHYTHMIC RESPIRATORY MOVEMENTS OF INSECTS

    PubMed Central

    Crozier, W. J.; Stier, T. B.

    1925-01-01

    The rhythm of abdominal respiratory movements in various insects, aquatic and terrestrial, is shown to possess critical increments 11,500± or 16,500± calories (Libellula, Dixippus, Anax). These are characteristic of processes involved in respiration, and definitely differ from the increment 12,200 calories which is found in a number of instances of (non-respiratory) rhythmic neuromuscular activities of insects and other arthropods. With grasshoppers (Melanoplus), normal or freshly decapitated, the critical increment is 7,900, again a value encountered in connection with some phenomena of gaseous exchange and agreeing well with the value obtained for CO2 output in Melanoplus. It is shown that by decapitation the temperature characteristic for abdominal rhythm, in Melanoplus, is changed to 16,500, then to 11,300—depending upon the time since decapitation; intermediate values do not appear. The frequency of the respiratory movements seems to be controlled by a metabolically distinct group of neurones. The bearing of these results upon the theory of functional analysis by means of temperature characteristics is discussed, and it is pointed out that a definite standpoint becomes available from which to attempt the specific control of vital processes. PMID:19872148

  2. CRITICAL THERMAL INCREMENTS FOR RHYTHMIC RESPIRATORY MOVEMENTS OF INSECTS.

    PubMed

    Crozier, W J; Stier, T B

    1925-01-20

    The rhythm of abdominal respiratory movements in various insects, aquatic and terrestrial, is shown to possess critical increments 11,500+/- or 16,500+/- calories (Libellula, Dixippus, Anax). These are characteristic of processes involved in respiration, and definitely differ from the increment 12,200 calories which is found in a number of instances of (non-respiratory) rhythmic neuromuscular activities of insects and other arthropods. With grasshoppers (Melanoplus), normal or freshly decapitated, the critical increment is 7,900, again a value encountered in connection with some phenomena of gaseous exchange and agreeing well with the value obtained for CO(2) output in Melanoplus. It is shown that by decapitation the temperature characteristic for abdominal rhythm, in Melanoplus, is changed to 16,500, then to 11,300-depending upon the time since decapitation; intermediate values do not appear. The frequency of the respiratory movements seems to be controlled by a metabolically distinct group of neurones. The bearing of these results upon the theory of functional analysis by means of temperature characteristics is discussed, and it is pointed out that a definite standpoint becomes available from which to attempt the specific control of vital processes.

  3. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contains two time intervals during the day where the variance of increments can be fit by power law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with linear variable diffusion coefficient as a lowest order approximation to the real dynamics of financial markets, and to test the effects of time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial markets' dynamics. Our proposed model also provides new insight into the modeling of financial markets dynamics in microscopic time scales.

  4. Structured Case Analysis: Developing Critical Thinking Skills in a Marketing Case Course

    ERIC Educational Resources Information Center

    Klebba, Joanne M.; Hamilton, Janet G.

    2007-01-01

    Structured case analysis is a hybrid pedagogy that flexibly combines diverse instructional methods with comprehensive case analysis as a mechanism to develop critical thinking skills. An incremental learning framework is proposed that allows instructors to develop and monitor content-specific theory and the corresponding critical thinking skills.…

  5. Investigation of the nonlinear seismic behavior of knee braced frames using the incremental dynamic analysis method

    NASA Astrophysics Data System (ADS)

    Sheidaii, Mohammad Reza; TahamouliRoudsari, Mehrzad; Gordini, Mehrdad

    2016-06-01

    In knee braced frames, the braces are attached to the knee element rather than to the intersection of beams and columns. This bracing system is widely used and preferred over other commonly used systems for reasons such as providing lateral stiffness together with adequate ductility, concentrating damage in the secondary (knee) elements, and the convenience of repairing and replacing these elements after an earthquake. The lateral stiffness of this system is supplied by the bracing member, and the ductility of the frame, which is related to the knee length, is supplied through the bending or shear yielding of the knee member. In this paper, the nonlinear seismic behavior of knee braced frame systems has been investigated using incremental dynamic analysis (IDA), and the effects of the number of stories in a building and of the length and moment of inertia of the knee member on the seismic behavior, elastic stiffness, ductility and probability of failure of these systems have been determined. In the incremental dynamic analysis, after plotting the IDA diagrams of the accelerograms, the collapse diagrams in the limit states are determined. These diagrams show that, for a constant knee length, reducing the moment of inertia increases the probability of collapse in the limit states, and that, for a constant knee moment of inertia, increasing the length also increases the probability of collapse in the limit states.

  6. Cost-effectiveness analysis of apatinib treatment for chemotherapy-refractory advanced gastric cancer.

    PubMed

    Chen, Hong-Dou; Zhou, Jing; Wen, Feng; Zhang, Peng-Fei; Zhou, Ke-Xun; Zheng, Han-Rui; Yang, Yu; Li, Qiu

    2017-02-01

    Apatinib, a third-line or later treatment for advanced gastric cancer (aGC), was shown to improve overall survival and progression-free survival (PFS) compared with placebo in the phase III trial. Given the modest benefit with high costs, we further evaluated the cost-effectiveness of apatinib for patients with chemotherapy-refractory aGC. A Markov model was developed to simulate the disease process of aGC (PFS, progressive disease, and death) and estimate the incremental cost-effectiveness ratio (ICER) of apatinib compared with placebo. The health outcomes and utility scores were derived from the phase III trial and previously published sources, respectively. Total costs were calculated from the perspective of the Chinese health-care payer. Sensitivity analysis was used to explore model uncertainties. Treatment with apatinib was estimated to provide an incremental 0.09 quality-adjusted life years (QALYs) at an incremental cost of $8113.86 compared with placebo, which resulted in an ICER of $90,154.00 per QALY. Sensitivity analysis showed that across the wide variation of parameters, the ICER exceeded the willingness-to-pay threshold of $23,700.00 per QALY, which was three times the Gross Domestic Product per capita in China. Apatinib is not a cost-effective option for patients with aGC who have experienced failure of at least two lines of chemotherapy in China. However, given its positive clinical value and subliminal demand, apatinib can provide a new therapeutic option.

  7. Diagnostic reliability of the cervical vertebral maturation method and standing height in the identification of the mandibular growth spurt.

    PubMed

    Perinetti, Giuseppe; Contardo, Luca; Castaldo, Attilio; McNamara, James A; Franchi, Lorenzo

    2016-07-01

    To evaluate the capability of both cervical vertebral maturation (CVM) stages 3 and 4 (CS3-4 interval) and the peak in standing height to identify the mandibular growth spurt through diagnostic reliability analysis. A previous longitudinal data set derived from 24 untreated growing subjects (15 females and nine males), detailed elsewhere, was reanalyzed. Mandibular growth was defined as annual increments in Condylion (Co)-Gnathion (Gn) (total mandibular length) and Co-Gonion Intersection (Goi) (ramus height) and their arithmetic mean (mean mandibular growth [mMG]). Subsequently, individual annual increments in standing height, Co-Gn, Co-Goi, and mMG were arranged according to annual age intervals, with the first and last intervals defined as 7-8 years and 15-16 years, respectively. An analysis was performed to establish the diagnostic reliability of the CS3-4 interval or the peak in standing height in the identification of the maximum individual increments of each Co-Gn, Co-Goi, and mMG measurement at each annual age interval. CS3-4 and the standing height peak show similar but variable accuracy across annual age intervals, registering values between 0.61 (standing height peak, Co-Gn) and 0.95 (standing height peak and CS3-4, mMG). Generally, satisfactory diagnostic reliability was seen when the mandibular growth spurt was identified on the basis of the Co-Goi and mMG increments. Both the CVM interval CS3-4 and the peak in standing height may be used in routine clinical practice to enhance the efficiency of treatments requiring identification of the mandibular growth spurt.
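
    A minimal sketch of the 2x2 diagnostic-reliability bookkeeping implied above, treating "indicator positive" (CS3-4 reached, or standing-height peak) against "growth spurt observed in this age interval"; the arrays are placeholders, not the study data.

    ```python
    def diagnostic_reliability(indicator, spurt):
        """Sensitivity, specificity and accuracy of a binary indicator of the
        mandibular growth spurt, evaluated subject-by-subject at one age interval."""
        tp = sum(i and s for i, s in zip(indicator, spurt))
        tn = sum((not i) and (not s) for i, s in zip(indicator, spurt))
        fp = sum(i and (not s) for i, s in zip(indicator, spurt))
        fn = sum((not i) and s for i, s in zip(indicator, spurt))
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "accuracy": (tp + tn) / len(indicator)}

    # Placeholder: 8 subjects at one annual age interval.
    print(diagnostic_reliability([1, 1, 0, 0, 1, 0, 0, 1], [1, 1, 0, 0, 0, 0, 1, 1]))
    ```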

  8. Cost effectiveness of an intensive blood glucose control policy in patients with type 2 diabetes: economic analysis alongside randomised controlled trial (UKPDS 41)

    PubMed Central

    Gray, Alastair; Raikou, Maria; McGuire, Alistair; Fenn, Paul; Stevens, Richard; Cull, Carole; Stratton, Irene; Adler, Amanda; Holman, Rury; Turner, Robert

    2000-01-01

    Objective To estimate the cost effectiveness of conventional versus intensive blood glucose control in patients with type 2 diabetes. Design Incremental cost effectiveness analysis alongside randomised controlled trial. Setting 23 UK hospital clinic based study centres. Participants 3867 patients with newly diagnosed type 2 diabetes (mean age 53 years). Interventions Conventional (primarily diet) glucose control policy versus intensive control policy with a sulphonylurea or insulin. Main outcome measures Incremental cost per event-free year gained within the trial period. Results Intensive glucose control increased trial treatment costs by £695 (95% confidence interval £555 to £836) per patient but reduced the cost of complications by £957 (£233 to £1681) compared with conventional management. If standard practice visit patterns were assumed rather than trial conditions, the incremental cost of intensive management was £478 (−£275 to £1232) per patient. The within trial event-free time gained in the intensive group was 0.60 (0.12 to 1.10) years and the lifetime gain 1.14 (0.69 to 1.61) years. The incremental cost per event-free year gained was £1166 (costs and effects discounted at 6% a year) and £563 (costs discounted at 6% a year and effects not discounted). Conclusions Intensive blood glucose control in patients with type 2 diabetes significantly increased treatment costs but substantially reduced the cost of complications and increased the time free of complications. PMID:10818026

  9. Finite Element Analysis of Magnetoelastic Plate Problems.

    DTIC Science & Technology

    1981-08-01

    deformation and in the incremental large deformation analysis, respectively. The classical Kirchhoff assumption of the undeformable normal to the midsurface is... current density, is constant across the thickness of the plate and is parallel to the midsurface of the plate; (2) the normal component of the

  10. The p-version of the finite element method in incremental elasto-plastic analysis

    NASA Technical Reports Server (NTRS)

    Holzer, Stefan M.; Yosibash, Zohar

    1993-01-01

    Whereas the higher-order versions of the finite element method (the p- and hp-versions) are fairly well established as highly efficient methods for monitoring and controlling the discretization error in linear problems, little has been done to exploit their benefits in elasto-plastic structural analysis. Aspects of incremental elasto-plastic finite element analysis which are particularly amenable to improvements by the p-version are discussed. These theoretical considerations are supported by several numerical experiments. First, an example for which an analytical solution is available is studied. It is demonstrated that the p-version performs very well even in cycles of elasto-plastic loading and unloading, not only as compared to the traditional h-version but also with respect to the exact solution. Finally, an example of considerable practical importance - the analysis of a cold-worked lug - is presented which demonstrates how the modeling tools offered by higher-order finite element techniques can contribute to an improved approximation of practical problems.

  11. Concept analysis: lack of anonymity.

    PubMed

    Swan, Marilyn A; Hobbs, Barbara B

    2017-05-01

    To re-examine and expand understanding of the concept 'lack of anonymity' as a component of rural nursing theory. Early healthcare literature reports lack of anonymity as part of social and working environments, particularly rural nursing. Rural nursing theory included the first published concept analysis on lack of anonymity but lacked empirical referents. Workforce, societal and rural healthcare changes support an updated analysis. To further understand lack of anonymity, its present day use and applicability to diverse environments, research from multiple disciplines was reviewed. Concept analysis. A literature search using eight terms in eleven databases was conducted of literature published between 2008-2013. Walker and Avant's concept analysis methodology guided the analysis. The previous concept analysis is supported in part by current literature. The defining attributes, 'identifiable', 'establishing boundaries for public and private self and interconnectedness' in a community were updated. Updated antecedents include: (i) environmental context; (ii) opportunities to become visible; (iii) developing relationships and (iv) unconscious or limited awareness of public or personal privacy. Consequences are: (i) familiarity; (ii) visibility; (iii) awareness of privacy and (iv) manage or balance of lack of anonymity. Cases were constructed and empirical referents identified. The concept of lack of anonymity was updated; portions of the original definition remain unchanged. Empirical referents reveal the defining attributes in daily life and may guide future research on the effect of lack of anonymity on nursing practice. This analysis advances the conceptual understanding of rural nursing theory. © 2016 John Wiley & Sons Ltd.

  12. Virginia, 2012 - forest inventory and analysis factsheet

    Treesearch

    Anita K. Rose

    2014-01-01

    This science update is a brief look at some of the basic metrics that describe the status of and changes in forest resources in Virginia. Estimates presented here are for the measurement year 2012. Information for the factsheets is updated by means of the Forest Inventory and Analysis (FIA) annualized sample design. Each year 20 percent of the sample plots (one panel)...

  13. Virginia, 2011 forest inventory and analysis factsheet

    Treesearch

    Anita K. Rose

    2013-01-01

    This science update is a brief look at some of the basic metrics that describe the status and trends of forest resources in Virginia. Estimates presented here are for the measurement year 2011. Information for the factsheets is updated by means of the Forest Inventory and Analysis (FIA) annualized sample design. Each year 20 percent of the sample plots (one panel) in...

  14. Virginia, 2010 forest inventory and analysis factsheet

    Treesearch

    Anita K. Rose

    2012-01-01

    This science update is a brief look at some of the basic metrics that describe the status of forest resources in Virginia. Estimates presented here are for the measurement year 2010. Information for this factsheet is updated by means of the Forest Inventory and Analysis (FIA) annualized sample design. Virginia has about 4,600 sample plots across the State and each year...

  15. Virginia, 2009 forest inventory and analysis factsheet

    Treesearch

    Anita K. Rose

    2011-01-01

    This science update is a brief look at some of the basic metrics that describe forest resources in Virginia. Estimates presented here are for the measurement year 2009. Information for the factsheet is updated by means of the Forest Inventory and Analysis (FIA) annualized sample design. Virginia has about 4,600 sample plots across the State, and each year 20 percent of...

  16. Reassessing the Skills Required of Graduates of an Information Systems Program: An Updated Analysis

    ERIC Educational Resources Information Center

    Legier, John; Woodward, Belle; Martin, Nancy

    2013-01-01

    The study involves an updated analysis of the job characteristics of information systems graduates based on the status of the job market as well as the perceptions of 72 graduates from an information systems program of a Midwestern university. Approximately one-third of the graduates were working in positions related to technical support.…

  17. The role of HCG increment in the 48h prior to methotrexate treatment as a predictor for treatment success.

    PubMed

    Cohen, Aviad; Almog, Benny; Cohen, Yoni; Bibi, Guy; Rimon, Eli; Levin, Ishai

    2017-04-01

    To evaluate the role of HCG change in the 48 h prior to methotrexate treatment as a predictor of treatment success. Medical records of all women who were diagnosed with ectopic pregnancy between January 2001 and June 2013 were reviewed. Four hundred and nine patients received methotrexate due to ectopic pregnancy. The "single dose" methotrexate protocol with 50 mg/m2 was administered to patients with progressing ectopic pregnancy. HCG levels on days 1, 4 and 7 were used to evaluate methotrexate treatment success. The percentage of HCG change in the 48 h prior to methotrexate treatment was compared between patients who were successfully treated and those who failed treatment with methotrexate. Single dose methotrexate was successful in 309 patients (75.4%, success group). The median HCG change in the 48 h prior to methotrexate administration was significantly higher in the "failure group" (21% vs. 4%, p<0.01). In a logistic regression analysis, the HCG percent increment prior to methotrexate administration was shown to be an independent predictor of treatment outcome. The area under the receiver operating characteristic curve for HCG percent change was 0.751; at a cutoff value of HCG increment <12%, the positive predictive value for treatment success reached 86%. The percentage of HCG increment in the 48 h prior to methotrexate administration is an independent predictor of methotrexate treatment success. An HCG increment <12% prior to methotrexate treatment is a good predictor of treatment success. Copyright © 2017 Elsevier B.V. All rights reserved.
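
    To make the reported cutoff concrete, here is a minimal sketch of computing the positive predictive value of "HCG rose by less than 12% over the 48 h before methotrexate" for treatment success; the arrays are placeholders, not the cohort data.

    ```python
    import numpy as np

    def ppv_at_cutoff(hcg_change_pct, success, cutoff=12.0):
        """Positive predictive value of a sub-cutoff HCG increment for single-dose
        methotrexate success (placeholder inputs, not the study data)."""
        predicted_success = np.asarray(hcg_change_pct) < cutoff
        success = np.asarray(success, dtype=bool)
        tp = np.sum(predicted_success & success)
        fp = np.sum(predicted_success & ~success)
        return tp / (tp + fp)

    # Placeholder example: per-patient HCG change (%) and treatment outcome (1 = success).
    print(ppv_at_cutoff([4, 21, 8, 30, 2, 10], [1, 0, 1, 0, 0, 1]))
    ```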

  18. The value of crossdating to retain high-frequency variability, climate signals, and extreme events in environmental proxies.

    PubMed

    Black, Bryan A; Griffin, Daniel; van der Sleen, Peter; Wanamaker, Alan D; Speer, James H; Frank, David C; Stahle, David W; Pederson, Neil; Copenheaver, Carolyn A; Trouet, Valerie; Griffin, Shelly; Gillanders, Bronwyn M

    2016-07-01

    High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time. Originally developed for tree-ring data, crossdating is the only such procedure that ensures all increments have been assigned the correct calendar year of formation. Here, we use growth-increment data from two tree species, two marine bivalve species, and a marine fish species to illustrate sensitivity of environmental signals to modest dating error rates. When falsely added or missed increments are induced at one and five percent rates, errors propagate back through time and eliminate high-frequency variability, climate signals, and evidence of extreme events while incorrectly dating and distorting major disturbances or other low-frequency processes. Our consecutive Monte Carlo experiments show that inaccuracies begin to accumulate in as little as two decades and can remove all but decadal-scale processes after as little as two centuries. Real-world scenarios may have even greater consequence in the absence of crossdating. Given this sensitivity to signal loss, the fundamental tenets of crossdating must be applied to fully resolve environmental signals, a point we underscore as the frontiers of growth-increment analysis continue to expand into tropical, freshwater, and marine environments. © 2016 John Wiley & Sons Ltd.
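
    A minimal sketch of the kind of consecutive Monte Carlo experiment described above: increments are dropped or duplicated at a given error rate, shifting all later values in time, and the correlation with the correctly dated series is recomputed; the synthetic "climate signal" and the exact error model are assumptions, not the authors' procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def induce_dating_errors(series, error_rate, rng):
        """Randomly drop (missed ring) or duplicate (false ring) increments, shifting
        all later values in time, as a crude stand-in for an un-crossdated record."""
        out = []
        for x in series:
            r = rng.random()
            if r < error_rate / 2:
                continue                  # missed increment
            out.append(x)
            if r > 1 - error_rate / 2:
                out.append(x)             # falsely added increment
        return np.array(out)

    true = np.sin(np.linspace(0, 20 * np.pi, 400)) + rng.normal(0, 0.3, 400)  # shared signal
    for rate in (0.0, 0.01, 0.05):
        shifted = induce_dating_errors(true, rate, rng)
        n = min(len(shifted), 400)
        print(rate, np.corrcoef(true[:n], shifted[:n])[0, 1])  # correlation decays with error rate
    ```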

  19. A table of intensity increments.

    DOT National Transportation Integrated Search

    1966-01-01

    Small intensity increments can be produced by adding larger intensity increments. A table is presented covering the range of small intensity increments from 0.008682 through 6.020 dB in 60 large intensity increments of 1 dB.
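
    The endpoints quoted (0.008682 dB and 6.020 dB) are consistent with in-phase addition of a second signal attenuated in 1 dB steps, for which the resulting small level increment is 20*log10(1 + 10^(-N/20)) when the added signal is N dB below the original. The sketch below reproduces a table of that form; the interpretation as coherent (amplitude) addition is an assumption, since the record itself gives no formula.

      import math

      # Assumed relation: adding an in-phase signal N dB below the original raises the
      # level by 20*log10(1 + 10**(-N/20)) dB. N = 0 gives 6.0206 dB and N = 60 gives
      # 0.008682 dB, matching the range quoted in the record.
      for n_db in range(0, 61):
          small_increment = 20 * math.log10(1 + 10 ** (-n_db / 20))
          print(f"added signal {n_db:2d} dB down -> level increment {small_increment:.6f} dB")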

  20. Incremental Upgrade of Legacy Systems (IULS)

    DTIC Science & Technology

    2001-04-01

    analysis task employed SEI's Feature-Oriented Domain Analysis methodology (see FODA reference) and included several phases: • Context Analysis • Establish...Legacy, new Host and upgrade system and software. The Feature-Oriented Domain Analysis approach (FODA, see SUM References) was used for this step...Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ESD-90-TR-222); Software Engineering Institute, Carnegie Mellon University

  1. Test and analysis procedures for updating math models of Space Shuttle payloads

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1991-01-01

    Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed for obtaining data for use in verifying payload math models and for carrying out the updating of the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.

  2. Machine learning in updating predictive models of planning and scheduling transportation projects

    DOT National Transportation Integrated Search

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  3. A status report on the characterization of the microgravity environment of the International Space Station

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; McPherson, Kevin; Hrovat, Kenneth; Kelly, Eric; Reckart, Timothy

    2004-01-01

    A primary objective of the International Space Station is to provide a long-term quiescent environment for the conduct of scientific research for a variety of microgravity science disciplines. From the start of continuous human presence on the space station in November 2000 through the end of Increment-6, over 1260 hours of crew time have been allocated to research. However, far more research time has been accumulated by experiments controlled on the ground. By the end of the time period covered by this paper (end of Increment-6), the total experiment hours performed on the station are well over 100,000 hours (Expedition 6 Press Kit: Station Begins Third Year of Human Occupation, Boeing/USA/NASA, October 25, 2002). This paper presents the results of the on-going effort by the Principal Investigator Microgravity Services project, at NASA Glenn Research Center, in Cleveland, Ohio, to characterize the microgravity environment of the International Space Station in order to keep the microgravity scientific community apprised of the reduced gravity environment provided by the station for the performance of space experiments. This paper focuses on the station microgravity environment for Increments 5 and 6. During that period over 580 Gbytes of acceleration data were collected, out of which over 34,790 hours were analyzed. The results presented in this paper are divided into two sections: quasi-steady and vibratory. For the quasi-steady analysis, over 7794 hours of acceleration data were analyzed, while over 27,000 hours were analyzed for the vibratory analysis. The results of the data analysis are presented in this paper in the form of a grand summary for the period under consideration. For the quasi-steady acceleration response, results are presented in the form of a 95% confidence interval for the station during "normal microgravity mode operations" for the following three attitudes: local vertical local horizontal, X-axis perpendicular to the orbit plane and the Russian torque equilibrium attitude. The same analysis was performed for the station during "non-microgravity mode operations" to assess the station quasi-steady acceleration environment over a long period of time. The same type of analysis was performed for the vibratory data, but a 95th percentile benchmark was used, which shows the overall acceleration magnitude during Increments 5 and 6. The results, for both quasi-steady and vibratory acceleration response, show that the station is not yet meeting the microgravity requirements during the microgravity mode operations. However, it should be stressed that the requirements apply only at assembly complete, whereas the results presented below apply up to the station's configuration at the end of Increment-6. © 2004 Elsevier Ltd. All rights reserved.

  4. Use of indexing to update United States annual timber harvest by state

    Treesearch

    James Howard; Enrique Quevedo; Andrew Kramp

    2009-01-01

    This report provides an index method that can be used to update recent estimates of timber harvest by state to a common current year and to make 5-year projections. The Forest Service Forest Inventory and Analysis (FIA) program makes estimates of harvest for each state in differing years. The purpose of this updating method is to bring each state-level estimate up to a...

  5. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
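
    As a rough illustration of the kind of Bayesian update such a tool performs (a generic conjugate-prior sketch, not DATMAN's actual algorithm or interface), the snippet below updates a Gamma prior on a component failure rate with new failure counts and operating hours, then reports a credible interval and a mission reliability. All numbers are invented for illustration.

      import math
      from scipy import stats

      # Gamma prior on the failure rate lambda (failures per hour); illustrative values.
      alpha_prior, beta_prior = 2.0, 20000.0          # prior mean = alpha/beta = 1e-4 per hour

      # New plant data: failures observed over accumulated component operating time.
      n_failures, operating_hours = 3, 45000.0

      # Conjugate update for exponentially distributed times to failure.
      alpha_post = alpha_prior + n_failures
      beta_post = beta_prior + operating_hours
      posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)

      lam = posterior.mean()
      print(f"posterior mean failure rate: {lam:.2e} per hour")
      print(f"90% credible interval: {posterior.ppf(0.05):.2e} .. {posterior.ppf(0.95):.2e}")
      print(f"reliability over a 1000 h mission: {math.exp(-lam * 1000):.4f}")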

  6. agriGO v2.0: a GO analysis toolkit for the agricultural community, 2017 update.

    PubMed

    Tian, Tian; Liu, Yue; Yan, Hengyu; You, Qi; Yi, Xin; Du, Zhou; Xu, Wenying; Su, Zhen

    2017-07-03

    The agriGO platform, which has been serving the scientific community for >10 years, specifically focuses on gene ontology (GO) enrichment analyses of plant and agricultural species. We continuously maintain and update the databases and accommodate the various requests of our global users. Here, we present our updated agriGO that has a largely expanded number of supporting species (394) and datatypes (865). In addition, a larger number of species have been classified into groups covering crops, vegetables, fish, birds and insects closely related to the agricultural community. We further improved the computational efficiency, including the batch analysis and P-value distribution (PVD), and the user-friendliness of the web pages. More visualization features were added to the platform, including SEACOMPARE (cross comparison of singular enrichment analysis), directed acyclic graph (DAG) and Scatter Plots, which can be merged by choosing any significant GO term. The updated platform agriGO v2.0 is now publicly accessible at http://systemsbiology.cau.edu.cn/agriGOv2/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. International Space Station Increment-6/8 Microgravity Environment Summary Report November 2002 to April 2004

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; Reckart, Timothy

    2006-01-01

    This summary report presents the analysis results of some of the processed acceleration data measured aboard the International Space Station during the period of November 2002 to April 2004. Two accelerometer systems were used to measure the acceleration levels for the activities that took place during Increment-6/8. However, not all of the activities during that period were analyzed in order to keep the size of the report manageable. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments that require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System unit was flown to the station in support of science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit was flown to support experiments requiring vibratory acceleration measurement. Both acceleration systems are also used in support of the vehicle microgravity requirements verification as well as in support of the International Space Station support cadre. The International Space Station Increment-6/8 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems: 1. The Microgravity Acceleration Measurement System, which consists of two sensors: the Orbital Acceleration Research Experiment Sensor Subsystem, a low frequency range sensor (up to 1 Hz), is used to characterize the quasi-steady environment for payloads and vehicle, and the High Resolution Accelerometer Package, which is used to characterize the vibratory environment up to 100 Hz. 2. The Space Acceleration Measurement System measures vibratory acceleration data in the range of 0.01 to 400 Hz. This summary report presents analysis of some selected quasi-steady and vibratory activities measured by these accelerometers during Increment-6/8 from November 2002 to April 2004.

  8. Cost-Effectiveness Analysis of the Introduction of HPV Vaccination of 9-Year-Old-Girls in Iran.

    PubMed

    Yaghoubi, Mohsen; Nojomi, Marzieh; Vaezi, Atefeh; Erfani, Vida; Mahmoudi, Susan; Ezoji, Khadijeh; Zahraei, Seyed Mohsen; Chaudhri, Irtaza; Moradi-Lakeh, Maziar

    2018-04-23

    To estimate the cost-effectiveness of introducing the quadrivalent human papillomavirus (HPV) vaccine into the national immunization program of Iran. The CERVIVAC cost-effectiveness model was used to calculate incremental cost per averted disability-adjusted life-year by vaccination compared with no vaccination from both governmental and societal perspectives. Calculations were based on epidemiologic parameters from the Iran National Cancer Registry and other national data sources as well as from literature review. We estimated all direct and indirect costs of cervical cancer treatment and of the vaccination program. All future costs and benefits were discounted at 3% per year and deterministic sensitivity analysis was used. During a 10-year period, HPV vaccination was estimated to avert 182 cervical cancer cases and 20 deaths at a total vaccination cost of US $23,459,897; total health service cost prevented because of HPV vaccination was estimated to be US $378,646 and US $691,741 from the governmental and societal perspective, respectively. Incremental cost per disability-adjusted life-year averted within 10 years was estimated to be US $15,205 and US $14,999 from the governmental and societal perspective, respectively, and both are higher than 3 times the gross domestic product per capita of Iran (US $14,289). Sensitivity analysis showed that variation in vaccine price and in the number of doses had the greatest effect on the incremental cost-effectiveness ratio. Using a two-dose vaccination program could be cost-effective from the societal perspective (incremental cost-effectiveness ratio = US $11,849). Introducing a three-dose HPV vaccination program is currently not cost-effective in Iran. Because vaccine supply cost is the most important parameter in this evaluation, adopting a two-dose schedule or reducing vaccine prices would change the final conclusions. Copyright © 2018. Published by Elsevier Inc.
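
    For readers unfamiliar with the arithmetic behind such figures, the sketch below shows the generic calculation: discount yearly costs and DALYs averted at 3% per year, form the incremental cost-effectiveness ratio (ICER), and compare it with the threshold. The 3% discount rate and the US $14,289 threshold figure come from the abstract; every other number is a placeholder, not a study input.

      def present_value(stream, rate=0.03):
          """Discount a yearly stream of costs or health effects to present value."""
          return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

      years = 10
      vaccination_costs = [2_500_000.0] * years        # hypothetical programme cost per year
      treatment_costs_averted = [40_000.0] * years     # hypothetical averted treatment cost
      dalys_averted = [150.0] * years                  # hypothetical DALYs averted per year

      incremental_cost = present_value(vaccination_costs) - present_value(treatment_costs_averted)
      icer = incremental_cost / present_value(dalys_averted)

      threshold = 14_289.0    # the "3 x GDP per capita" figure quoted in the abstract (US$)
      verdict = "cost-effective" if icer < threshold else "not cost-effective"
      print(f"ICER = US${icer:,.0f} per DALY averted -> {verdict} at the quoted threshold")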

  9. Initial evaluation of rectal bleeding in young persons: a cost-effectiveness analysis.

    PubMed

    Lewis, James D; Brown, Alphonso; Localio, A Russell; Schwartz, J Sanford

    2002-01-15

    Evaluation of rectal bleeding in young patients is a frequent diagnostic challenge. To determine the relative cost-effectiveness of alternative diagnostic strategies for young patients with rectal bleeding. Cost-effectiveness analysis using a Markov model. Probability estimates were based on published medical literature. Cost estimates were based on Medicare reimbursement rates and published medical literature. Persons 25 to 45 years of age with otherwise asymptomatic rectal bleeding. The patient's lifetime. Modified societal perspective. Diagnostic strategies included no evaluation, colonoscopy, flexible sigmoidoscopy, barium enema, anoscopy, or any feasible combination of these procedures. Life expectancy and costs. For 35-year-old patients, the no-evaluation strategy yielded the least life expectancy. The incremental cost-effectiveness of flexible sigmoidoscopy compared with no evaluation or with any strategy incorporating anoscopy (followed by further evaluation if no anal disease was found on anoscopy) was less than $5300 per year of life gained. A strategy of flexible sigmoidoscopy plus barium enema yielded the greatest life expectancy, with an incremental cost of $23,918 per additional life-year gained compared with flexible sigmoidoscopy alone. As patient age at presentation of rectal bleeding increased, evaluation of the entire colon became more cost-effective. The incremental cost-effectiveness of flexible sigmoidoscopy plus barium enema compared with colonoscopy was sensitive to estimates of the sensitivity of the tests. In a probabilistic sensitivity analysis comparing flexible sigmoidoscopy with anoscopy followed by flexible sigmoidoscopy if needed, the middle 95th percentile of the distribution of the incremental cost-effectiveness ratios ranged from flexible sigmoidoscopy yielding an increased life expectancy at reduced cost to $52,158 per year of life gained (mean, $11,461 per year of life saved). Evaluation of the colon of persons 25 to 45 years of age with otherwise asymptomatic rectal bleeding increases the life expectancy at a cost comparable to that of colon cancer screening.
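
    A Markov cohort model of the sort used here can be sketched in a few lines: a cohort is distributed over health states, advanced one cycle at a time by a transition matrix, and discounted costs and life-years are accumulated per strategy. The three-state model and all of its numbers below are illustrative assumptions, not the parameters of this study.

      import numpy as np

      states = ["well", "cancer", "dead"]
      P = np.array([                                   # illustrative yearly transitions
          [0.995, 0.003, 0.002],                       # from well
          [0.0,   0.85,  0.15],                        # from cancer
          [0.0,   0.0,   1.0],                         # dead is absorbing
      ])
      state_cost = np.array([0.0, 25_000.0, 0.0])      # hypothetical yearly cost per state
      alive = np.array([1.0, 1.0, 0.0])                # life-years credited per state

      cohort = np.array([1.0, 0.0, 0.0])               # everyone starts well
      rate = 0.03
      total_cost = total_ly = 0.0
      for year in range(40):                           # approximate lifetime horizon
          discount = 1 / (1 + rate) ** year
          total_cost += discount * cohort @ state_cost
          total_ly += discount * cohort @ alive
          cohort = cohort @ P                          # advance the cohort one cycle
      print(f"discounted cost per patient: ${total_cost:,.0f}, life-years: {total_ly:.2f}")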

  10. Levonorgestrel Intrauterine Device as an Endometrial Cancer Prevention Strategy in Obese Women: A Cost-Effectiveness Analysis.

    PubMed

    Dottino, Joseph A; Hasselblad, Vic; Secord, Angeles Alvarez; Myers, Evan R; Chino, Junzo; Havrilesky, Laura J

    2016-10-01

    To estimate the cost-effectiveness of the levonorgestrel intrauterine device (IUD) as an endometrial cancer prevention strategy in obese women. A modified Markov model was used to compare IUD placement at age 50 with usual care among women with a body mass index (BMI, kg/m2) 40 or greater or BMI 30 or greater. The effects of obesity on incidence and survival were incorporated. The IUD was assumed to confer a 50% reduction in cancer incidence over 5 years. Costs of IUD and cancer care were included. Clinical outcomes were cancer diagnosis and deaths from cancer. Incremental cost-effectiveness ratios were calculated in 2015 U.S. dollars per year of life saved. One-way and two-way sensitivity analyses and Monte Carlo probabilistic analyses were performed. For a 50-year-old with a BMI of 40 or greater, the IUD strategy is costlier and more effective than usual care with an incremental cost-effectiveness ratio of $74,707 per year of life saved. If the protective effect of the levonorgestrel IUD is assumed to be 10 years, the incremental cost-effectiveness ratio decreases to $37,858 per year of life saved. In sensitivity analysis, a levonorgestrel IUD that reduces cancer incidence by at least 68% in women with BMIs of 40 or greater or costs less than $500 is potentially cost-effective. For BMI 30 or greater, the incremental cost-effectiveness ratio of the IUD strategy is $137,223 per year of life saved compared with usual care. In Monte Carlo analysis, IUD placement for BMI 40 or greater is cost-effective in 50% of simulations at a willingness-to-pay threshold of $100,000 per year of life saved. The levonorgestrel IUD is a potentially cost-effective strategy for prevention of deaths from endometrial cancer in obese women.
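
    The probabilistic (Monte Carlo) part of such an analysis amounts to redrawing uncertain inputs many times, recomputing incremental cost and effect each time, and counting how often the strategy has positive net monetary benefit at the willingness-to-pay threshold. The sketch below uses the $100,000-per-life-year threshold quoted in the abstract; the input distributions are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sims, wtp = 10_000, 100_000.0

      # hypothetical uncertain inputs
      risk_reduction = rng.beta(20, 20, n_sims)            # IUD effect on cancer incidence
      incremental_cost = rng.gamma(9.0, 500.0, n_sims)     # device + placement vs usual care
      ly_gained_if_full_effect = rng.normal(0.02, 0.005, n_sims).clip(min=1e-4)

      incremental_ly = risk_reduction * ly_gained_if_full_effect
      net_monetary_benefit = wtp * incremental_ly - incremental_cost
      prob_ce = (net_monetary_benefit > 0).mean()
      print(f"probability cost-effective at $100,000 per life-year: {prob_ce:.0%}")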

  11. Determination of post-shakedown quantities of a pipe bend via the simplified theory of plastic zones compared with load history dependent incremental analysis

    NASA Astrophysics Data System (ADS)

    Vollrath, Bastian; Hübel, Hartwig

    2018-01-01

    The Simplified Theory of Plastic Zones (STPZ) may be used to determine post-shakedown quantities such as strain ranges and accumulated strains at plastic or elastic shakedown. The principles of the method are summarized. Its practical applicability is shown by the example of a pipe bend subjected to constant internal pressure along with cyclic in-plane bending and/or a cyclic radial temperature gradient. The results are compared with incremental analyses performed step-by-step throughout the entire load history until the state of plastic shakedown is achieved.

  12. Body mass index and hand osteoarthritis susceptibility: an updated meta-analysis.

    PubMed

    Jiang, Liying; Xie, Xiaohua; Wang, Yidan; Wang, Yingchen; Lu, Yihua; Tian, Tian; Chu, Minjie; Shen, Yi

    2016-12-01

    Numerous epidemiologic studies have evaluated the association between overweight and hand osteoarthritis; however, the existing results are inconsistent. Systematic searches were performed and reference lists from the retrieved trials were searched. This meta-analysis and meta-regression was executed to identify all English-language articles that quantitatively assess the strength of the association between body mass index and hand osteoarthritis risk. Study-specific incremental estimates were standardized to determine the risk associated with a 5 kg/m2 increase in body mass index. We conducted the study according to the guidelines for the meta-analysis of observational studies in epidemiology. Of the 21 studies included, 13 were cross-sectional studies, three were case-control studies and five were cohort studies. The pooled summary estimate was 1.10 (95%CI: 0.98-1.24), with no significant difference (P = 0.09). Subgroup analysis shows that body mass index was positively associated with hand osteoarthritis in cross-sectional studies (1.05 [95%CI: 1.02-1.08], P < 0.01), while no significant difference was found in case-control studies (1.28 [95%CI: 0.87-1.88]) or in cohort studies (1.06 [95%CI: 0.71-1.58]) (P = 0.21 and P = 0.77, respectively). A weak but significant effect on radiographic hand osteoarthritis risk was found. The summary estimates were 1.06 (95%CI: 1.02-1.10) in studies defined by radiography and 1.25 (95%CI: 1.06-1.49) in studies defined by radiography and clinical criteria (P < 0.01 and P = 0.01, respectively). It appears that increased body mass index has a moderate positive effect on susceptibility to hand osteoarthritis, whether defined radiographically or radiographically and clinically. The effects vary by study design and osteoarthritis definition. © 2016 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
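
    The standardization step described (rescaling each study's estimate to a common 5 kg/m2 increment) can be illustrated with a small fixed-effect pooling sketch: each study's log estimate and standard error are rescaled to the common increment, then combined by inverse-variance weighting. The three studies in the sketch are fabricated examples, not data from the meta-analysis, and a random-effects model is omitted for brevity.

      import math

      # (estimate, lower CI, upper CI, BMI increment the estimate refers to)
      studies = [
          (1.04, 0.99, 1.09, 1.0),     # reported per 1 kg/m^2
          (1.20, 0.95, 1.52, 5.0),     # already per 5 kg/m^2
          (1.45, 1.02, 2.06, 10.0),    # reported per 10 kg/m^2
      ]

      log_effects, weights = [], []
      for est, lo, hi, unit in studies:
          scale = 5.0 / unit                            # rescale to a 5 kg/m^2 increment
          log_est = math.log(est) * scale
          se = (math.log(hi) - math.log(lo)) / (2 * 1.96) * scale
          log_effects.append(log_est)
          weights.append(1.0 / se**2)

      pooled_log = sum(w * x for w, x in zip(weights, log_effects)) / sum(weights)
      pooled_se = (1.0 / sum(weights)) ** 0.5
      print(f"pooled estimate per 5 kg/m^2: {math.exp(pooled_log):.2f} "
            f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}-"
            f"{math.exp(pooled_log + 1.96 * pooled_se):.2f})")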

  13. Dabigatran for the Treatment and Secondary Prevention of Venous Thromboembolism; A Cost-Effectiveness Analysis for the Netherlands.

    PubMed

    Stevanović, J; de Jong, L A; Kappelhoff, B S; Dvortsin, E P; Voorhaar, M; Postma, M J

    2016-01-01

    Dabigatran has been shown to have a similar effect on the prevention of recurrence of venous thromboembolism (VTE) and a lower risk of bleeding compared to vitamin K antagonists (VKA). The aim of this study is to assess the cost-effectiveness (CE) of dabigatran for the treatment and secondary prevention in patients with VTE compared to VKAs in the Dutch setting. A previously published Markov model was modified and updated to assess the CE of dabigatran and VKAs for the treatment and secondary prevention in patients with VTE from a societal perspective in the base-case analysis. The model was populated with efficacy and safety data from major dabigatran trials (i.e. RE-COVER, RECOVER II, RE-MEDY and RE-SONATE), Dutch-specific costs, and utilities derived from dabigatran trials or other published literature. Univariate, probabilistic sensitivity and a number of scenario analyses evaluating various decision-analytic settings (e.g. the perspective of analysis, use of anticoagulants only for treatment or only for secondary prevention, or comparison to no treatment) were used to test the robustness of the incremental cost-effectiveness ratio (ICER). In the base-case scenario, patients on dabigatran gained an additional 0.034 quality-adjusted life-years (QALYs) while saving €1,598. Results of univariate sensitivity analysis were quite robust. The probability that dabigatran is cost-effective at a willingness-to-pay threshold of €20,000/QALY was 98.1%. From the perspective of the healthcare provider, extended anticoagulation with dabigatran compared to VKAs was estimated at €2,158 per QALY gained. The ICER for anticoagulation versus no treatment in patients with equipoise risk of recurrent VTE was estimated at €33,379 per QALY gained. Other scenarios showed dabigatran was cost-saving. From a societal perspective, dabigatran is likely to be a cost-effective or even cost-saving strategy for treatment and secondary prevention of VTE compared to VKAs in the Netherlands.

  14. Space station (modular) mission analysis. Volume 1: Mission analysis

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The mission analysis on the modular space station considers experimental requirements and options characterized by low initial cost and incremental manning. Features that affect initial development and early operating costs are identified and their impacts on the program are assessed. Considered are the areas of experiment, mission, operations, information management, and long life and safety analyses.

  15. 2007 State Policies on Assessment Participation and Accommodations for Students with Disabilities. Synthesis Report 69

    ERIC Educational Resources Information Center

    Christensen, Laurene L.; Lazarus, Sheryl S.; Crone, Melissa; Thurlow, Martha L.

    2008-01-01

    This document presents an update of a 2006 report from NCEO tracking and analyzing state policies on assessment participation and accommodations since 1992. The purpose of the current analysis is to update information on these policies that was last reported by NCEO in 2006 (based on 2005 data). In this analysis, policies from all 50 states, plus…

  16. Food Irradiation Update and Cost Analysis

    DTIC Science & Technology

    1991-11-01

    Natick). Significant contributions were made by Dr. Irwin Taub and Mr. Christopher Rees of the Technology Acquisition Division, Food Engineering...stability. Food Irradiation Update and Cost Analysis. I. Introduction. In the book The Physiology of Taste (1825), one of the pioneers of gastronomy...review of the utility that radiation-preserved foods might offer the military food service system. To date, this technology has seen limited use in the

  17. Florida, 2010 forest inventory and analysis factsheet

    Treesearch

    Mark J. Brown; Jarek Nowak

    2012-01-01

    Forest Inventory and Analysis (FIA) factsheets are produced periodically to keep the public up to date on the extent and condition of the forest lands in each State. This factsheet is an annualized update of the full 5-year cycle of panel data completed in 2007 and updated by reprocessing with new 2009 and 2010 panel data. It represents 5 years of data, 40 percent of...

  18. Structural Finite Element Model Updating Using Vibration Tests and Modal Analysis for NPL footbridge - SHM demonstrator

    NASA Astrophysics Data System (ADS)

    Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.

    2011-07-01

    This paper presents the results from collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50-year-old reinforced concrete footbridge has been converted to a full-scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading and an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This makes the updating process difficult, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
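
    Finite element model updating of the sort described is commonly posed as a sensitivity problem: adjust stiffness parameters until the model's natural frequencies match measured ones. The sketch below is a generic two-degree-of-freedom illustration under assumed masses, stiffnesses and "measured" frequencies, not the NPL/Sheffield procedure; it iterates a finite-difference Jacobian in a Gauss-Newton loop.

      import numpy as np

      m = np.diag([1000.0, 1000.0])                     # kg, fixed mass matrix

      def frequencies(k1, k2):
          """Natural frequencies (Hz) of a 2-DOF chain with spring stiffnesses k1, k2."""
          k = np.array([[k1 + k2, -k2], [-k2, k2]])
          eigvals = np.linalg.eigvalsh(np.linalg.solve(m, k))   # symmetric here (equal masses)
          return np.sqrt(eigvals) / (2 * np.pi)

      measured = np.array([2.1, 5.4])                   # Hz, pretend modal-test results
      params = np.array([1.5e6, 1.0e6])                 # initial FE stiffness estimates (N/m)

      for _ in range(20):
          f0 = frequencies(*params)
          residual = measured - f0
          # finite-difference sensitivity matrix d(frequencies)/d(params)
          J = np.zeros((2, 2))
          for j in range(2):
              dp = params.copy()
              dp[j] *= 1.01
              J[:, j] = (frequencies(*dp) - f0) / (0.01 * params[j])
          params = params + np.linalg.lstsq(J, residual, rcond=None)[0]

      print("updated stiffnesses:", params, "-> frequencies:", frequencies(*params))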

  19. Cost-utility analysis of percutaneous mitral valve repair in inoperable patients with functional mitral regurgitation in German settings.

    PubMed

    Borisenko, Oleg; Haude, Michael; Hoppe, Uta C; Siminiak, Tomasz; Lipiecki, Janusz; Goldberg, Steve L; Mehta, Nawzer; Bouknight, Omari V; Bjessmo, Staffan; Reuter, David G

    2015-05-14

    To determine the cost-effectiveness of percutaneous mitral valve repair (PMVR) using the Carillon® Mitral Contour System® (Cardiac Dimensions Inc., Kirkland, WA, USA) in patients with congestive heart failure accompanied by moderate to severe functional mitral regurgitation (FMR) compared to the prolongation of optimal medical treatment (OMT). A cost-utility analysis using a combination of a decision tree and a Markov process was performed. The clinical effectiveness was determined based on the results of the Transcatheter Implantation of Carillon Mitral Annuloplasty Device (TITAN) trial. The mean age of the target population was 62 years, 77% of the patients were males, 64% of the patients had severe FMR and all patients had New York Heart Association functional class III. The epidemiological, cost and utility data were derived from the literature. The analysis was performed from the German statutory health insurance perspective over a 10-year time horizon. Over 10 years, the total cost was €36,785 in the PMVR arm and €18,944 in the OMT arm. However, PMVR provided additional benefits to patients, with 1.15 incremental quality-adjusted life-years (QALYs) and 1.41 incremental life-years. The percutaneous procedure was cost-effective in comparison to OMT with an incremental cost-effectiveness ratio of €15,533/QALY. Results were robust in the deterministic sensitivity analysis. In the probabilistic sensitivity analysis with a willingness-to-pay threshold of €35,000/QALY, PMVR had an 84% probability of being cost-effective. Percutaneous mitral valve repair may be cost-effective in inoperable patients with FMR due to heart failure.

  20. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    48 CFR 3452.232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...
