Sample records for reference model based

  1. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can describe aerospace general-purpose embedded software from multiple angles, at multiple levels, and across multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.

  2. LinkEHR-Ed: a multi-reference model archetype editor based on formal semantics.

    PubMed

    Maldonado, José A; Moner, David; Boscá, Diego; Fernández-Breis, Jesualdo T; Angulo, Carlos; Robles, Montserrat

    2009-08-01

    To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. The main prerequisite for implementing tools providing enhanced support for archetypes is a clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes, and the conformance of data instances to archetypes. We developed LinkEHR-Ed, a visual archetype editor based on this formalization, with advanced processing capabilities that support multiple reference models, the editing and semantic validation of archetypes, the specification of mappings to data sources, and the automatic generation of data transformation scripts. LinkEHR-Ed is a useful tool for building, processing, and validating archetypes based on any reference model.

  3. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2010-01-01

    This paper is devoted to the design and analysis of a predictor-based model reference adaptive control. Stable adaptive laws are derived within a Lyapunov framework. The proposed architecture is compared with the now-classical model reference adaptive control. A simulation example is presented in which numerical evidence indicates that the proposed controller yields improved transient characteristics.
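The abstract gives no equations, but the classical MRAC scheme it builds on can be sketched for a scalar plant. Everything below (the plant parameters, adaptation gain, and square-wave reference) is illustrative and not taken from the paper:

```python
import numpy as np

# Hypothetical scalar example: plant x' = a*x + b*u with unknown a and known
# sign(b). Reference model xm' = am*xm + bm*r, with am < 0. Classical MRAC
# control law u = kx*x + kr*r, with Lyapunov-derived adaptation
#   kx' = -gamma * e * x * sign(b),  kr' = -gamma * e * r * sign(b),
# where e = x - xm is the tracking error.
def simulate_mrac(a=1.0, b=1.0, am=-2.0, bm=2.0, gamma=5.0, dt=1e-3, T=20.0):
    x = xm = 0.0
    kx = kr = 0.0
    errs = []
    for k in range(int(T / dt)):
        r = 1.0 if (k * dt) % 4 < 2 else -1.0   # square-wave reference
        u = kx * x + kr * r
        e = x - xm
        # Euler integration of plant, reference model, and adaptive laws
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
        kx += dt * (-gamma * e * x * np.sign(b))
        kr += dt * (-gamma * e * r * np.sign(b))
        errs.append(abs(e))
    return errs

errs = simulate_mrac()
```

The tracking error shrinks as the gains adapt towards the matching values kx* = (am - a)/b and kr* = bm/b; the predictor-based variant in the paper replaces the reference model with a state predictor to improve transients.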

  4. Guide to solar reference spectra and irradiance models

    NASA Astrophysics Data System (ADS)

    Tobiska, W. Kent

    The international standard for determining solar irradiances was published by the International Standards Organization (ISO) in May 2007. The document, ISO 21348 Space Environment (natural and artificial) - Process for determining solar irradiances, describes the process for representing solar irradiances. We report on the next progression of standards work, i.e., the development of a guide that identifies solar reference spectra and irradiance models for use in engineering design or scientific research. This document will be produced as an AIAA Guideline and ISO Technical Report. It will describe the content of the reference spectra and models, uncertainties and limitations, technical basis, data bases from which the reference spectra and models are formed, publication references, and sources of computer code for reference spectra and solar irradiance models, including those which provide spectrally-resolved lines as well as solar indices and proxies and which are generally recognized in the solar sciences. The document is intended to assist aircraft and space vehicle designers and developers, heliophysicists, geophysicists, aeronomers, meteorologists, and climatologists in understanding available models, comparing sources of data, and interpreting engineering and scientific results based on different solar reference spectra and irradiance models.

  5. Spectral Classes for FAA's Integrated Noise Model Version 6.0.

    DOT National Transportation Integrated Search

    1999-12-07

    The starting point in any empirical model, such as the Federal Aviation Administration's (FAA) Integrated Noise Model (INM), is a reference database. In Version 5.2 and in previous versions, the reference database consisted solely of a set of no...

  6. Command generator tracker based direct model reference adaptive tracking guidance for Mars atmospheric entry

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Peng, Yuming

    2012-01-01

    In order to accurately deliver an entry vehicle through the Martian atmosphere to the prescribed parachute deployment point, active Mars entry guidance is essential. This paper addresses the issue of Mars atmospheric entry guidance using the command generator tracker (CGT) based direct model reference adaptive control to reduce the adverse effect of the bounded uncertainties on atmospheric density and aerodynamic coefficients. Firstly, the nominal drag acceleration profile meeting a variety of constraints is planned off-line in the longitudinal plane as the reference model to track. Then, the CGT based direct model reference adaptive controller and the feed-forward compensator are designed to robustly track the aforementioned reference drag acceleration profile and to effectively reduce the downrange error. Afterwards, the heading alignment logic is adopted in the lateral plane to reduce the crossrange error. Finally, the validity of the guidance algorithm proposed in this paper is confirmed by Monte Carlo simulation analysis.

  7. Requirements for data integration platforms in biomedical research networks: a reference model.

    PubMed

    Ganzinger, Matthias; Knaup, Petra

    2015-01-01

    Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.

  8. Model reference adaptive control (MRAC)-based parameter identification applied to surface-mounted permanent magnet synchronous motor

    NASA Astrophysics Data System (ADS)

    Zhong, Chongquan; Lin, Yaoyao

    2017-11-01

    In this work, a model reference adaptive control-based estimation algorithm is proposed for online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator is established. The Popov hyperstability principle is used in the design of the adaptive law to guarantee accurate convergence. To reduce oscillation in the identification results, a first-order low-pass digital filter is introduced to improve the precision of the parameter estimates. The proposed scheme was applied to an SPM synchronous motor control system without any additional circuits and implemented on a DSP TMS320LF2812. The experimental results confirm the effectiveness of the proposed method.

  9. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented; the numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. We present a state-predictor-based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. The efficiency of the design is demonstrated using the short-period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.

  10. Model-based control strategies for systems with constraints of the program type

    NASA Astrophysics Data System (ADS)

    Jarzębowska, Elżbieta

    2006-08-01

    The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material; the latter are referred to as program constraints. The program constraint equations represent tasks imposed on system motions; they can be differential equations of order higher than one or two, and they can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner that generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method, which makes it possible to combine material and program constraints and merge them both into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints; it extends "trajectory tracking" to "program motion tracking". We also demonstrate that the strategy can be extended to hybrid program motion/force tracking.

  11. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    The transit route choice model is a key technology for public transit systems planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can describe travelers' decision-making under uncertainty of transit supply and the risk preferences of multiple traveler types. The method used to calibrate the reference point, a key parameter of a CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation and field investigation results. Comparing the proposed method with the traditional one shows that the new method improves the quality of the CPT-based model by more accurately simulating travelers' route choice behavior, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance for sound transit planning and management, and to some extent remedies the defect that obtaining the reference point was previously based solely on qualitative analysis.
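For readers unfamiliar with CPT, the role of the reference point can be illustrated with the standard value and probability-weighting functions. The parameter values below are the commonly cited Tversky-Kahneman estimates, not the values calibrated in this paper:

```python
# Illustrative CPT building blocks: outcomes are evaluated relative to a
# reference point, losses loom larger than gains (loss aversion lambda),
# and small probabilities are overweighted by an inverse-S weighting.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25  # common literature estimates

def value(outcome, reference):
    """Value of an outcome measured as a gain or loss from the reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function for gains."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))
```

In a route choice setting, the outcome would be, e.g., travel time relative to the calibrated reference travel time, which is exactly why the calibration method for the reference point drives the model's precision.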

  12. Requirements for data integration platforms in biomedical research networks: a reference model

    PubMed Central

    Knaup, Petra

    2015-01-01

    Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper. PMID:25699205

  13. Command generator tracker based direct model reference adaptive control of a PUMA 560 manipulator. Thesis

    NASA Technical Reports Server (NTRS)

    Swift, David C.

    1992-01-01

    This project dealt with the application of a Direct Model Reference Adaptive Control algorithm to the control of a PUMA 560 Robotic Manipulator. This chapter will present some motivation for using Direct Model Reference Adaptive Control, followed by a brief historical review, the project goals, and a summary of the subsequent chapters.

  14. Generating Multimodal References

    ERIC Educational Resources Information Center

    van der Sluis, Ielka; Krahmer, Emiel

    2007-01-01

    This article presents a new computational model for the generation of multimodal referring expressions (REs), based on observations in human communication. The algorithm is an extension of the graph-based algorithm proposed by Krahmer, van Erk, and Verleg (2003) and makes use of a so-called Flashlight Model for pointing. The Flashlight Model…

  15. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    PubMed

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    Quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because of its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater care is required to achieve segmentation accuracy. The authors therefore propose a novel registration-based method with an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first design a model-based registration, driven by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry, to align the reference model with the target bone regions of a given postural image. They then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones in all other postural images. Compared to the ground truth from two experts, the resulting surfaces had an average margin of error within 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by dice similarity coefficient and demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method successfully overcomes drawbacks caused by inherent artifacts in MR images and obtains more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated from the bone segmentation results. The proposed method is helpful for understanding hand bone geometries in dynamic postures and can be used to simulate 3D hand motion from multipostural MR images.

  16. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision, and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CV_A); the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian, and recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for estimating analytical performance specifications for the reference change value, combining Δbias and CV_A based on log-Gaussian distributions of CV_I as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
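The two RCV formulations being compared can be sketched as follows. This is the standard textbook form of the Gaussian and log-Gaussian reference change values, not the paper's exact derivation with Δbias included:

```python
import math

# Reference change values (RCV): the smallest change between two serial
# results that is statistically significant. CVs are fractions (0.05 == 5%);
# z = 1.96 gives 95% two-sided significance.
def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classical symmetric RCV assuming Gaussian distributions."""
    return math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

def rcv_log_gaussian(cv_a, cv_i, z=1.96):
    """Asymmetric RCV when variation is log-Gaussian.
    Returns (fall, rise) limits as fractions of the initial result."""
    sigma = math.sqrt(math.log(cv_a**2 + cv_i**2 + 1))
    return (math.exp(-z * math.sqrt(2) * sigma) - 1,
            math.exp(z * math.sqrt(2) * sigma) - 1)
```

For small CVs the symmetric Gaussian RCV lies between the asymmetric log-Gaussian fall and rise limits, which is consistent with the abstract's finding that the two models give practically identical specifications.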

  17. Dental age estimation in Japanese individuals combining permanent teeth and third molars.

    PubMed

    Ramanan, Namratha; Thevissen, Patrick; Fieuws, Steffen; Willems, G

    2012-12-01

    The study aims were, first, to verify the Willems et al. model on a Japanese reference sample; second, to develop and verify a Japanese reference model based on the Willems et al. method; and third, to analyze the age prediction performance when tooth development information from third molars is added to that from permanent teeth. Retrospectively, 1877 panoramic radiographs were selected in the age range between 1 and 23 years (1248 children, 629 sub-adults). Dental development was registered by applying Demirjian's stages to the mandibular left permanent teeth in children and Köhler's stages to the third molars. The children's data were first used to validate the Willems et al. model (developed on a Belgian reference sample), and then split into a training and a test sample. On the training sample, a Japanese reference model was developed based on the Willems method. The developed model and the Willems et al. model were verified on the test sample. Regression analysis was used to detect the age prediction performance when third molar scores were added to permanent tooth scores. The validated Willems et al. model provided mean absolute errors of 0.85 and 0.75 years in females and males, respectively. The mean absolute errors of the verified Willems et al. model and the developed Japanese reference model were 0.85 and 0.77, and 0.79 and 0.75 years, in females and males, respectively. On average, a negligible change in root mean square error was detected when third molar scores were added to permanent tooth scores. The Belgian sample could thus be used as a reference model to estimate the age of Japanese individuals. Combining information from the third molars and permanent teeth did not provide a clinically significant improvement over age predictions based on permanent teeth alone.

  18. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    NASA Astrophysics Data System (ADS)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In conventional travel behavior models such as logit and probit, decision makers are assumed to make absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that perceptions of attributes are characterized by benchmarks called "reference points," and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. The respondents also appear to use various reference points: some tend to adopt the lowest fuel price they have experienced, while others employ the fare level they perceive as the travel cost.

  19. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these predictions is rather difficult, especially for regions of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate their performance. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using as the reference model a Poisson model that is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes, we evaluate the CEA predictions based on (1) a partial score that evaluates whether issuing the alarmed regions is based on information that differs from the reference model (knowledge of the average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than that of the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but their overall performance is close to that of the reference model.
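The gambling-score idea can be sketched in a few lines. This follows the general betting formulation (stake one point against the reference model); the paper's exact scoring of alarmed regions and magnitude ranges is more involved:

```python
# Gambling-score sketch: each prediction stakes one point against a reference
# model that assigns the predicted event probability p_ref. A success returns
# (1 - p_ref) / p_ref points, a failure loses the stake. A predictor with no
# information beyond the reference model therefore has zero expected gain.
def gambling_score(predictions):
    """predictions: iterable of (p_ref, occurred) pairs."""
    total = 0.0
    for p_ref, occurred in predictions:
        total += (1 - p_ref) / p_ref if occurred else -1.0
    return total
```

A risky but successful prediction (small p_ref under the reference model) earns a large reward, which is exactly how the method credits genuine precursory information rather than predictions of the average seismicity level.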

  20. Land-Use History and Contemporary Management Inform an Ecological Reference Model for Longleaf Pine Woodland Understory Plant Communities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brudvig, Lars A.; Orrock, John L.; Damschen, Ellen I.

    Ecological restoration is frequently guided by reference conditions describing a successfully restored ecosystem; however, the causes and magnitude of ecosystem degradation vary, making simple knowledge of reference conditions insufficient for prioritizing and guiding restoration. Ecological reference models provide further guidance by quantifying reference conditions, as well as conditions at degraded states that deviate from reference conditions. Many reference models remain qualitative, however, limiting their utility. We quantified and evaluated a reference model for southeastern U.S. longleaf pine woodland understory plant communities. We used regression trees to classify 232 longleaf pine woodland sites at three locations along the Atlantic coastal plain based on relationships between understory plant community composition, soils (which broadly structure these communities), and factors associated with understory degradation, including fire frequency, agricultural history, and tree basal area. To understand the spatial generality of this model, we classified all sites together, and each of the three study locations separately. Both the regional and location-specific models produced quantifiable degradation gradients, i.e., progressive deviation from conditions at 38 reference sites, based on understory species composition, diversity and total cover, litter depth, and other attributes. Regionally, fire suppression was the most important degrading factor, followed by agricultural history, but at individual locations, agricultural history or tree basal area was most important. At one location, the influence of a degrading factor depended on soil attributes. We suggest that our regional model can help prioritize longleaf pine woodland restoration across our study region; however, due to substantial landscape-to-landscape variation, local management decisions should take additional factors (e.g., soil attributes) into account. Our study demonstrates the utility of quantifying degraded states and provides a series of hypotheses for future experimental restoration work. More broadly, our work provides a framework for developing and evaluating reference models that incorporate multiple, interactive anthropogenic drivers of ecosystem degradation.

  1. Global daily reference evapotranspiration modeling and evaluation

    USGS Publications Warehouse

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration’s Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used on a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ∼100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world. 
While the study revealed the potential of GDAS ETo for large-scale hydrological applications, site-specific use of GDAS ETo in complex hydro-climatic regions such as coastal areas and rugged terrain may require the application of bias correction and/or disaggregation of the GDAS ETo using downscaling techniques.
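The daily reference ET underlying products like the one described is conventionally computed with the FAO-56 Penman-Monteith equation. The sketch below shows that standard formulation; the operational GDAS product is derived from six-hourly climate fields, so this is only the underlying calculation, not the operational implementation:

```python
import math

# FAO-56 Penman-Monteith daily grass reference evapotranspiration (ETo).
def fao56_eto(t_mean, rn, u2, ea, g=0.0, pressure=101.3):
    """t_mean: mean air temperature (degC); rn: net radiation (MJ m-2 day-1);
    u2: wind speed at 2 m (m/s); ea: actual vapour pressure (kPa);
    g: soil heat flux (MJ m-2 day-1); pressure: atmospheric pressure (kPa).
    Returns ETo in mm/day."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of the es curve
    gamma = 0.000665 * pressure                                # psychrometric constant
    num = (0.408 * delta * (rn - g)
           + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# A mild summer day gives an ETo of a few mm/day
eto = fao56_eto(t_mean=20.0, rn=15.0, u2=2.0, ea=1.4)
```

Actual ET is then obtained from ETo via crop coefficients and water-stress adjustment factors, as the abstract describes.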

  2. Identification of reliable gridded reference data for statistical downscaling methods in Alberta

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Gupta, A.

    2017-12-01

    Climate models provide essential information for assessing the impacts of climate change at regional and global scales, and statistical downscaling methods are applied to prepare climate model data for applications such as hydrologic and ecologic modelling at the watershed scale. Because the reliability and (spatial and temporal) resolution of statistically downscaled climate data depend mainly on a reference data set, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for the key climate variables that serve as the main inputs to regional modelling systems. However, inconsistencies in these climate products, for example, different combinations of climate variables, varying data domains and data lengths, and data accuracy varying with the physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for environmental studies and modelling. Employing various observation-based daily gridded climate products available in the public domain, i.e. thin-plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis), and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparison with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation and minimum and maximum temperature over the province of Alberta. Based on the performance of the climate products at the AHCCD stations, we ranked the reliability of these publicly available climate products by station elevation, discretized into several classes. From the rank of the climate products in each elevation class, we identified the most reliable climate product for a target point based on its elevation. 
A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of its grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections can be significantly improved.

  3. Linking in situ LAI and fine resolution remote sensing data to map reference LAI over cropland and grassland using geostatistical regression method

    NASA Astrophysics Data System (ADS)

    He, Yaqian; Bo, Yanchen; Chai, Leilei; Liu, Xiaolong; Li, Aihua

    2016-08-01

    Leaf Area Index (LAI) is an important parameter of vegetation structure. A number of moderate-resolution LAI products have been produced to meet the urgent need for large-scale vegetation monitoring, and high-resolution LAI reference maps are necessary to validate them. This study used a geostatistical regression (GR) method to estimate LAI reference maps by linking in situ LAI with Landsat TM/ETM+ and SPOT-HRV data over two cropland and two grassland sites. To explore the effect of different vegetation indices (VIs) on the estimated LAI reference maps, this study established GR models for the difference vegetation index (DVI), the normalized difference vegetation index (NDVI), and the ratio vegetation index (RVI). To further assess the performance of the GR model, its results were compared with those of a Reduced Major Axis (RMA) model. The results show that the performance of the GR model differs between the cropland and grassland sites: at the cropland sites the GR model based on DVI provides the best estimation, while at the grassland sites the GR model based on DVI performs poorly. Compared to the RMA model, the GR model improves the accuracy of the reference LAI maps in terms of root mean square error (RMSE) and bias.
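The vegetation indices named above have standard definitions, sketched below together with a plain least-squares fit standing in for the geostatistical regression; the GR model itself additionally accounts for spatially correlated residuals, which this toy omits. The reflectance and LAI values are invented.

```python
def vegetation_indices(nir, red):
    """Standard two-band indices from NIR and red reflectance."""
    dvi = nir - red                    # difference vegetation index
    ndvi = (nir - red) / (nir + red)   # normalized difference
    rvi = nir / red                    # ratio vegetation index
    return dvi, ndvi, rvi

def fit_lai(vi_values, lai_values):
    """Ordinary least squares LAI ~ a*VI + b (GR would add a spatial
    covariance model for the residuals on top of this)."""
    n = len(vi_values)
    mx = sum(vi_values) / n
    my = sum(lai_values) / n
    a = sum((x - mx) * (y - my) for x, y in zip(vi_values, lai_values)) / \
        sum((x - mx) ** 2 for x in vi_values)
    return a, my - a * mx

dvi, ndvi, rvi = vegetation_indices(nir=0.5, red=0.1)
a, b = fit_lai([0.2, 0.4, 0.6], [1.0, 2.0, 3.0])
```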

  4. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    NASA Astrophysics Data System (ADS)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool for projecting future climate and serve as input to many impact models that assess the related changes and impacts under such climate conditions. These RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble-mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. Correcting these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, for example, river discharge. However, the detection and quantification of model biases depend strongly on the selected regional reference data set. Additionally, owing to computational constraints it is usually not feasible in practice to use an entire ensemble of climate simulations as input to impact models that provide information to support decision-making. Although more and more studies base model selection on preserving the spread of the climate model ensemble, selection based on validity, i.e. the representation of historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are used to detect model bias for the reference period 1989-2008 over the alpine catchment of the Adige River in Northern Italy. The reference data sets originate from various sources, such as station data and reanalyses. These data sets are remapped to the common RCM grid at 0.11° resolution, and several indicators, such as dry and wet spells, extreme precipitation, and the general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions.
The resulting RCM spread is compared against the spread of the reference data sets to determine the related uncertainties and detect potential model biases with respect to each reference data set. The RCMs are then ranked based on various statistical measures for each indicator, and a score matrix is derived to select a subset of RCMs. We show the impact and importance of the choice of reference data set on the resulting climate change signal at the catchment scale.
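The rank-based score matrix described above might be assembled along these lines; the RCM names, indicators, and error values are invented for illustration.

```python
def select_rcms(scores, k):
    """scores[rcm][indicator] = error vs. the reference data set
    (lower is better). Rank the RCMs per indicator, sum the ranks
    into a score matrix total, and return the k best RCMs."""
    indicators = next(iter(scores.values())).keys()
    total = {rcm: 0 for rcm in scores}
    for ind in indicators:
        ordered = sorted(scores, key=lambda r: scores[r][ind])
        for rank, rcm in enumerate(ordered, start=1):
            total[rcm] += rank
    return sorted(total, key=total.get)[:k]

scores = {
    "RCM-A": {"wet_spell": 0.2, "extreme_p": 0.2},
    "RCM-B": {"wet_spell": 0.1, "extreme_p": 0.5},
    "RCM-C": {"wet_spell": 0.4, "extreme_p": 0.4},
}
subset = select_rcms(scores, 2)
```

Repeating the selection with a different reference data set changes `scores`, and hence possibly the chosen subset, which is the sensitivity the study highlights.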

  5. The implementation of aerial object recognition algorithm based on contour descriptor in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Babayan, Pavel; Smirnov, Sergey; Strotov, Valery

    2017-10-01

    This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models, and is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects: using the 3D models, we build a database of training images by rendering each model from viewpoints evenly distributed on a sphere, with the points distributed according to the geosphere principle. The gathered training image set is used to calculate the descriptors applied in the recognition stage. The recognition stage focuses on estimating the similarity of the captured object to the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
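Evenly distributed viewpoints on a sphere can be generated in several ways; the sketch below uses a Fibonacci spiral as a simple stand-in for the icosahedron-based geosphere sampling mentioned in the abstract.

```python
import math

def sphere_viewpoints(n):
    """Approximately even viewpoints on the unit sphere via the
    Fibonacci spiral (a common stand-in for geosphere sampling).
    Each returned (x, y, z) is a candidate camera direction for
    rendering a training image of the 3D model."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n      # spread z evenly in (-1, 1)
        r = math.sqrt(1.0 - z * z)         # radius of the z-slice
        theta = golden * i                 # rotate by the golden angle
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts
```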

  6. Application of positive-real functions in hyperstable discrete model-reference adaptive system design.

    NASA Technical Reports Server (NTRS)

    Karmarkar, J. S.

    1972-01-01

    Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.

  7. How Does Similarity-Based Interference Affect the Choice of Referring Expression?

    ERIC Educational Resources Information Center

    Fukumura, Kumiko; van Gompel, Roger P. G.; Harley, Trevor; Pickering, Martin J.

    2011-01-01

    We tested a cue-based retrieval model that predicts how similarity between discourse entities influences the speaker's choice of referring expressions. In Experiment 1, speakers produced fewer pronouns (relative to repeated noun phrases) when the competitor was in the same situation as the referent (both on a horse) rather than in a different…

  8. Automatic control of the NMB level in general anaesthesia with a switching total system mass control strategy.

    PubMed

    Teixeira, Miguel; Mendonça, Teresa; Rocha, Paula; Rabiço, Rui

    2014-12-01

    This paper presents a model-based switching control strategy to drive the neuromuscular blockade (NMB) level of patients undergoing general anesthesia to a predefined reference. A single-input single-output Wiener system with only two parameters is used to model the effect of two different muscle relaxants, atracurium and rocuronium, and a switching controller is designed based on a bank of total-system-mass control laws. Each such law is tuned for an individual model from a bank chosen to represent the behavior of the whole population, and the control law applied at each instant corresponds to the model whose NMB response is closest to the patient's response. Moreover, a scheme to improve the reference-tracking quality based on the analysis of the patient's response, as well as a comparison between the switching strategy and the Extended Kalman Filter (EKF) technique, are presented. The results are illustrated by means of several simulations, where switching is shown to provide good results, both in theory and in practice, with desirable reference tracking. The reference-tracking improvement technique produces better reference tracking and also showed a better performance than the EKF. Based on these results, the switching control strategy with a bank of total-system-mass control laws proved robust enough to be used as an automatic control system for the NMB level.
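The switching rule described above (apply the law of the bank model whose predicted NMB response is closest to the patient's) can be sketched as follows; the response vectors are invented.

```python
def choose_law(measured, model_bank):
    """measured: observed NMB response samples for the patient.
    model_bank: {model name: predicted NMB response of that bank model}.
    Returns the name of the model closest in squared error, i.e. the
    model whose total-system-mass control law should be applied."""
    def sse(pred):
        return sum((m - p) ** 2 for m, p in zip(measured, pred))
    return min(model_bank, key=lambda name: sse(model_bank[name]))

measured = [0.9, 0.7, 0.5]
bank = {"M1": [0.8, 0.6, 0.4], "M2": [0.2, 0.2, 0.2]}
active = choose_law(measured, bank)
```

In the actual controller this selection would be re-evaluated as new NMB measurements arrive, producing the switching behavior.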

  9. Calibration of mass transfer-based models to predict reference crop evapotranspiration

    NASA Astrophysics Data System (ADS)

    Valipour, Mohammad

    2017-05-01

    The present study compares mass transfer-based models to determine the best model under different weather conditions. The results showed that the Penman model estimates reference crop evapotranspiration better than the other models in most provinces of Iran (15 provinces). However, the values of R² were below 0.90 for 24 provinces of Iran. The models were therefore calibrated, which increased the precision of estimation (R² was below 0.90 for only ten provinces with the modified models). The mass transfer-based models estimated reference crop evapotranspiration better in northern (near the Caspian Sea) and southern (near the Persian Gulf) Iran (annual relative humidity above 65%) than in the other provinces. The best values of R² were 0.96 and 0.98 for the Trabert and Rohwer models in Ardabil (AR) and Mazandaran (MZ) provinces before and after calibration, respectively. Finally, a list of the best performances of each model was presented for use in other regions and future studies, according to values of mean, maximum, and minimum temperature, relative humidity, and wind speed. The best weather conditions for using mass transfer-based equations are a mean temperature of 8-18 °C (with the exception of the Ivanov model), a maximum temperature below 25.5 °C, a minimum temperature below 15 °C, and a relative humidity above 55%.
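A linear recalibration of a mass transfer-based model against reference evapotranspiration, with the R² used to judge it, might look like this; the data are synthetic.

```python
def calibrate(model_et, ref_et):
    """Fit ET_ref ≈ a*ET_model + b by least squares and report the
    coefficient of determination R² of the calibrated model."""
    n = len(model_et)
    mx = sum(model_et) / n
    my = sum(ref_et) / n
    sxx = sum((x - mx) ** 2 for x in model_et)
    sxy = sum((x - mx) * (y - my) for x, y in zip(model_et, ref_et))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(model_et, ref_et))
    ss_tot = sum((y - my) ** 2 for y in ref_et)
    return a, b, 1.0 - ss_res / ss_tot

# synthetic example: the reference is an exact linear rescaling
a, b, r2 = calibrate([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```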

  10. The NASA/MSFC global reference atmospheric model: 1990 version (GRAM-90). Part 2: Program/data listings

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Alyea, F. N.; Cunnold, D. M.; Jeffries, W. R., III; Johnson, D. L.

    1991-01-01

    A new (1990) version of the NASA/MSFC Global Reference Atmospheric Model (GRAM-90) was completed, and the program and key database listings are presented. GRAM-90 incorporates extensive new data, mostly collected under the Middle Atmosphere Program, to produce a completely revised middle atmosphere model (20 to 120 km). At altitudes above 120 km, GRAM-90 uses the NASA Marshall Engineering Thermosphere model. Complete listings of all programs and major data bases are presented, and a test case is included.

  11. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

    Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper first presents the baseline controller thoroughly and then describes the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).

  12. A Practical Approach to Governance and Optimization of Structured Data Elements.

    PubMed

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) gap analysis of the EHR against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on the gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.
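A minimal sketch of such an overhead reference model: overhead predicted from host CPU and I/O utilization, then used to pick a launch target. The linear form and the coefficients are invented for illustration, not FermiCloud's fitted model.

```python
def launch_overhead(cpu_util, io_util, coeffs=(30.0, 45.0, 60.0)):
    """Toy reference model: predicted VM launching overhead in seconds
    as a function of host CPU and I/O utilization (both in [0, 1]).
    The linear shape and coefficients are invented placeholders."""
    base, c_cpu, c_io = coeffs
    return base + c_cpu * cpu_util + c_io * io_util

def best_host(hosts):
    """Pick the host (dict with 'cpu' and 'io' utilization) with the
    lowest predicted launching overhead."""
    return min(hosts, key=lambda h: launch_overhead(h["cpu"], h["io"]))

hosts = [{"cpu": 0.9, "io": 0.8}, {"cpu": 0.2, "io": 0.1}]
target = best_host(hosts)
```

A real scheduler would fit the coefficients (or a more complex functional form) to measured launch times, as the paper does with FermiCloud operational data.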

  14. Hybrid pregnant reference phantom series based on adult female ICRP reference phantom

    NASA Astrophysics Data System (ADS)

    Rafat-Motavalli, Laleh; Miri-Hakimabad, Hashem; Hoseinian-Azghadi, Elie

    2018-03-01

    This paper presents boundary representation (BREP) models of a pregnant female and her fetus at the end of each trimester. The International Commission on Radiological Protection (ICRP) female reference voxel phantom was used as the base template in the development of the pregnant hybrid phantom series. The differences in shape and location of the maternal organs displaced by the enlarging uterus were also taken into account. CT and MR images of fetus specimens and pregnant patients of various gestational ages were used to replace the maternal abdominal and pelvic organs of the template phantom and to insert the fetus inside the gravid uterus. Each fetal model contains 21 different organs and tissues, and the fetal skeletal model includes age-dependent cartilaginous and ossified components. The replaced maternal organ models were converted to NURBS surfaces and then modified to conform to the reference values of ICRP Publication 89. The particular feature of the current series, compared to previously developed pregnant phantoms, is that it is constructed on the basis of the ICRP reference phantom. Because the replaced maternal organ models are NURBS surfaces, they can readily be converted to high-quality polygon mesh phantoms.

  15. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems that combines iterative learning control (ILC) with primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives, regarded as basis functions, and the optimal reference input for the control system is then recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed directly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems, one per single-input single-output control channel, which ensures convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach, and experimental results are given.
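The decompose/recompose idea can be sketched for the two-primitive case: express the desired trajectory in the output primitives by least squares, then apply the same coefficients to the reference input primitives. The signals are invented.

```python
def decompose(desired, output_primitives):
    """Least-squares coefficients expressing the desired trajectory in
    the output primitives (normal equations, solved here for the
    2-primitive case)."""
    p1, p2 = output_primitives
    a11 = sum(x * x for x in p1)
    a22 = sum(x * x for x in p2)
    a12 = sum(x * y for x, y in zip(p1, p2))
    b1 = sum(x * y for x, y in zip(p1, desired))
    b2 = sum(x * y for x, y in zip(p2, desired))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

def recompose(coeffs, input_primitives):
    """Apply the same coefficients to the reference input primitives to
    obtain the new reference input, without re-running ILC."""
    c1, c2 = coeffs
    u1, u2 = input_primitives
    return [c1 * a + c2 * b for a, b in zip(u1, u2)]

# desired = 2*p1 + 3*p2, so decompose should recover (2, 3)
coeffs = decompose([2.0, 3.0, 5.0], ([1.0, 0.0, 1.0], [0.0, 1.0, 1.0]))
```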

  16. Direct model reference adaptive control with application to flexible robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.

    1992-01-01

    A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.

  17. A knowledge-based system for patient image pre-fetching in heterogeneous database environments--modeling, design, and evaluation.

    PubMed

    Wei, C P; Hu, P J; Sheng, O R

    2001-03-01

    When performing primary reading on a newly taken radiological examination, a radiologist often needs to reference relevant prior images of the same patient for confirmation or comparison purposes. Support of such image references is of clinical importance and may have significant effects on radiologists' examination reading efficiency, service quality, and work satisfaction. To effectively support such image reference needs, we proposed and developed a knowledge-based patient image pre-fetching system, addressing several challenging requirements of the application that include representation and learning of image reference heuristics and management of data-intensive knowledge inferencing. Moreover, the system demands an extensible and maintainable architecture design capable of effectively adapting to a dynamic environment characterized by heterogeneous and autonomous data source systems. In this paper, we developed a synthesized object-oriented entity-relationship model, a conceptual model appropriate for representing radiologists' prior image reference heuristics, which are heuristic-oriented and data-intensive. We detailed the system architecture and design of the knowledge-based patient image pre-fetching system. Our architecture design is based on a client-mediator-server framework, capable of coping with a dynamic environment characterized by distributed, heterogeneous, and highly autonomous data source systems. To adapt to changes in radiologists' patient prior image reference heuristics, ID3-based multidecision-tree induction and CN2-based multidecision induction learning techniques were developed and evaluated. Experimentally, we examined the effects of the pre-fetching system we created on radiologists' examination readings.
Preliminary results show that the knowledge-based patient image pre-fetching system more accurately supports radiologists' patient prior image reference needs than the current practice adopted at the study site and that radiologists may become more efficient, consultatively effective, and better satisfied when supported by the pre-fetching system than when relying on the study site's pre-fetching practice.

  18. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R package and a Java on-line tool developed at the EC Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor analytical models (source identification) and the evaluation of model performance. Source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profile and time series similarity. In this study, a sensitivity analysis of the model performance criteria is carried out using the results of a synthetic dataset for which "a priori" references are available. The consensus modulated standard deviation (punc) gives the best choice for model performance evaluation when a conservative approach is adopted.
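Factor-to-source similarity is often measured with a Pearson correlation between chemical profiles; the sketch below uses that choice, though the actual DeltaSA similarity indicators may differ. The profile values are invented.

```python
import math

def profile_similarity(factor, source):
    """Pearson correlation between a factor profile and a candidate
    source chemical profile (species concentrations in the same order).
    One common similarity choice; DeltaSA's indicators may differ."""
    n = len(factor)
    mf = sum(factor) / n
    ms = sum(source) / n
    cov = sum((f - mf) * (s - ms) for f, s in zip(factor, source))
    var_f = sum((f - mf) ** 2 for f in factor)
    var_s = sum((s - ms) ** 2 for s in source)
    return cov / math.sqrt(var_f * var_s)

# a factor that is an exact rescaling of a source profile matches perfectly
sim = profile_similarity([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Ranking candidate source profiles by this similarity gives the factor-to-source assignment step described above.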

  19. Model reference, sliding mode adaptive control for flexible structures

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Ozguner, U.; Al-Abbass, F.

    1988-01-01

    A decentralized model reference adaptive approach using a variable-structure sliding model control has been developed for the vibration suppression of large flexible structures. Local models are derived based upon the desired damping and response time in a model-following scheme, and variable structure controllers are then designed which employ colocated angular rate and position feedback. Numerical simulations have been performed using NASA's flexible grid experimental apparatus.

  20. Model-based local density sharpening of cryo-EM maps

    PubMed Central

    Jakobi, Arjen J; Wilmanns, Matthias

    2017-01-01

    Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure using six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, ribosome-EF-Tu complex, 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase interpretability of density maps in particular in cases of resolution variation and facilitates model building and atomic model refinement. PMID:29058676

  1. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  2. Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.

    PubMed

    Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark

    2016-04-01

    Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-¹⁸F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-¹⁸F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue (2TCM) compartment models and plasma-input Logan) and reference tissue models (simplified reference tissue model (SRTM), reference Logan, and SUV ratio (SUVr)). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis-function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to the 90-min data, the initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R² = 0.99, slope = 0.96). SRTM binding potential (BP_ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R² = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between the parametric methods and SRTM was best for reference Logan (R² = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP_ND values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods.
SRTM BP_ND and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP_ND and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values, and VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-¹⁸F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
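Of the methods compared, SUVr is the simplest to compute: the mean target activity over a late time window divided by the mean activity in the reference region (here, cerebellum). A minimal sketch with invented frame times and activities:

```python
def suvr(target_tac, ref_tac, times, t_start, t_end):
    """SUV ratio over a late window: mean target time-activity-curve
    (TAC) value divided by the mean reference TAC value, using frames
    whose midpoint times fall within [t_start, t_end] minutes."""
    sel = [i for i, t in enumerate(times) if t_start <= t <= t_end]
    mean_target = sum(target_tac[i] for i in sel) / len(sel)
    mean_ref = sum(ref_tac[i] for i in sel) / len(sel)
    return mean_target / mean_ref

# invented frame midpoints (min) and activities; only 75 and 85 min
# fall inside the 70-90 min window used in the study
times = [0.0, 20.0, 40.0, 60.0, 75.0, 85.0]
target = [5.0, 4.0, 3.0, 2.0, 2.0, 1.0]
cerebellum = [5.0, 4.0, 3.0, 2.0, 1.0, 1.0]
ratio = suvr(target, cerebellum, times, 70.0, 90.0)
```

The quantity reported in the abstract, SUVr(70-90)-1, is then simply `ratio - 1`, which is what overestimates binding relative to BP_ND.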

  3. [Nursing care systematization in rehabilitation unit, in accordance to Horta's conceptual model].

    PubMed

    Neves, Rinaldo de Souza

    2006-01-01

    The utilization of a conceptual model in the systematization of nursing care allows the development of activities based on theoretical references that can guide the introduction and implementation of nursing procedures in hospitals. In this article we examine the choice of Horta's conceptual model for the construction of a nursing care system in the Rehabilitation Unit of a public hospital located in the Federal District of Brazil. Through this theoretical reference it was possible to produce a data collection instrument based on basic human needs. The identification of these needs made possible the construction of a hierarchical pyramid of the neurological patients' modified basic needs. Based on this framework, we intend to elaborate the nursing prescription and evolution according to the concepts and standards of Horta's nursing process, enabling the interrelation of all phases of this care methodology.

  4. Model reference adaptive control of flexible robots in the presence of sudden load changes

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory

    1991-01-01

    Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Accordingly, modifications are suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
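A toy gradient (MIT-rule-style) adaptation of a single feedforward gain illustrates the model-following idea, though it is a generic sketch rather than the command-generator-tracker law of the paper; the plant gain, adaptation rate, and command are invented.

```python
def adapt_feedforward(k_p, gamma, r, steps, theta=0.0):
    """Toy discrete adaptation of a feedforward gain theta for a static
    plant y = k_p*u with reference model y_m = r. The gradient update
    drives the model-following error e = y_m - y to zero, so theta
    converges to 1/k_p. A generic MIT-rule sketch only."""
    for _ in range(steps):
        u = theta * r            # feedforward control from the command
        e = r - k_p * u          # model-following error
        theta += gamma * e * r   # gradient adaptation step
    return theta

# with k_p = 2 the gain should converge to 0.5
theta = adapt_feedforward(k_p=2.0, gamma=0.1, r=1.0, steps=200)
```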

  5. Analyses on hydrophobicity and attractiveness of all-atom distance-dependent potentials

    PubMed Central

    Shirota, Matsuyuki; Ishida, Takashi; Kinoshita, Kengo

    2009-01-01

    Accurate model evaluation is a crucial step in protein structure prediction. For this purpose, statistical potentials, which evaluate a model structure based on observed atomic distance frequencies compared with those in reference states, have been widely used. The reference state is a virtual state in which all atomic interactions are turned off, and it provides the standard against which the observed frequencies are measured. In this study, we examined seven all-atom distance-dependent potentials with different reference states. We observed that variations in the atom-pair composition and in the distance distributions of the reference states produced systematic changes in the hydrophobic and attractive characteristics of the potentials. Performance evaluations with the CASP7 structures indicated that a preference for hydrophobic interactions improved the correlation between the energy and the GDT-TS score but decreased the Z-score of the native structure. The attractiveness of a potential improved both the correlation and the Z-score for template-based modeling targets, but the benefit was smaller for free modeling targets. These results indicated that the performance of the potentials was more strongly influenced by these characteristics than by the accuracy of the definitions of the reference states. PMID:19588493
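The core of a distance-dependent statistical potential is the log-ratio of observed to reference frequencies per distance bin; a minimal sketch with invented counts (in units of kT):

```python
import math

def statistical_potential(obs_counts, ref_counts):
    """Per-bin energy E(d) = -ln(P_obs(d) / P_ref(d)), where the counts
    per distance bin are normalized into probabilities. Negative values
    mark favored (attractive) distances, positive values disfavored ones."""
    n_obs = sum(obs_counts)
    n_ref = sum(ref_counts)
    return [-math.log((o / n_obs) / (r / n_ref))
            for o, r in zip(obs_counts, ref_counts)]

# identical normalized distributions give zero energy everywhere
flat = statistical_potential([10, 20, 30], [5, 10, 15])
# over-represented first bin relative to the reference is attractive
skew = statistical_potential([30, 10], [20, 20])
```

Changing `ref_counts` changes every bin's energy, which is exactly why the choice of reference state shifts the hydrophobic and attractive character of the potentials, as the abstract reports.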

  6. One-Dimensional Modeling Studies of the Gaseous Electronics Conference RF Reference Cell

    PubMed Central

    Govindan, T. R.; Meyyappan, M.

    1995-01-01

    A review of the one-dimensional modeling studies in the literature of the Gaseous Electronics Conference (GEC) reference plasma reactor is presented. Most of the studies are based on the fluid model description of the discharge and some utilize hybrid fluid-kinetic schemes. Both models are discussed here briefly. The models provide a basic understanding of the discharge mechanisms and reproduce several critical discharge features observed experimentally. PMID:29151755

  7. Construction of a pulse-coupled dipole network capable of fear-like and relief-like responses

    NASA Astrophysics Data System (ADS)

    Lungsi Sharma, B.

    2016-07-01

    The challenge for neuroscience as an interdisciplinary programme is the integration of ideas among the disciplines to achieve a common goal. This paper deals with the problem of deriving a pulse-coupled neural network that is capable of demonstrating behavioural responses (fear-like and relief-like). Current pulse-coupled neural networks are designed mostly for engineering applications, particularly image processing. The network presented here was constructed using the method of minimal anatomies. The behavioural response of a level-coded, activity-based model was used as a reference. Although the spiking-based model and the activity-based model are of different scales, the use of the model-reference principle means that the characteristic being referenced is their functional properties. It is demonstrated that this strategy of dissection and systematic construction is effective in the functional design of pulse-coupled neural network systems with nonlinear signalling. The differential equations for the elastic weights in the reference model are replicated geometrically in the pulse-coupled network. The network reflects a possible solution to the problem of punishment and avoidance, and constitutes a new network topology for pulse-coupled neural networks. The model-reference principle is therefore a powerful tool for connecting neuroscience disciplines. The continuity of concepts and phenomena is further maintained by systematic construction using methods like the method of minimal anatomies.

  8. Next-Generation NATO Reference Mobility Model (NRMM) Development (Developpement de la nouvelle generation du modele de mobilite de reference de l’OTAN (NRMM))

    DTIC Science & Technology

    2018-01-01

    Report excerpt (front matter): attachments covering the terrain profile database, NRMM data input requirements, and general physics-based model data input requirements; figures on unique surface types and on correlating physical testing with simulation; tables on scoring values and on accuracy of physics-based models and of validation through measurement.

  9. Geoid undulations and gravity anomalies over the Aral Sea, the Black Sea and the Caspian Sea from a combined GEOS-3/SEASAT/GEOSAT altimeter data set

    NASA Technical Reports Server (NTRS)

    Au, Andrew Y.; Brown, Richard D.; Welker, Jean E.

    1991-01-01

    Satellite-based altimetric data taken by GEOS-3, SEASAT, and GEOSAT over the Aral Sea, the Black Sea, and the Caspian Sea are analyzed, and a least squares collocation technique is used to predict the geoid undulations on a 0.25x0.25 deg. grid and to transform these geoid undulations into free air gravity anomalies. Rapp's 180x180 geopotential model is used as the reference surface for the collocation procedure. The result of the geoid-to-gravity transformation is, however, sensitive to the information content of the reference geopotential model used. For example, considerable detailed surface gravity data were incorporated into the reference model over the Black Sea, resulting in a reference model with significant information content at short wavelengths. Thus, estimation of short wavelength gravity anomalies from gridded geoid heights is generally reliable over regions such as the Black Sea, using the conventional collocation technique with local empirical covariance functions. Over regions such as the Caspian Sea, where detailed surface data are generally not incorporated into the reference model, unconventional techniques are needed to obtain reliable gravity anomalies. Based on the predicted gravity anomalies over these inland seas, speculative tectonic structures are identified and geophysical processes are inferred.
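    Least squares collocation, the prediction step described above, can be sketched in one dimension. The Gaussian covariance model and all numbers below are illustrative assumptions, not the empirical covariance functions of the study:

```python
import numpy as np

# Minimal least-squares collocation sketch: predict a signal at new points
# from noisy observations using an assumed covariance function, the core
# step of the geoid-undulation gridding described above. The Gaussian
# covariance model, the sin(2x) "signal", and all numbers are invented.

def cov(x1, x2, c0=1.0, d0=0.5):
    # covariance model: C(d) = c0 * exp(-(d/d0)**2)
    d = np.abs(x1[:, None] - x2[None, :])
    return c0 * np.exp(-(d / d0) ** 2)

rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 2.0, 15)
noise = 0.05
obs = np.sin(2.0 * x_obs) + noise * rng.standard_normal(x_obs.size)

x_new = np.array([0.25, 1.0, 1.75])
# collocation predictor: s_hat = C_st (C_tt + n*I)^-1 obs
C_tt = cov(x_obs, x_obs) + noise**2 * np.eye(x_obs.size)
C_st = cov(x_new, x_obs)
s_hat = C_st @ np.linalg.solve(C_tt, obs)

errors = np.abs(s_hat - np.sin(2.0 * x_new))
```

    The same predictor, with distance measured on the sphere and a local empirical covariance function, underlies the geoid-to-gravity transformation in the record.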

  10. Muscle parameters estimation based on biplanar radiography.

    PubMed

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-ray radiography. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-rays. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculo-skeletal models were reconstructed and compared to their reference. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography.

  11. 0-6759 : developing a business process and logical model to support a tour-based travel demand model design for TxDOT.

    DOT National Transportation Integrated Search

    2013-08-01

    The Texas Department of Transportation (TxDOT) created a standardized trip-based modeling approach for travel demand modeling called the Texas Package Suite of Travel Demand Models (referred to as the Texas Package) to oversee the travel de...

  12. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    DOEpatents

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
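    The voxel-weighting-to-height-map step can be sketched with a toy orthographic geometry. The top-down camera, the synthetic depth map, and the simple occupancy rule below are simplifying assumptions for illustration, not the patent's full projective setup:

```python
import numpy as np

# Toy sketch of the height-map idea: accumulate per-voxel weights from a
# depth map (free space in front of the measured depth, occupied at or
# behind it), then collapse the weighted voxel grid into a 2D height map.
# Orthographic top-down geometry is assumed for simplicity.

H = W = 8                     # ground-plane grid cells
NZ = 16                       # vertical voxel layers (0 = camera side)
depth = np.full((H, W), 12)   # synthetic depth map, in layer units
depth[2:6, 2:6] = 8           # a raised block closer to the camera

weights = np.zeros((H, W, NZ))
for z in range(NZ):
    # +1 where the ray has reached or passed the surface (occupied),
    # -1 where the ray passed through free space
    weights[:, :, z] = np.where(z >= depth, 1.0, -1.0)

# height above the grid floor = number of occupied layers in each column
height_map = (weights > 0).sum(axis=2)
```

    An n-layer variant, as in the patent, would keep per-layer occupancy probabilities instead of collapsing each column to a single height value.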

  13. A mapping closure for turbulent scalar mixing using a time-evolving reference field

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    A general mapping-closure approach for modeling scalar mixing in homogeneous turbulence is developed. This approach is different from the previous methods in that the reference field also evolves according to the same equations as the physical scalar field. The use of a time-evolving Gaussian reference field results in a model that is similar to the mapping closure model of Pope (1991), which is based on the methodology of Chen et al. (1989). Both models yield identical relationships between the scalar variance and higher-order moments, which are in good agreement with heat conduction simulation data and can be consistent with any type of epsilon(phi) evolution. The present methodology can be extended to any reference field whose behavior is known. The possibility of a beta-pdf reference field is explored. The shortcomings of the mapping closure methods are discussed, and the limit at which the mapping becomes invalid is identified.

  14. Quality assessment for color reproduction using a blind metric

    NASA Astrophysics Data System (ADS)

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2007-01-01

    This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established through psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, introducing a blind metric based on human visual system modeling. The objective results are validated by single-media and cross-media subjective tests.

  15. Using physiologically based pharmacokinetic modeling to address nonlinear kinetics and changes in rodent physiology and metabolism due to aging and adaptation in deriving reference values for propylene glycol methyl ether and propylene glycol methyl ether acetate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirman, C R.; Sweeney, Lisa M.; Corley, Rick A.

    2005-04-01

    Reference values, including an oral reference dose (RfD) and an inhalation reference concentration (RfC), were derived for propylene glycol methyl ether (PGME), and an oral RfD was derived for its acetate (PGMEA). These values were based upon transient sedation observed in F344 rats and B6C3F1 mice during a two-year inhalation study. The dose-response relationship for sedation was characterized using internal dose measures as predicted by a physiologically based pharmacokinetic (PBPK) model for PGME and its acetate. PBPK modeling was used to account for changes in rodent physiology and metabolism due to aging and adaptation, based on data collected during weeks 1, 2, 26, 52, and 78 of a chronic inhalation study. The peak concentration of PGME in richly perfused tissues was selected as the most appropriate internal dose measure based upon a consideration of the mode of action for sedation and similarities in tissue partitioning between brain and other richly perfused tissues. Internal doses (peak tissue concentrations of PGME) were designated as either no-observed-adverse-effect levels (NOAELs) or lowest-observed-adverse-effect levels (LOAELs) based upon the presence or absence of sedation at each time-point, species, and sex in the two-year study. Distributions of the NOAEL and LOAEL values expressed in terms of internal dose were characterized using an arithmetic mean and standard deviation, with the mean internal NOAEL serving as the basis for the reference values, which was then divided by appropriate uncertainty factors. Where data permitted, chemical-specific adjustment factors were derived to replace default uncertainty factor values of ten. Nonlinear kinetics were predicted by the model in all species at PGME concentrations exceeding 100 ppm, which complicates interspecies and low-dose extrapolations.
To address this complication, reference values were derived using two approaches which differ with respect to the order in which these extrapolations were performed: (1) uncertainty factor application followed by interspecies extrapolation (PBPK modeling); and (2) interspecies extrapolation followed by uncertainty factor application. The resulting reference values for these two approaches are substantially different, with values from the former approach being 7-fold higher than those from the latter approach. Such a striking difference between the two approaches reveals an underlying issue that has received little attention in the literature regarding the application of uncertainty factors and interspecies extrapolations to compounds where saturable kinetics occur in the range of the NOAEL. Until such discussions have taken place, reference values based on the latter approach are recommended for risk assessments involving human exposures to PGME and PGMEA.
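    The order-of-extrapolation issue raised here can be made concrete with a toy saturable-kinetics model. All numbers and the Michaelis-Menten-style clearance below are invented for illustration; this is not the study's PBPK model:

```python
# Toy illustration of why the order of uncertainty-factor application and
# internal/external dose conversion matters when kinetics are saturable.
# Effective clearance falls as dose rises (Michaelis-Menten-like), so the
# external-internal dose relation is nonlinear. All numbers are invented.

def internal_dose(external, vmax=100.0, km=50.0):
    clearance = vmax / (km + external)   # saturable clearance
    return external / clearance

def external_dose(internal, vmax=100.0, km=50.0):
    # analytic inverse of internal_dose:
    # internal = ext*(km + ext)/vmax  =>  ext**2 + km*ext - vmax*internal = 0
    return (-km + (km**2 + 4 * vmax * internal) ** 0.5) / 2

noael_internal = internal_dose(200.0)    # animal NOAEL in internal-dose units
UF = 10.0                                # composite uncertainty factor

# approach 1: apply UF to the internal dose, then convert to external dose
rv1 = external_dose(noael_internal / UF)
# approach 2: convert to external dose first, then apply UF
rv2 = external_dose(noael_internal) / UF
```

    Because the dose falls from the saturated into the linear range, the two orderings give different reference values (here rv1 > rv2), mirroring the 7-fold discrepancy the abstract reports.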

  16. Terminological reference of a knowledge-based system: the data dictionary.

    PubMed

    Stausberg, J; Wormek, A; Kraut, U

    1995-01-01

    The development of open and integrated knowledge bases makes new demands on the definition of the terminology used. The definition should be realized in a data dictionary separated from the knowledge base. Within the work done on a reference model of medical knowledge, a data dictionary has been developed and used in different applications: a term definition shell, a documentation tool and a knowledge base. The data dictionary includes the part of the terminology that is largely independent of a particular knowledge model. For that reason, the data dictionary can be used as a basis for integrating knowledge bases into information systems, for knowledge sharing and reuse, and for modular development of knowledge-based systems.

  17. THE FUTURE OF COMPUTER-BASED TOXICITY PREDICTION: MECHANISM-BASED MODELS VS. INFORMATION MINING APPROACHES

    EPA Science Inventory


    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  18. Chemical-specific screening criteria for interpretation of biomonitoring data for volatile organic compounds (VOCs)--application of steady-state PBPK model solutions.

    PubMed

    Aylward, Lesa L; Kirman, Chris R; Blount, Ben C; Hays, Sean M

    2010-10-01

    The National Health and Nutrition Examination Survey (NHANES) generates population-representative biomonitoring data for many chemicals including volatile organic compounds (VOCs) in blood. However, no health or risk-based screening values are available to evaluate these data from a health safety perspective or to use in prioritizing among chemicals for possible risk management actions. We gathered existing risk assessment-based chronic exposure reference values such as reference doses (RfDs), reference concentrations (RfCs), tolerable daily intakes (TDIs), cancer slope factors, etc. and key pharmacokinetic model parameters for 47 VOCs. Using steady-state solutions to a generic physiologically-based pharmacokinetic (PBPK) model structure, we estimated chemical-specific steady-state venous blood concentrations across chemicals associated with unit oral and inhalation exposure rates and with chronic exposure at the identified exposure reference values. The geometric means of the slopes relating modeled steady-state blood concentrations to steady-state exposure to a unit oral dose or unit inhalation concentration among 38 compounds with available pharmacokinetic parameters were 12.0 microg/L per mg/kg-d (geometric standard deviation [GSD] of 3.2) and 3.2 microg/L per mg/m(3) (GSD=1.7), respectively. Chemical-specific blood concentration screening values based on non-cancer reference values for both oral and inhalation exposure range from 0.0005 to 100 microg/L; blood concentrations associated with cancer risk-specific doses at the 1E-05 risk level ranged from 5E-06 to 6E-02 microg/L. The distribution of modeled steady-state blood concentrations associated with unit exposure levels across VOCs may provide a basis for estimating blood concentration screening values for VOCs that lack chemical-specific pharmacokinetic data. 
The screening blood concentrations presented here provide a tool for risk assessment-based evaluation of population biomonitoring data for VOCs and are most appropriately applied to central tendency estimates for such datasets. Copyright (c) 2010 Elsevier Inc. All rights reserved.
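    The screening-value construction reduces to multiplying a steady-state slope by an exposure reference value. A minimal sketch with invented numbers (neither the slopes nor the reference values below are the paper's):

```python
# Sketch of turning exposure reference values into blood-concentration
# screening values via steady-state slopes, as the abstract describes.
# The chemical name, slopes, and reference values are invented examples.

# ug/L blood per mg/kg-d (oral) and per mg/m3 (inhalation), assumed values
slopes = {"chem_x": {"oral": 12.0, "inhalation": 3.2}}
reference_values = {"chem_x": {"oral": 0.004,         # RfD, mg/kg-d
                               "inhalation": 0.03}}   # RfC, mg/m3

def screening_blood_conc(chemical, route):
    # linear steady-state relation: C_blood = slope * exposure rate
    return slopes[chemical][route] * reference_values[chemical][route]

oral_screen = screening_blood_conc("chem_x", "oral")           # ug/L
inhal_screen = screening_blood_conc("chem_x", "inhalation")    # ug/L
```

    A measured population blood concentration above such a screening value would flag the chemical for closer, chemical-specific evaluation rather than indicate harm directly.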

  19. Nonlinear time-series-based adaptive control applications

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.; Rajkumar, V.; Zakrzewski, R. R.

    1991-01-01

    A control design methodology based on a nonlinear time-series reference model is presented. It is indicated by highly nonlinear simulations that such designs successfully stabilize troublesome aircraft maneuvers undergoing large changes in angle of attack as well as large electric power transients due to line faults. In both applications, the nonlinear controller was significantly better than the corresponding linear adaptive controller. For the electric power network, a flexible AC transmission system with series capacitor power feedback control is studied. A bilinear autoregressive moving average reference model is identified from system data, and the feedback control is manipulated according to a desired reference state. The control is optimized according to a predictive one-step quadratic performance index. A similar algorithm is derived for control of rapid changes in aircraft angle of attack over a normally unstable flight regime. In the latter case, however, a generalization of a bilinear time-series model reference includes quadratic and cubic terms in angle of attack.
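    The one-step quadratic predictive control described above can be sketched for a scalar bilinear model. The coefficients, cost weights, and reference are invented for illustration, not identified from aircraft or power-system data:

```python
# Toy sketch of one-step predictive control with a bilinear reference
# model  y[k+1] = a*y[k] + b*u[k] + c*y[k]*u[k].  At each step the control
# minimizes the one-step quadratic cost
#     J = (y[k+1] - r)**2 + rho*u[k]**2,
# which is quadratic in u[k] and has a closed-form minimizer. All
# coefficients are invented for illustration.

a, b, c, rho = 0.8, 1.0, 0.4, 0.01
r = 1.0                           # desired reference state

y, history = 0.0, []
for _ in range(40):
    g = b + c * y                 # effective, state-dependent input gain
    # minimize (a*y + g*u - r)**2 + rho*u**2  =>  u = g*(r - a*y)/(g**2 + rho)
    u = g * (r - a * y) / (g * g + rho)
    y = a * y + b * u + c * y * u
    history.append(y)

final_error = abs(history[-1] - r)
```

    The bilinear term c*y*u is what distinguishes this from a linear predictive law: the optimal input depends on the current state through the effective gain g.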

  20. Jobs and Economic Development Impact (JEDI) Model Geothermal User Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.; Augustine, C.; Goldberg, M.

    2012-09-01

    The Geothermal Jobs and Economic Development Impact (JEDI) model, developed through the National Renewable Energy Laboratory (NREL), is a user-friendly Excel-based tool that estimates the local economic impacts of constructing and operating hydrothermal and Enhanced Geothermal System (EGS) power generation projects; JEDI models exist for a range of conventional and renewable energy technologies. The JEDI Model Geothermal User Reference Guide was developed to assist users in using and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted.

  1. An image-based skeletal dosimetry model for the ICRP reference adult female—internal electron sources

    NASA Astrophysics Data System (ADS)

    O'Reilly, Shannon E.; DeWeese, Lindsay S.; Maynard, Matthew R.; Rajon, Didier A.; Wayson, Michael B.; Marshall, Emily L.; Bolch, Wesley E.

    2016-12-01

    An image-based skeletal dosimetry model for internal electron sources was created for the ICRP-defined reference adult female. Many previous skeletal dosimetry models, which are still employed in commonly used internal dosimetry software, do not properly account for electron escape from trabecular spongiosa, electron cross-fire from cortical bone, and the impact of marrow cellularity on active marrow self-irradiation. Furthermore, these existing models do not employ the current ICRP definition of a 50 µm bone endosteum (or shallow marrow). Each of these limitations was addressed in the present study. Electron transport was completed to determine specific absorbed fractions to both active and shallow marrow of the skeletal regions of the University of Florida reference adult female. The skeletal macrostructure and microstructure were modeled separately. The bone macrostructure was based on the whole-body hybrid computational phantom of the UF series of reference models, while the bone microstructure was derived from microCT images of skeletal region samples taken from a 45-year-old female cadaver. The active and shallow marrow are typically adopted as surrogate tissue regions for the hematopoietic stem cells and osteoprogenitor cells, respectively. Source tissues included active marrow, inactive marrow, trabecular bone volume, trabecular bone surfaces, cortical bone volume, and cortical bone surfaces. Marrow cellularity was varied from 10 to 100 percent for active marrow self-irradiation. All other sources were run at the defined ICRP Publication 70 cellularity for each bone site. A total of 33 discrete electron energies, ranging from 1 keV to 10 MeV, were either simulated or analytically modeled. The method of combining skeletal macrostructure and microstructure absorbed fractions assessed using MCNPX electron transport was found to yield results similar to those determined with the PIRT model applied to the UF adult male skeletal dosimetry model.
Calculated skeletal averaged absorbed fractions for each source-target combination were found to follow trends similar to those of more recent image-based dosimetry models, but did not follow results from skeletal models based upon assumptions of an infinite expanse of trabecular spongiosa.

  2. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to index the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components.
It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state-space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.

  3. Modelling of Vortex-Induced Loading on a Single-Blade Installation Setup

    NASA Astrophysics Data System (ADS)

    Skrzypiński, Witold; Gaunaa, Mac; Heinz, Joachim

    2016-09-01

    Vortex-induced integral loading fluctuations on a single suspended blade at various inflow angles were modelled in the present work by means of stochastic modelling methods. The reference time series were obtained by 3D DES CFD computations carried out on the DTU 10MW reference wind turbine blade. In the reference time series, the flapwise force component, Fx, showed both higher absolute values and higher variation than the chordwise force component, Fz, for every inflow angle considered. For this reason, the present paper focused on modelling Fx rather than Fz, although Fz could be modelled using exactly the same procedure. The reference time series differed significantly depending on the inflow angle, which made modelling all of them with a single, relatively simple engineering model challenging. In order to find model parameters, optimizations were carried out based on the root-mean-square error between the single-sided amplitude spectra of the reference and modelled time series. In order to model well-defined frequency peaks present at certain inflow angles, optimized sine functions were superposed on the stochastically modelled time series. The results showed that the modelling accuracy varied depending on the inflow angle. Nonetheless, the modelled and reference time series showed a satisfactory general agreement in terms of their visual and frequency characteristics, indicating that the proposed method is suitable for modelling loading fluctuations on suspended blades.

  4. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one not initially presented by the analytic reference model. To overcome the interference among subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to this model, the paper develops a digital decentralized adaptive tracker based on optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance at both transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
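    The ILC ingredient of the tracker can be illustrated on a scalar plant. The plant coefficients and learning gain below are invented; the gain is chosen so the trial-to-trial error contracts:

```python
import numpy as np

# Minimal sketch of iterative learning control (ILC): over repeated trials
# of the same task, the control input is refined with
#     u_{k+1}[t] = u_k[t] + L * e_k[t+1],
# driving the tracking error toward zero trial by trial. The scalar plant
# and gains are invented; L is the inverse of the plant input gain.

N = 50                        # samples per trial
a, b = 0.2, 0.5               # plant: x[t+1] = a*x[t] + b*u[t]
ref = np.ones(N)              # desired output over the trial
L = 1.0 / b                   # learning gain

u = np.zeros(N)
rms_history = []
for trial in range(20):
    x = np.zeros(N + 1)
    for t in range(N):
        x[t + 1] = a * x[t] + b * u[t]
    e = ref - x[1:]           # tracking error of this trial
    rms_history.append(float(np.sqrt(np.mean(e ** 2))))
    u = u + L * e             # ILC update applied before the next trial
```

    In the paper this learning loop is wrapped around the digitally redesigned decentralized tracker, and the gain L is tuned by evolutionary programming rather than fixed analytically.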

  5. Performance Guaranteed Inertia Emulation for Diesel-Wind System Feed Microgrid via Model Reference Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melin, Alexander M.; Zhang, Yichen; Djouadi, Seddik

    In this paper, a model reference control based inertia emulation strategy is proposed. Desired inertia can be precisely emulated through this control strategy so that guaranteed performance is ensured. A typical frequency response model with parametric inertia is set as the reference model. A measurement at a specific location delivers information about the disturbance acting on the diesel-wind system to the reference model. The objective is for the speed of the diesel-wind system to track the reference model. Since active power variation is dominantly governed by mechanical dynamics and modes, only mechanical dynamics and states, i.e., a swing-engine-governor system plus a reduced-order wind turbine generator, are involved in the feedback control design. The controller is implemented in a three-phase diesel-wind system feed microgrid. The results show that exact synthetic inertia is emulated, leading to guaranteed performance and safety bounds.
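    The model-reference loop can be sketched with two swing equations: a reference model carrying the desired (larger) inertia, and a low-inertia plant whose speed is driven toward the model's. All parameters below are invented, not the paper's diesel-wind system:

```python
# Toy sketch of model-reference inertia emulation: a reference swing
# equation with the desired inertia defines the target frequency response
# to a load disturbance, and a proportional tracking controller injects
# power so the low-inertia plant speed follows the reference model speed.
# All per-unit parameters are invented.

dt = 0.01
H_plant, H_ref = 2.0, 6.0   # inertia constants: actual plant, desired
D = 1.0                     # load damping
Kp = 50.0                   # model-tracking gain
dP = -0.1                   # step load disturbance

w_p = w_r = 0.0             # plant and reference speed deviations
for _ in range(3000):       # 30 s of simulation
    u = Kp * (w_r - w_p)    # synthetic-inertia power injection
    w_r += (dP - D * w_r) / (2 * H_ref) * dt        # reference swing eq.
    w_p += (dP + u - D * w_p) / (2 * H_plant) * dt  # plant swing eq.

tracking_error = abs(w_p - w_r)
```

    With tight tracking, the plant's frequency excursion follows the slower, better-damped trajectory of the high-inertia reference model, which is the emulation goal.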

  6. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential in-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential in-focus position based on the proposed ODFM method. Around the estimated position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential in-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.

  7. A Novel Multilayer Correlation Maximization Model for Improving CCA-Based Frequency Recognition in SSVEP Brain-Computer Interface.

    PubMed

    Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu

    2018-05-01

    Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines the advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with sine-cosine reference signals again. An experimental study is implemented to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
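    The first-layer step, standard CCA against sine-cosine references, can be sketched with synthetic EEG; the MsetCCA and re-optimization layers of MCM are not reproduced here. The sampling rate, channel count, and noise level are assumptions for illustration:

```python
import numpy as np

# Sketch of CCA-based SSVEP frequency recognition: correlate an EEG epoch
# with sine-cosine reference signals at each candidate stimulus frequency
# and pick the frequency with the highest canonical correlation. The EEG
# is synthetic (an 8 Hz response plus noise); all parameters are invented.

rng = np.random.default_rng(0)
fs, T = 250, 2.0
t = np.arange(0, T, 1 / fs)

def references(f, harmonics=2):
    comps = []
    for h in range(1, harmonics + 1):
        comps += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(comps)

def max_canonical_corr(X, Y):
    # largest canonical correlation via QR + SVD of centered data
    qx, _ = np.linalg.qr(X - X.mean(0))
    qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

# synthetic 3-channel EEG containing an 8 Hz steady-state response
signal = np.sin(2 * np.pi * 8 * t)
eeg = np.column_stack([signal + 0.5 * rng.standard_normal(t.size)
                       for _ in range(3)])

candidates = [8.0, 10.0, 12.0, 15.0]
scores = {f: max_canonical_corr(eeg, references(f)) for f in candidates}
detected = max(scores, key=scores.get)
```

    MCM's second and third layers replace the fixed sine-cosine references with references learned from training EEG, which is where the accuracy gain over plain CCA comes from.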

  8. Effectiveness of Training Model Capacity Building for Entrepreneurship Women Based Empowerment Community

    ERIC Educational Resources Information Center

    Idawati; Mahmud, Alimuddin; Dirawan, Gufran Darma

    2016-01-01

    The purpose of this research was to determine the effectiveness of a community-based training model for capacity building of women's entrepreneurship. The research used a Research and Development approach, which refers to the development research model of Romiszowki (1996) combined with the development model of Sugiono (2011). It was…

  9. A comparison between the example reference biosphere model ERB 2B and a process-based model: simulation of a natural release scenario.

    PubMed

    Almahayni, T

    2014-12-01

    The BIOMASS methodology was developed with the objective of constructing defensible assessment biospheres for assessing the potential radiological impacts of radioactive waste repositories. To this end, a set of Example Reference Biospheres was developed to demonstrate the use of the methodology and to provide an international point of reference. In this paper, the performance of the Example Reference Biosphere model ERB 2B associated with the natural release scenario, discharge of contaminated groundwater to the surface environment, was evaluated by comparing its long-term projections of radionuclide dynamics and distribution in a soil-plant system to those of a process-based, transient advection-dispersion (AD) model. The models were parametrised with data characteristic of a typical rainfed winter wheat crop grown on a sandy loam soil under temperate climate conditions. Three safety-relevant radionuclides, (99)Tc, (129)I and (237)Np, with different degrees of sorption, were selected for the study. Although the models were driven by the same hydraulic (soil moisture content and water fluxes) and radiological (Kds) input data, their projections were remarkably different. On the one hand, both models were able to capture short- and long-term variation in activity concentration in the subsoil compartment. On the other hand, the Reference Biosphere model did not project any radionuclide accumulation in the topsoil and crop compartments. This behaviour would underestimate the radiological exposure under natural release scenarios. The results highlight the potential role deep roots play in soil-to-plant transfer under a natural release scenario where radionuclides are released into the subsoil. When considering the relative activity and root depth profiles within the soil column, much of the radioactivity was taken up into the crop from the subsoil compartment.
Further improvements were suggested to address the limitations of the Reference Biosphere model presented in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  11. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    PubMed

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among double-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Reference values of fractional excretion of exhaled nitric oxide among non-smokers and current smokers.

    PubMed

    Torén, Kjell; Murgia, Nicola; Schiöler, Linus; Bake, Björn; Olin, Anna-Carin

    2017-08-25

    Fractional exhaled nitric oxide (FENO) is used to assess airway inflammation, diagnose asthma, and monitor adherence to advised therapy. Reliable and accurate reference values for FENO are needed for both non-smoking and currently smoking adults in the clinical setting. The present study was performed to establish adult reference FENO values among never-smokers, former smokers and current smokers. FENO was measured in 5265 subjects aged 25-75 years in a general-population study, using a chemiluminescence (NIOX™) analyser according to the guidelines of the American Thoracic Society and the European Respiratory Society. Atopy was based on the presence of immunoglobulin E (IgE) antibodies to common inhalant allergens (measured using the Phadiatop® test). Spirometry without bronchodilation was performed, and forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1) and the ratio of FEV1 to FVC were obtained. After excluding subjects with asthma, chronic bronchitis, spirometric airway obstruction or a current cold, 3378 subjects remained. Equations for predicting FENO values were modelled using nonparametric regression models. FENO levels were similar in never-smokers and former smokers, and these two groups were therefore merged into a group termed "non-smokers". Reference equations, including the 5th and 95th percentiles, were generated for female and male non-smokers, based on age, height and atopy. Regression models for current smokers were unstable; hence, the proposed reference values for current smokers are based on the univariate distribution of FENO and fixed cut-off limits. Reference values for FENO among respiratory-healthy non-smokers should be stratified by gender using individual reference values. For current smokers, separate cut-off limits are proposed.
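    The fixed cut-off limits for current smokers come from the univariate distribution of FENO. A minimal sketch of such percentile-based limits (the function name is an assumption; the 5th/95th percentiles mirror the ones quoted above):

```python
import numpy as np

def reference_limits(values, lower=5.0, upper=95.0):
    """Univariate reference interval: lower and upper percentiles of the
    measured distribution, here defaulting to the 5th and 95th."""
    return (np.percentile(values, lower), np.percentile(values, upper))
```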

  13. OWL references in ORM conceptual modelling

    NASA Astrophysics Data System (ADS)

    Matula, Jiri; Belunek, Roman; Hunka, Frantisek

    2017-07-01

    Object Role Modelling methodology is the fact-based type of conceptual modelling. The aim of the paper is to emphasize a close connection to OWL documents and its possible mutual cooperation. The definition of entities or domain values is an indispensable part of the conceptual schema design procedure defined by the ORM methodology. Many of these entities are already defined in OWL documents. Therefore, it is not necessary to declare entities again, whereas it is possible to utilize references from OWL documents during modelling of information systems.

  14. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institution and from another clinic that did not provide patients for the training phase. Comparison of the automated plans was done against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application to clinical practice.

  15. Antecedents of Academic Emotions: Testing the Internal/External Frame of Reference Model for Academic Enjoyment

    ERIC Educational Resources Information Center

    Goetz, Thomas; Frenzel, Anne C.; Hall, Nathan C.; Pekrun, Reinhard

    2008-01-01

    The present study focused on students' academic enjoyment as predicted by achievement in multiple academic domains. Assumptions were based on Marsh's internal/external (I/E) frame of reference model and Pekrun's control-value theory of achievement emotions, and were tested in a sample of 1380 German students from grades 5 to 10. Students' academic…

  16. Mathematical Practice in Textbooks Analysis: Praxeological Reference Models, the Case of Proportion

    ERIC Educational Resources Information Center

    Wijayanti, Dyana; Winsløw, Carl

    2017-01-01

    We present a new method in textbook analysis, based on so-called praxeological reference models focused on specific content at task level. This method implies that the mathematical contents of a textbook (or textbook part) is analyzed in terms of the tasks and techniques which are exposed to or demanded from readers; this can then be interpreted…

  17. The 3D Reference Earth Model: Status and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Moulik, P.; Lekic, V.; Romanowicz, B. A.

    2017-12-01

    In the 20th century, seismologists constructed models of how average physical properties (e.g. density, rigidity, compressibility, anisotropy) vary with depth in the Earth's interior. These one-dimensional (1D) reference Earth models (e.g. PREM) have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, new datasets motivated more sophisticated efforts that yielded models of how properties vary both laterally and with depth in the Earth's interior. Though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and dataset upon which they are based have prevented the creation of 3D community reference models. As part of the REM-3D project, we are compiling and reconciling reference seismic datasets of body wave travel-time measurements, fundamental mode and overtone surface wave dispersion measurements, and normal mode frequencies and splitting functions. These reference datasets are being inverted for a long-wavelength, 3D reference Earth model that describes the robust long-wavelength features of mantle heterogeneity. As a community reference model with fully quantified uncertainties and tradeoffs and an associated publicly available dataset, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. Here, we summarize progress made in the construction of the reference long-period dataset and present a preliminary version of REM-3D in the upper mantle.
In order to determine the level of detail warranted for inclusion in REM-3D, we analyze the spectrum of discrepancies between models inverted with different subsets of the reference dataset. This procedure allows us to evaluate the extent of consistency in imaging heterogeneity at various depths and between spatial scales.

  18. Evaluation of different models to segregate Pelibuey and Katahdin ewes into resistant or susceptible to gastrointestinal nematodes.

    PubMed

    Palomo-Couoh, Jovanny Gaspar; Aguilar-Caballero, Armando Jacinto; Torres-Acosta, Juan Felipe de Jesús; Magaña-Monforte, Juan Gabriel

    2016-12-01

    This study evaluated four models based on the number of eggs per gram of faeces (EPG) to segregate Pelibuey and Katahdin ewes during the lactation period into resistant or susceptible to gastrointestinal nematodes (GIN) in tropical Mexico. Nine hundred and thirty EPG counts of Pelibuey ewes and 710 of Katahdin ewes were obtained during 10 weeks of lactation. Ewes were segregated into resistant, intermediate and susceptible using their individual EPG every week. The data of every ewe were then used to provide a reference classification, which included all the EPG values of each animal. Four models were then evaluated against this reference. Model 1 was based on the 10-week mean EPG count ± 2 SE. Models 2, 3 and 4 were based on the mean EPG count of 10, 5 and 2 weeks of lactation, respectively. The cut-off points for the segregation of ewes in those three models were the quartiles ≤Q1 (low elimination) and ≥Q3 (high elimination). In all the models evaluated, the ewes classified as resistant had lower EPG than intermediate and susceptible ewes (P < 0.001), while ewes classified as susceptible had higher EPG than intermediate and resistant ewes (P < 0.001). According to the Youden J test, the models presented concordance with the reference group (>70 %). Model 3 tended to show higher sensitivity and specificity against the reference data, but no difference was found with the other models. The present study showed that the phenotypic marker EPG might serve to identify and segregate populations of adult ewes during the lactation period. All models served to segregate Pelibuey and Katahdin ewes into resistant, intermediate and susceptible. Model 3 (mean of 5 weeks) could be used because it required less sampling effort without losing sensitivity or specificity in the segregation of animals, while model 4 (mean of 2 weeks) was the least labour-intensive.
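    The quartile cut-offs used by models 2-4 amount to a simple three-way classification. A sketch of that rule (the function name is an assumption; the ≤Q1 / ≥Q3 thresholds follow the record):

```python
import numpy as np

def segregate_ewes(mean_epg):
    """Classify ewes by mean faecal egg count (EPG) using quartile
    cut-offs: <= Q1 resistant, >= Q3 susceptible, otherwise intermediate."""
    epg = np.asarray(mean_epg, dtype=float)
    q1, q3 = np.percentile(epg, [25, 75])
    labels = []
    for v in epg:
        if v <= q1:
            labels.append("resistant")
        elif v >= q3:
            labels.append("susceptible")
        else:
            labels.append("intermediate")
    return labels
```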

  19. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  20. Expert Systems for Libraries at SCIL [Small Computers in Libraries]'88.

    ERIC Educational Resources Information Center

    Kochtanek, Thomas R.; And Others

    1988-01-01

    Six brief papers on expert systems for libraries cover (1) a knowledge-based approach to database design; (2) getting started in expert systems; (3) using public domain software to develop a business reference system; (4) a music cataloging inquiry system; (5) linguistic analysis of reference transactions; and (6) a model of a reference librarian.…

  1. In silico selection of expression reference genes with demonstrated stability in barley among a diverse set of tissues and cultivars

    USDA-ARS?s Scientific Manuscript database

    Premise of the study: Reference genes are selected based on the assumption of temporal and spatial expression stability and on their widespread use in model species. They are often used in new target species without validation, presumed as stable. For barley, reference gene validation is lacking, bu...

  2. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment for new portable devices: the comparability of hourly PM10 concentration series with reference station measurements using statistical methods. The article describes technical aspects of the new portable meters. The emphasis was placed on assessing the comparability of results using a stochastic and exploratory methodology. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; regularity should be compared in three complementary fields of statistical modelling: time, frequency and space. The proposal is based on modelling results from five annual measurement series of the new mobile devices and the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurement results with the reference.

  3. National geodetic satellite program, part 2

    NASA Technical Reports Server (NTRS)

    Schmid, H.

    1977-01-01

    Satellite geodesy and the creation of worldwide geodetic reference systems is discussed. The geometric description of the surface and the analytical description of the gravity field of the earth by means of worldwide reference systems, with the aid of satellite geodesy, are presented. A triangulation method based on photogrammetric principles is described in detail. Results are derived in the form of three dimensional models. These mathematical models represent the frame of reference into which one can fit the existing geodetic results from the various local datums, as well as future measurements.

  4. Primary Care-Based Memory Clinics: Expanding Capacity for Dementia Care.

    PubMed

    Lee, Linda; Hillier, Loretta M; Heckman, George; Gagnon, Micheline; Borrie, Michael J; Stolee, Paul; Harvey, David

    2014-09-01

    The implementation in Ontario of 15 primary-care-based interprofessional memory clinics represented a unique model of team-based case management aimed at increasing capacity for dementia care at the primary-care level. Each clinic tracked referrals; in a subset of clinics, charts were audited by geriatricians, clinic members were interviewed, and patients, caregivers, and referring physicians completed satisfaction surveys. Across all clinics, 582 patients were assessed, and 8.9 per cent were referred to a specialist. Patients and caregivers were very satisfied with the care received, as were referring family physicians, who reported increased capacity to manage dementia. Geriatricians' chart audits revealed a high level of agreement with diagnosis and management. This study demonstrated acceptability, feasibility, and preliminary effectiveness of the primary-care memory clinic model. Led by specially trained family physicians, it provided timely access to high-quality collaborative dementia care, impacting health service utilization by more-efficient use of scarce geriatric specialist resources.

  5. The assessment of the transformation of global tectonic plate models and the global terrestrial reference frames using the Velocity Decomposition Analysis

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; König, Rolf; Glaser, Susanne; Heinkelmann, Robert; Schuh, Harald; Flechtner, Frank; Nilsson, Tobias

    2016-04-01

    The aim of our study is to assess the classical Helmert similarity transformation using the Velocity Decomposition Analysis (VEDA). The VEDA is a new methodology developed by GFZ for assessing the temporal variation of reference frames. It is based on separating the velocities into two parts: the first is related to the choice of reference system (the so-called datum effect), and the second refers to the real deformation of the terrestrial points. The advantage of the VEDA is its ability to detect the relative biases and reference system effects between two different frames or two different realizations of the same frame, respectively. We apply the VEDA to the assessment of several modern tectonic plate models and recent global terrestrial reference frames.
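    The classical Helmert similarity transformation being assessed has seven parameters: three translations, a scale factor, and three rotations. A small-angle sketch of applying it to Cartesian coordinates (illustrative only; the function name and parameter units are assumptions):

```python
import numpy as np

def helmert_transform(xyz, tx, ty, tz, scale_ppb, rx, ry, rz):
    """7-parameter Helmert similarity transformation between terrestrial
    reference frames (small-angle approximation): translations in metres,
    scale in parts per billion, rotations in radians."""
    xyz = np.asarray(xyz, dtype=float)
    t = np.array([tx, ty, tz])
    s = scale_ppb * 1e-9
    # Skew-symmetric small-angle rotation matrix
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])
    return xyz + t + s * xyz + xyz @ R.T
```

Differentiating these parameters with respect to time gives the datum-related velocity part that the VEDA separates from real deformation.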

  6. Reference gene selection for quantitative gene expression studies during biological invasions: A test on multiple genes and tissues in a model ascidian Ciona savignyi.

    PubMed

    Huang, Xuena; Gao, Yangchun; Jiang, Bei; Zhou, Zunchun; Zhan, Aibin

    2016-01-15

    As invasive species have successfully colonized a wide range of dramatically different local environments, they offer a good opportunity to study interactions between species and rapidly changing environments. Gene expression represents one of the primary and crucial mechanisms for rapid adaptation to local environments. Here, we aim to select reference genes for quantitative gene expression analysis based on quantitative real-time PCR (qRT-PCR) for a model invasive ascidian, Ciona savignyi. We analyzed the stability of ten candidate reference genes in three tissues (siphon, pharynx and intestine) under two key environmental stresses in the marine realm (temperature and salinity) using three programs (geNorm, NormFinder and the delta Ct method). Our results demonstrated only minor differences in stability rankings among the three methods. The use of a different single reference gene might influence the data interpretation, whereas multiple reference genes could minimize possible errors. Therefore, reference gene combinations were recommended for different tissues: the optimal combination for the siphon was RPS15 and RPL17 under temperature stress, and RPL17, UBQ and TubA under salinity treatment; for the pharynx, TubB, TubA and RPL17 were the most stable genes under temperature stress, while TubB, TubA and UBQ were the best under salinity stress; for the intestine, UBQ, RPS15 and RPL17 were the most reliable reference genes under both treatments. Our results suggest the necessity of selecting and testing reference genes for different tissues under varying environmental stresses. The results obtained here are expected to help reveal mechanisms of gene expression-mediated invasion success using C. savignyi as a model species. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Towards generalised reference condition models for environmental assessment: a case study on rivers in Atlantic Canada.

    PubMed

    Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J

    2013-08-01

    Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.
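    The ecological quality ratio mentioned here is, in RIVPACS-style reference condition models, an observed-over-expected (O/E) taxon count. A minimal sketch of that ratio (the function name, threshold default and data shapes are assumptions for illustration):

```python
def quality_ratio(observed_taxa, expected_probs, threshold=0.5):
    """Ecological quality ratio (O/E): observed over expected number of taxa.

    `expected_probs` maps each taxon to its model-predicted probability of
    occurring at the site; only taxa with probability >= threshold
    contribute to the expected count.
    """
    common = {t: p for t, p in expected_probs.items() if p >= threshold}
    expected = sum(common.values())
    observed = sum(1 for t in observed_taxa if t in common)
    return observed / expected
```

A ratio near 1 indicates a site close to reference condition; values well below 1 suggest taxon loss relative to the model's expectation.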

  8. A reference model for model-based design of critical infrastructure protection systems

    NASA Astrophysics Data System (ADS)

    Shin, Young Don; Park, Cheol Young; Lee, Jae-Chon

    2015-05-01

    Today's war field environment is becoming more versatile as the activities of unconventional wars such as terrorist attacks and cyber-attacks have noticeably increased lately. The damage caused by such unconventional wars has also turned out to be serious, particularly if the targets are critical infrastructures constructed in support of banking and finance, transportation, power, information and communication, government, and so on. Critical infrastructures are usually interconnected to each other and thus are very vulnerable to attack. As such, ensuring the security of critical infrastructures is very important, and thus the concept of critical infrastructure protection (CIP) has emerged. Programs to realize CIP at the national level have taken the form of statutes in each country. On the other hand, each individual critical infrastructure also needs to be protected. The objective of this paper is to study an effort to do so, which can be called the CIP system (CIPS). There could be a variety of ways to design CIPSs. Instead of considering the design of each individual CIPS, a reference model-based approach is taken in this paper. The reference model represents the design of all the CIPSs that have many design elements in common. In addition, the development of the reference model is carried out using a variety of model diagrams. The modeling language used therein is the Systems Modeling Language (SysML), which was developed and is managed by the Object Management Group (OMG) and is a de facto standard. Using SysML, the structure and operational concept of the reference model are designed to fulfil the goal of CIPSs, resulting in block definition and activity diagrams. As a case study, the operational scenario of a nuclear power plant under terrorist attack is studied using the reference model. The effectiveness of the results is also analyzed using multiple analysis models.
It is thus expected that the approach taken here has some merits over the traditional design methodology of repeating requirements analysis and system design.

  9. Ozone reference models for the middle atmosphere (new CIRA)

    NASA Technical Reports Server (NTRS)

    Keating, G. M.; Pitts, M. C.; Young, D. F.

    1989-01-01

    Models of ozone vertical structure were generated that were based on multiple data sets from satellites. The very good absolute accuracy of the individual data sets allowed the data to be directly combined to generate these models. The data used for generation of these models are from some of the most recent satellite measurements over the period 1978 to 1983. A discussion is provided of validation and error analyses of these data sets. Also, inconsistencies in data sets brought about by temporal variations or other factors are indicated. The models cover the pressure range from 20 to 0.003 mb (25 to 90 km). The models for pressures less than 0.5 mb represent only the day side and are only provisional since there was limited longitudinal coverage at these levels. The models start near 25 km in accord with previous COSPAR International Reference Atmosphere (CIRA) models. Models are also provided of ozone mixing ratio as a function of height. The monthly standard deviation and interannual variations relative to zonal means are also provided. In addition to the models of monthly latitudinal variations in vertical structure based on satellite measurements, monthly models of total column ozone and its characteristic variability as a function of latitude based on four years of Nimbus 7 measurements, models of the relationship between vertical structure and total column ozone, and a midlatitude annual mean model are incorporated in this set of ozone reference atmospheres. Various systematic variations are discussed including the annual, semiannual, and quasibiennial oscillations, and diurnal, longitudinal, and response to solar activity variations.

  10. Global plate motion frames: Toward a unified model

    NASA Astrophysics Data System (ADS)

    Torsvik, Trond H.; Müller, R. Dietmar; van der Voo, Rob; Steinberger, Bernhard; Gaina, Carmen

    2008-09-01

    Plate tectonics constitutes our primary framework for understanding how the Earth works over geological timescales. High-resolution mapping of relative plate motions based on marine geophysical data has followed the discovery of geomagnetic reversals, mid-ocean ridges, transform faults, and seafloor spreading, cementing the plate tectonic paradigm. However, so-called "absolute plate motions," describing how the fragments of the outer shell of the Earth have moved relative to a reference system such as the Earth's mantle, are still poorly understood. Accurate absolute plate motion models are essential surface boundary conditions for mantle convection models as well as for understanding past ocean circulation and climate as continent-ocean distributions change with time. A fundamental problem with deciphering absolute plate motions is that the Earth's rotation axis and the averaged magnetic dipole axis are not necessarily fixed to the mantle reference system. Absolute plate motion models based on volcanic hot spot tracks are largely confined to the last 130 Ma and ideally would require knowledge about the motions within the convecting mantle. In contrast, models based on paleomagnetic data reflect plate motion relative to the magnetic dipole axis for most of Earth's history but cannot provide paleolongitudes because of the axial symmetry of the Earth's magnetic dipole field. We analyze four different reference frames (paleomagnetic, African fixed hot spot, African moving hot spot, and global moving hot spot), discuss their uncertainties, and develop a unifying approach for connecting a hot spot track system and a paleomagnetic absolute plate reference system into a "hybrid" model for the time period from the assembly of Pangea (˜320 Ma) to the present. 
For the last 100 Ma we use a moving hot spot reference frame that takes mantle convection into account, and we connect this to a pre-100 Ma global paleomagnetic frame adjusted 5° in longitude to smooth the reference frame transition. Using plate driving force arguments and the mapping of reconstructed large igneous provinces to core-mantle boundary topography, we argue that continental paleolongitudes can be constrained with reasonable confidence.

  11. Reference set design for relational modeling of fuzzy systems

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

One of the keys to the successful relational modeling of fuzzy systems is the proper design of fuzzy reference sets, as has been discussed throughout the literature. In the context of modeling a stochastic system, we analyze the problem numerically. First, we briefly describe the relational model and present the performance of the modeling in the most trivial case: triangle-shaped reference sets. Next, we present a known fuzzy reference set generator algorithm (FRSGA) which is based on the fuzzy c-means (Fc-M) clustering algorithm. In the second section of this chapter we improve the previous FRSGA by adding a constraint to the Fc-M algorithm (modified Fc-M, or MFc-M): two cluster centers are forced to coincide with the domain limits. This is needed to obtain properly shaped extreme linguistic reference values. We apply this algorithm to uniformly discretized domains of the variables involved. The fuzziness of the reference sets produced by both Fc-M and MFc-M is determined by a parameter, which in our experiments is modified iteratively. Each time, a new model is created and its performance analyzed. For certain algorithm parameter values, both of these algorithms have shortcomings. To eliminate the drawbacks of these two approaches, we develop a completely new reference set generator algorithm which we call Polyline. This algorithm and its performance are described in the last section. In all three cases, the modeling is performed for a variety of operators used in the inference engine and two defuzzification methods. Therefore, our results depend neither on the system model order nor on the experimental setup.
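The MFc-M constraint described in this abstract (pinning the extreme cluster centers to the domain limits so the outer linguistic reference values are properly shaped) can be sketched with a toy 1-D fuzzy c-means. This is illustrative only, not the authors' FRSGA; the fuzzifier `m`, iteration count, and pinning step are assumptions.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c, m=2.0, iters=100, pin_extremes=False):
    """Toy 1-D fuzzy c-means. With pin_extremes=True the first and last
    centers are forced onto the domain limits after each update, echoing
    the MFc-M constraint described in the abstract (details illustrative)."""
    rng = np.random.default_rng(0)
    centers = np.sort(rng.uniform(x.min(), x.max(), c))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))        # membership ~ d^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)  # memberships sum to 1 per point
        um = u ** m
        centers = np.sort((um * x[:, None]).sum(0) / um.sum(0))
        if pin_extremes:                   # MFc-M-style constraint
            centers[0], centers[-1] = x.min(), x.max()
    # final memberships for the (possibly pinned) centers
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)
    return centers, u

# uniformly discretized domain, as in the abstract
x = np.linspace(0.0, 10.0, 101)
centers, u = fuzzy_cmeans_1d(x, 5, pin_extremes=True)
```

The pinned centers give reference sets whose extreme linguistic values peak exactly at the domain boundaries, which plain Fc-M does not guarantee.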

  12. Physiologically-based kinetic modelling in risk assessment

    EPA Science Inventory

    The European Union Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM) hosted a two-day workshop with an aim to discuss the role and application of Physiologically Based Kinetic (PBK) models in regulatory decision making. The EURL ECVAM strategy document on Toxic...

  13. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
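The model-comparison step in this study, fitting candidate size-distribution reference models by non-linear regression and comparing relative standard errors (RSEs) of the fitted parameters, can be sketched with SciPy on synthetic diameters. The sample size, lognormal width, and the choice to fit the cumulative distribution directly are assumptions, not the protocol's specifications.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(42)
# synthetic area-equivalent diameters in nm (roughly matching RM8012's 27.6 nm)
diam = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=2000)

# empirical cumulative number-based distribution
ds = np.sort(diam)
cdf = np.arange(1, ds.size + 1) / ds.size

def lognormal_cdf(x, mu, sigma):
    # lognormal reference model in CDF form
    return norm.cdf((np.log(x) - mu) / sigma)

popt, pcov = curve_fit(lognormal_cdf, ds, cdf, p0=[np.log(25.0), 0.1])
se = np.sqrt(np.diag(pcov))       # standard errors of the fitted parameters
rse = se / np.abs(popt)           # relative standard error per parameter
```

As the abstract notes for the interlaboratory data, the RSE of the fitted width parameter tends to be larger than that of the fitted mean, since the width is the harder quantity to pin down.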

  14. Study of mobile satellite network based on GEO/LEO satellite constellation

    NASA Astrophysics Data System (ADS)

    Hu, Xiulin; Zeng, Yujiang; Wang, Ying; Wang, Xianhui

    2005-11-01

Mobile satellite networks with Inter-Satellite Links (ISLs), which consist of non-geostationary satellites, are characterized by variable network topology. This poses a great challenge to the design and management of mobile satellite networks. This paper analyzes the characteristics of mobile satellite networks, takes multimedia Quality of Service (QoS) as the chief objective, and presents a reference model based on a Geostationary Earth Orbit (GEO)/Low Earth Orbit (LEO) satellite constellation that is adapted to the design and management of mobile satellite networks. In the reference model, LEO satellites constitute a service subnet responsible for the access, transmission and switching of multimedia services for mobile users, while GEO satellites constitute a management subnet that centrally manages the service subnet. Additionally, the ground control centre performs overall monitoring and control via the management subnet. Compared with terrestrial networks, the above reference model physically separates the management subnet from the service subnet, which not only enhances the advantages of centralized management but also overcomes the low reliability of terrestrial networks. Routing in mobile satellite networks based on a GEO/LEO satellite constellation is also discussed in this paper.

  15. An inductance Fourier decomposition-based current-hysteresis control strategy for switched reluctance motors

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Qi, Ji; Jia, Meng

    2017-05-01

Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and wide speed range. However, one of the bottlenecks limiting further applications of SRMs is unfavorable torque ripple, and consequently noise and vibration, due to the unique doubly-salient structure and pulse-current-based power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
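The torque-to-current inversion implied here, deriving a reference current from a reference torque through a Fourier-series inductance model, can be sketched under a linear-magnetics assumption (T = ½ i² dL/dθ). The pole number and Fourier coefficients are illustrative, and the real IFD-CHC scheme must additionally handle saturation and phase overlap, which this toy ignores.

```python
import numpy as np

Nr = 8                    # rotor pole number (illustrative)
L0, L1 = 0.012, 0.004     # H; truncated Fourier coefficients of phase inductance

def inductance(theta):
    # one-harmonic Fourier-series model of phase inductance vs. rotor angle
    return L0 + L1 * np.cos(Nr * theta)

def dL_dtheta(theta):
    return -Nr * L1 * np.sin(Nr * theta)

def reference_current(T_ref, theta):
    """Invert T = 0.5 * i^2 * dL/dtheta where the phase can produce
    positive torque (dL/dtheta > 0); zero elsewhere."""
    g = dL_dtheta(theta)
    i_sq = np.where(g > 1e-6, 2.0 * T_ref / np.maximum(g, 1e-6), 0.0)
    return np.sqrt(i_sq)
```

A current-hysteresis controller would then regulate the phase current around `reference_current(T_ref, theta)` at each rotor position.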

  16. Assessing Graduate Attributes: Building a Criteria-Based Competency Model

    ERIC Educational Resources Information Center

    Ipperciel, Donald; ElAtia, Samira

    2014-01-01

    Graduate attributes (GAs) have become a necessary framework of reference for the 21st century competency-based model of higher education. However, the issue of evaluating and assessing GAs still remains unchartered territory. In this article, we present a criteria-based method of assessment that allows for an institution-wide comparison of the…

  17. Toward “optimal” integration of terrestrial biosphere models

    DOE PAGES

    Schwalm, Christopher R.; Huntzinger, Deborah N.; Fisher, Joshua B.; ...

    2015-06-10

Multimodel ensembles (MME) are commonplace in Earth system modeling. Here we perform MME integration using a 10-member ensemble of terrestrial biosphere models (TBMs) from the Multiscale synthesis and Terrestrial Model Intercomparison Project (MsTMIP). We contrast optimal (skill based for present-day carbon cycling) versus naive (one model-one vote) integration. MsTMIP optimal and naive mean land sink strength estimates (-1.16 versus -1.15 Pg C per annum respectively) are statistically indistinguishable. This holds also for grid cell values and extends to gross uptake, biomass, and net ecosystem productivity. TBM skill is similarly indistinguishable. The added complexity of skill-based integration does not materially change MME values. This suggests that carbon metabolism has predictability limits and/or that all models and references are misspecified. Finally, resolving this issue requires addressing specific uncertainty types (initial conditions, structure, and references) and a change in model development paradigms currently dominant in the TBM community.
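The optimal-versus-naive contrast can be illustrated with a toy skill-weighted ensemble mean. The inverse-squared-error weights and the synthetic ensemble spread below are illustrative stand-ins; MsTMIP's actual skill metric and benchmarks differ.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = -1.15                               # reference land sink, Pg C / yr
models = truth + rng.normal(0, 0.3, 10)     # toy 10-member TBM ensemble

# naive integration: one model-one vote
naive = models.mean()

# "optimal" integration: weight each model by inverse squared error
# against a present-day reference (here identical to the toy truth)
err = np.abs(models - truth) + 1e-9
w = 1.0 / err**2
optimal = np.average(models, weights=w)
```

In this toy setting, as in the MsTMIP result, the two integration strategies yield very similar ensemble means when no single model is strongly superior.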

  18. State-and-transition models for heterogeneous landscapes: A strategy for development and application

    USDA-ARS?s Scientific Manuscript database

    Interpretation of assessment and monitoring data requires information about reference conditions and ecological resilience. Reference conditions used as benchmarks can be specified via potential-based land classifications (e.g., ecological sites) that describe the plant communities potentially obser...

  19. Evaluation of Global Observations-Based Evapotranspiration Datasets and IPCC AR4 Simulations

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Seneviratne, S. I.; Jimenez, C.; Corti, T.; Hirschi, M.; Balsamo, G.; Ciais, P.; Dirmeyer, P.; Fisher, J. B.; Guo, Z.; hide

    2011-01-01

Quantification of global land evapotranspiration (ET) has long been associated with large uncertainties due to the lack of reference observations. Several recently developed products now provide the capacity to estimate ET at global scales. These products, partly based on observational data, include satellite-based products, land surface model (LSM) simulations, atmospheric reanalysis output, estimates based on empirical upscaling of eddy-covariance flux measurements, and atmospheric water balance datasets. The LandFlux-EVAL project aims to evaluate and compare these newly developed datasets. Additionally, an evaluation of IPCC AR4 global climate model (GCM) simulations is presented, providing an assessment of their capacity to reproduce flux behavior relative to the observations-based products. Though differently constrained with observations, the analyzed reference datasets display similar large-scale ET patterns. ET from the IPCC AR4 simulations was significantly smaller than that from the other products for India (up to 1 mm/d) and parts of eastern South America, and larger in the western USA, Australia and China. The inter-product variance is lower across the IPCC AR4 simulations than across the reference datasets in several regions, which indicates that uncertainties may be underestimated in the IPCC AR4 models due to shared biases of these simulations.

  20. MRAC Revisited: Guaranteed Performance with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmaje

    2010-01-01

This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that the approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known), not only asymptotically but also in the transient, by a proper selection of the error feedback gain. The method prevents the generation of high-frequency oscillations that are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
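The modification described, feeding the tracking error back into the reference model, can be sketched for a scalar uncertain plant dx/dt = a*x + u with a gradient adaptation law. All gains, the adaptation rate, and the Euler integration below are illustrative choices, not the paper's design.

```python
import numpy as np

a_true = 2.0          # unknown plant parameter (open loop unstable)
a_m = 2.0             # reference model pole
L = 10.0              # error feedback gain into the reference model
gamma = 50.0          # adaptation rate
dt, T = 1e-3, 20.0

x = xm = 0.0
k_hat = 0.0           # estimate of the ideal gain k* = a_true + a_m
r = 1.0               # constant reference command
for _ in range(int(T / dt)):
    e = x - xm
    u = -k_hat * x + a_m * r
    x += dt * (a_true * x + u)
    # modified reference model: standard dynamics plus error feedback L*e
    xm += dt * (-a_m * xm + a_m * r + L * e)
    k_hat += dt * (gamma * e * x)     # gradient adaptation law
```

The `L * e` term is what distinguishes this from conventional MRAC: the reference model bends toward the plant during transients, which damps the error dynamics even at large adaptation rates.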

  1. The 3D Reference Earth Model (REM-3D): Update and Outlook

    NASA Astrophysics Data System (ADS)

    Lekic, V.; Moulik, P.; Romanowicz, B. A.; Dziewonski, A. M.

    2016-12-01

Elastic properties of the Earth's interior (e.g. density, rigidity, compressibility, anisotropy) vary spatially due to changes in temperature, pressure, composition, and flow. In the 20th century, seismologists constructed reference models of how these quantities vary with depth, notably the PREM model of Dziewonski and Anderson (1981). These 1D reference earth models have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, more sophisticated efforts by seismologists have yielded several generations of models of how properties vary not only with depth, but also laterally. Yet, though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and datasets upon which they are based have prevented the creation of 3D community reference models. We propose to overcome these challenges by compiling, reconciling, and distributing a long period (>15 s) reference seismic dataset, from which we will construct a 3D seismic reference model (REM-3D) for the Earth's mantle, which will come in two flavors: a long wavelength smoothly parameterized model and a set of regional profiles. Here, we summarize progress made in the construction of the reference long period dataset, and present preliminary versions of REM-3D in order to illustrate the two flavors and their relative advantages and disadvantages. As a community reference model with fully quantified uncertainties and tradeoffs, REM-3D will facilitate Earth imaging studies, earthquake characterization, and inferences on temperature and composition in the deep interior, and will be of improved utility to emerging scientific endeavors, such as neutrino geoscience. 
In this presentation, we outline the outlook for setting up advisory community working groups and the community workshop that would assess progress, evaluate model and dataset performance, identify avenues for improvement, and recommend strategies for maximizing model adoption in and utility for the deep Earth community.

  2. On the role of modeling choices in estimation of cerebral aneurysm wall tension.

    PubMed

    Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L

    2012-11-15

To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IAs) and their effect on the stratification of subjects in a study population. Three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated using various approaches differing in the morphological detail utilized or the modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches focused on assessing the similarity in stratification of IAs within the population based on peak wall tension. The tension-based stratification of IAs deviated to some extent from that of the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm as captured by a single size measure was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on stratification of IAs. Differences in modeling choices made without patient-specificity in model parameters had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Reduction of irregular breathing artifacts in respiration-correlated CT images using a respiratory motion model.

    PubMed

    Hertanto, Agung; Zhang, Qinghui; Hu, Yu-Chi; Dzyubak, Oleksandr; Rimner, Andreas; Mageras, Gig S

    2012-06-01

    Respiration-correlated CT (RCCT) images produced with commonly used phase-based sorting of CT slices often exhibit discontinuity artifacts between CT slices, caused by cycle-to-cycle amplitude variations in respiration. Sorting based on the displacement of the respiratory signal yields slices at more consistent respiratory motion states and hence reduces artifacts, but missing image data (gaps) may occur. The authors report on the application of a respiratory motion model to produce an RCCT image set with reduced artifacts and without missing data. Input data consist of CT slices from a cine CT scan acquired while recording respiration by monitoring abdominal displacement. The model-based generation of RCCT images consists of four processing steps: (1) displacement-based sorting of CT slices to form volume images at 10 motion states over the cycle; (2) selection of a reference image without gaps and deformable registration between the reference image and each of the remaining images; (3) generation of the motion model by applying a principal component analysis to establish a relationship between displacement field and respiration signal at each motion state; (4) application of the motion model to deform the reference image into images at the 9 other motion states. Deformable image registration uses a modified fast free-form algorithm that excludes zero-intensity voxels, caused by missing data, from the image similarity term in the minimization function. In each iteration of the minimization, the displacement field in the gap regions is linearly interpolated from nearest neighbor nonzero intensity slices. Evaluation of the model-based RCCT examines three types of image sets: cine scans of a physical phantom programmed to move according to a patient respiratory signal, NURBS-based cardiac torso (NCAT) software phantom, and patient thoracic scans. 
Comparison in physical motion phantom shows that object distortion caused by variable motion amplitude in phase-based sorting is visibly reduced with model-based RCCT. Comparison of model-based RCCT to original NCAT images as ground truth shows best agreement at motion states whose displacement-sorted images have no missing slices, with mean and maximum discrepancies in lung of 1 and 3 mm, respectively. Larger discrepancies correlate with motion states having a larger number of missing slices in the displacement-sorted images. Artifacts in patient images at different motion states are also reduced. Comparison with displacement-sorted patient images as a ground truth shows that the model-based images closely reproduce the ground truth geometry at different motion states. Results in phantom and patient images indicate that the proposed method can produce RCCT image sets with reduced artifacts relative to phase-sorted images, without the gaps inherent in displacement-sorted images. The method requires a reference image at one motion state that has no missing data. Highly irregular breathing patterns can affect the method's performance, by introducing artifacts in the reference image (although reduced relative to phase-sorted images), or in decreased accuracy in the image prediction of motion states containing large regions of missing data. © 2012 American Association of Physicists in Medicine.
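Step (3) of the pipeline above, a principal component analysis linking the displacement field to the respiratory signal, can be sketched with a toy example. The sinusoidal surrogate signal, field dimensionality, and the linear signal-to-score map are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy displacement fields at 10 motion states (states x degrees of freedom)
n_states, n_dof = 10, 300
phase = np.linspace(0, 2 * np.pi, n_states, endpoint=False)
signal = np.column_stack([np.sin(phase), np.cos(phase)])  # respiration surrogate
basis = rng.normal(0, 1, (2, n_dof))
fields = signal @ basis + rng.normal(0, 0.01, (n_states, n_dof))

# PCA of the displacement fields via SVD of the centered data
mean = fields.mean(0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
scores = U[:, :2] * S[:2]                 # leading PC scores per motion state

# linear relationship between respiration signal and PC scores (least squares)
W, *_ = np.linalg.lstsq(signal, scores, rcond=None)
# model-predicted displacement fields at all motion states
recon = mean + (signal @ W) @ Vt[:2]
```

In the paper's use, the analogous model predicts the deformation that warps the gap-free reference image into the other motion states, given only the respiration signal.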

  4. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

We study a threshold-driven optimization methodology for automatically generating a treatment plan motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and an associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the penalty weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of penalty weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning become more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. Threshold-focused objective tuning should be explored as an alternative to conventional methods of updating penalty weights for DVH-guided treatment planning.
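The core idea, keeping penalty weights fixed and steering the optimizer through per-voxel dose thresholds taken from the reference, can be sketched with a toy influence matrix. The matrix, voxel count, and plain bound-constrained solve below are assumptions for illustration; they are not TORA's convex sub-problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (50, 8))     # toy dose-influence matrix (voxels x beamlets)
d_ref = np.full(50, 1.0)           # per-voxel reference dose (from a reference DVH)

def objective(x, t_over, t_under, w=1.0):
    """One-sided quadratic voxel penalties with fixed weight w:
    overdose above t_over and underdose below t_under are penalized."""
    d = A @ x
    over = np.maximum(d - t_over, 0.0)
    under = np.maximum(t_under - d, 0.0)
    return w * (over**2).sum() + w * (under**2).sum()

# threshold-driven planning: weights stay fixed; thresholds carry the
# reference DVH information into the objective
res = minimize(objective, np.zeros(8), args=(d_ref, d_ref),
               bounds=[(0, None)] * 8)
d_opt = A @ res.x
```

Iterating on `t_over`/`t_under` (rather than on `w`) is the threshold-driven loop the abstract describes.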

  5. Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool

    NASA Astrophysics Data System (ADS)

    Pernigotti, Denise; Belis, Claudio A.

    2017-04-01

Identification and quantification of the contribution of emission sources to a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment, or about the comparability between the results of studies using different approaches. DeltaSA (SA Delta) is a tool developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measurement of their performance. Source identification is performed by the tool by measuring the proximity of any user chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indices calculated by comparing participants' source contribution estimates and their time series with preloaded reference data. Those preloaded data refer to previous European SA intercomparison exercises: the first with real-world data (22 participants), the second with synthetic data (25 participants) and the last with real-world data, which was also extended to Chemical Transport Models (38 receptor models and 4 CTMs). The references used for the model performance criteria are 'true' values (predefined by JRC) for the synthetic exercise, while they are calculated as the ensemble average of the participants' results in the real-world intercomparisons. The candidates used for each source's ensemble reference calculation were selected among participants' results based on a number of consistency checks plus the similarity of their chemical profiles to the repository's measured data. The estimation of the ensemble reference uncertainty is crucial in order to evaluate users' performance against it. 
For this reason, a sensitivity analysis of different methods to estimate the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one where 'true' reference and ensemble reference contributions were both present. DeltaSA is now available on-line and will be presented, with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a certain source should be taken into account. The importance of the synthetic intercomparisons for catching common receptor-model biases will also be stressed.

  6. MicroCT-Based Skeletal Models for Use in Tomographic Voxel Phantoms for Radiological Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolch, Wesley

The University of Florida (UF) proposes to develop two high-resolution image-based skeletal dosimetry models for direct use by ICRP Committee 2's Task Group on Dose Calculation in their forthcoming Reference Voxel Male (RVM) and Reference Voxel Female (RVF) whole-body dosimetry phantoms. These two phantoms are CT-based, and thus do not have the image resolution to delineate and perform radiation transport modeling of the individual marrow cavities and bone trabeculae throughout their skeletal structures. Furthermore, new and innovative 3D microimaging techniques will now be required for the skeletal tissues following Committee 2's revision of the target tissues of relevance for radiogenic bone cancer induction. This target tissue had been defined in ICRP Publication 30 as a 10-μm cell layer on all bone surfaces of trabecular and cortical bone. The revised target tissue is now a 50-μm layer within the marrow cavities of trabecular bone only and is exclusive of the marrow adipocytes. Clearly, this new definition requires the use of 3D microimages of the trabecular architecture not available from past 2D optical studies of the adult skeleton. With our recent acquisition of two relatively young cadavers (males aged 18 and 40 years), we will develop a series of reference skeletal models that can be directly applied to (1) the new ICRP reference voxel male and female phantoms developed for the ICRP, and (2) pediatric phantoms developed to target the ICRP reference children. Dosimetry data to be developed will include absorbed fractions for internal beta and alpha-particle sources, as well as photon and neutron fluence-to-dose response functions for direct use in external dosimetry studies of the ICRP reference workers and members of the general public.

  7. Ozone reference models for the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Keating, G. M.; Pitts, M. C.; Young, D. F.

    1990-01-01

    Data on monthly latitudinal variations in middle-atmosphere vertical ozone profiles are presented, based on extensive Nimbus-7, AE-2, and SME satellite measurements from the period 1978-1982. The coverage of the data sets, the characteristics of the sensors, and the analysis techniques applied are described, and the results are compiled in tables and graphs. These ozone data are intended to supplement the models of the 1986 COSPAR International Reference Atmosphere.

  8. Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2013-12-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
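The Remove/Restore logic of steps (i)-(iii) can be sketched on a single synthetic station coordinate. The sinusoidal NTAL signal, noise level, and plain least-squares "frame" estimation below are illustrative stand-ins for the NCEP-derived model and the Kalman-filter combination used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 10, 1 / 52)               # weekly epochs over ~10 years
ntal = 3.0 * np.sin(2 * np.pi * t)         # mm, toy annual NTAL displacement
pos = 5.0 + 2.0 * t + ntal + rng.normal(0, 0.5, t.size)  # station position series

# (i) remove the NTAL model from the input position series
pos_removed = pos - ntal

# (ii) estimate a linear frame (here: least-squares position + velocity)
G = np.column_stack([np.ones_like(t), t])
x0, vel = np.linalg.lstsq(G, pos_removed, rcond=None)[0]

# (iii) restore a linear fit to the removed NTAL displacement
ntal_trend = np.polyfit(t, ntal, 1)        # [slope, intercept]
x0 += np.polyval(ntal_trend, 0.0)
vel += ntal_trend[0]
```

Because the toy NTAL signal is nearly periodic, its restored linear fit is small and the recovered position and velocity stay close to the underlying linear motion, which is the intended behavior of the Remove/Restore approach.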

  9. Computer Based Learning in Europe: A Bibliography.

    ERIC Educational Resources Information Center

    Rushby, N. J.

    This bibliography lists 172 references to papers on computer assisted learning (CAL) in European countries including the Soviet Union, Germany, Holland, Sweden, Yugoslavia, Austria, and Italy. The references which deal with such topics as teacher training, simulation, rural education, model construction, program evaluation, computer managed…

  10. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.

    2017-02-01

    For model benchmark studies, the accuracy of a model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled-beam shear wave inspections of fastener sites in aircraft structure. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significantly, asymmetry in real probes and the inherent sensitivity of near-field signals to subtle test conditions were found to pose a greater challenge to achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in the probe response, and ideally to avoid using reference signals from the near-field response. Suggested reference signals for angled-beam shear wave inspections include the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.

  11. Simplified risk assessment of noise induced hearing loss by means of 2 spreadsheet models.

    PubMed

    Lie, Arve; Engdahl, Bo; Tambs, Kristian

    2016-11-18

    The objective of this study has been to test 2 spreadsheet models that compare observed hearing loss with that expected for a Norwegian reference population. The prevalence rates of hearing outcomes under the Norwegian and the National Institute for Occupational Safety and Health (NIOSH) definitions were calculated by sex and age, for ages 20-64 years, for a screened (no occupational noise exposure) (N = 18 858) and an unscreened (N = 38 333) Norwegian reference population from the Nord-Trøndelag Hearing Loss Study (NTHLS). Based on these prevalence rates, 2 different spreadsheet models were constructed to compare the prevalence rates of various groups of workers with the expected rates. The spreadsheets were then tested on 10 occupational groups with varying degrees of hearing loss relative to the reference population. The hearing of office workers, train drivers, conductors and teachers differed little from the screened reference values under both the Norwegian and the NIOSH criteria. Construction workers, miners, farmers and military personnel had impaired hearing, and railway maintenance workers and bus drivers had mildly impaired hearing. The spreadsheet models give a valid assessment of hearing loss. Using spreadsheet models to compare hearing in occupational groups with that of a reference population is a simple and quick method. The results are in line with comparable hearing thresholds and allow for significance testing. The method is believed to be useful for occupational health services in assessing the risk of noise-induced hearing loss (NIHL) and the preventive potential in groups of noise-exposed workers. Int J Occup Med Environ Health 2016;29(6):991-999. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
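    The observed-versus-expected comparison amounts to indirect standardization: expected cases are computed by applying the reference population's age/sex-specific prevalence rates to the worker group's stratum counts. A minimal sketch, with hypothetical strata and rates:

```python
def expected_cases(group_counts, reference_rates):
    """Expected number with hearing loss if the group matched the
    reference population's age/sex-specific prevalence rates."""
    return sum(n * reference_rates[stratum]
               for stratum, n in group_counts.items())

def observed_vs_expected(observed, group_counts, reference_rates):
    """Ratio of observed to expected cases; > 1 suggests excess hearing loss."""
    return observed / expected_cases(group_counts, reference_rates)
```

For example, a group with 30 observed cases against 15 expected has a ratio of 2.0, pointing to a preventive potential worth investigating.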

  12. An approach for modeling thermal destruction of hazardous wastes in circulating fluidized bed incinerator.

    PubMed

    Patil, M P; Sonolikar, R L

    2008-10-01

    This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Euler-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using the general-purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.

  13. A One-System Theory Which is Not Propositional.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2009-04-01

    We argue that the propositional and link-based approaches to human contingency learning represent different levels of analysis because propositional reasoning requires a basis, which is plausibly provided by a link-based architecture. Moreover, in their attempt to compare two general classes of models (link-based and propositional), Mitchell et al. have referred to only two generic models and ignored the large variety of different models within each class.

  14. Modeling the Virtual Machine Launching Overhead under Fermicloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele; Wu, Hao; Ren, Shangping

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are utilized most effectively and efficiently and system performance is optimized. However, based on FermiCloud's operational data, the VM launching overhead is not constant: it varies with physical resource (CPU, memory, I/O device) utilization at the time a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.

  15. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background: Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods: We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results: We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices).
    Conclusions: Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737

  16. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). 
Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
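    The version-comparison check described above can be automated once each parallel version exports its projections. A minimal sketch, assuming each version's output is a name-to-value mapping and using the paper's +/-5% materiality threshold; all version and output names are invented:

```python
def flag_material_errors(versions, baseline, threshold=0.05):
    """Compare parallel model versions' outputs against a baseline version.

    versions: dict mapping version name -> {output_name: value}.
    Returns {version: {output_name: relative_difference}} for every output
    whose relative difference from the baseline exceeds the threshold.
    """
    ref = versions[baseline]
    flags = {}
    for name, outputs in versions.items():
        if name == baseline:
            continue
        diffs = {}
        for key, value in outputs.items():
            rel = (value - ref[key]) / ref[key]
            if abs(rel) > threshold:
                diffs[key] = rel  # material discrepancy: likely an error
        if diffs:
            flags[name] = diffs
    return flags
```

Concordant versions produce no flags; a version flagged on many outputs is a candidate for a cell-referencing error, mirroring the paper's finding that column/row references were the most error-prone approach.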

  17. Evidence-based ergonomics: a model and conceptual structure proposal.

    PubMed

    Silveira, Dierci Marcio

    2012-01-01

    In Human Factors and Ergonomics Science (HFES), it is difficult to identify the best approach to the workplace and systems design problems that need to be solved, and the question of how to solve the human factors and ergonomics problems that are identified has also been advocated as a transdisciplinary and multidisciplinary issue. This study proposes combining the theoretical approach of Sustainability Science, the taxonomy of the Human Factors and Ergonomics (HFE) discipline, and the framework of Evidence-Based Medicine, in an attempt to apply them to Human Factors and Ergonomics. Applications of ontologies are known in the fields of medical research and computer science. By scrutinizing the key requirements for structuring knowledge in HFES, a reference model was designed. First, the important requirements for HFES concept structuring, as regarded by Meister, were identified. Second, an evidence-based ergonomics framework was developed as a reference model composed of six levels based on these requirements. Third, a mapping tool using linguistic resources was devised to translate human work, systems environments and the complexities inherent in their hierarchical relationships, to support future development at Level 2 of the reference model and to meet the two major challenges for HFES: identifying what problems should be addressed in HFE as an autonomous science, and proposing solutions by integrating concepts and methods applied in HFES for those problems.

  18. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    PubMed

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrate that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in cases of spectral overlap and the absence or inaccessibility of reference materials.
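    The method rests on the Beer-Lambert-Bouguer additivity of component absorbances. The ICA resolution itself is beyond a short example, but the final quantification step, recovering concentrations once pure-component profiles are known, reduces to linear least squares. A sketch with hypothetical two-component spectra (normal equations, pure Python):

```python
def unmix_two_components(eps1, eps2, mixture):
    """Solve A(lambda) = c1*eps1(lambda) + c2*eps2(lambda) for (c1, c2)
    by least squares, relying on Beer-Lambert additivity of absorbances.

    eps1, eps2: pure-component absorptivity profiles sampled at the same
    wavelengths as the mixture spectrum.
    """
    # build the 2x2 normal equations sums
    s11 = sum(a * a for a in eps1)
    s12 = sum(a * b for a, b in zip(eps1, eps2))
    s22 = sum(b * b for b in eps2)
    t1 = sum(a * m for a, m in zip(eps1, mixture))
    t2 = sum(b * m for b, m in zip(eps2, mixture))
    det = s11 * s22 - s12 * s12
    c1 = (t1 * s22 - t2 * s12) / det
    c2 = (s11 * t2 - s12 * t1) / det
    return c1, c2
```

With noise-free synthetic data the recovered concentrations are exact; with real spectra the residual of the fit indicates how well the resolved profiles explain the mixture.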

  19. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements allowed inclusion only of individuals who identified themselves as Mexican Americans. This research constructs nonparametric confidence intervals for Hispanic American pulmonary function reference values. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage: they are model-free, yet share the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to those of normal theory intervals. Power and efficiency studies agree with previously published theoretical results.
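    The empirical-likelihood construction itself is not reproduced here, but the model-free spirit of the approach can be illustrated with plain empirical quantiles: a reference range read directly from the sample order statistics, with no distributional assumption. A sketch with made-up values:

```python
def empirical_reference_range(values, lower=0.025, upper=0.975):
    """Model-free reference range from sample quantiles
    (linear interpolation between order statistics)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        pos = q * (n - 1)
        i = int(pos)
        frac = pos - i
        if i + 1 < n:
            return xs[i] * (1 - frac) + xs[i + 1] * frac
        return xs[i]

    return quantile(lower), quantile(upper)
```

Unlike normal-theory limits (mean plus or minus 1.96 standard deviations), these limits cannot fall outside the observed data range and are robust to skewness in the spirometry measurements.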

  20. Hot, cold, and annual reference atmospheres for Edwards Air Force Base, California (1975 version)

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.

    1975-01-01

    Reference atmospheres pertaining to summer (hot), winter (cold), and mean annual conditions for Edwards Air Force Base, California, are presented from the surface to 90 km altitude (700 km for the annual model). Computed values of pressure, kinetic temperature, virtual temperature, and density, together with the percentage departures of these atmospheric parameters from the Edwards Reference Atmospheres, 1975 (ERA-75), are tabulated versus altitude in 250 m increments. Hydrostatic and gas law equations were used in conjunction with radiosonde and rocketsonde thermodynamic data in determining the vertical structure of these atmospheric models. The thermodynamic parameters were all subjected to a fifth-degree least-squares curve-fit procedure, and the resulting coefficients were incorporated into Univac 1108 computer subroutines so that any quantity may be recomputed at any desired altitude using these subroutines.

  1. A spatial reference frame model of Beijing based on spatial cognitive experiment

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhang, Jing; Liu, Yu

    2006-10-01

    Orientation relations are a very important part of spatial relations in GIS. People obtain orientation information through map reading and cognition of the surrounding environment, and then create a spatial reference frame. A city is a special kind of spatial environment, and a person with life experience has some spatial knowledge of the city where he or she lives. Based on this spatial knowledge of the city environment, people can position themselves, navigate, and correctly understand the meaning embodied in the environment. Beijing, as a real geographic space, has a very distinctive layout that can form a new kind of spatial reference frame. Based on the characteristics of the layout of Beijing, this paper introduces a new spatial reference frame of Beijing and uses two psychological experiments to validate its cognitive plausibility.

  2. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    NASA Astrophysics Data System (ADS)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  3. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    PubMed

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  4. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behavior when a large adaptive gain is used to achieve fast adaptation and reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
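    The flavor of model-reference adaptive control can be conveyed with a scalar example. The sketch below uses the standard Lyapunov-based gradient adaptive law, not the paper's optimal control modification; the plant, reference model, and gain values are all hypothetical:

```python
def simulate_mrac(a=1.0, b=1.0, am=-2.0, bm=2.0, gamma=2.0,
                  r=1.0, dt=0.001, steps=50_000):
    """Scalar model-reference adaptive control with a gradient adaptive law.

    Plant:           x'  = a*x + b*u        (a > 0: open-loop unstable)
    Reference model: xm' = am*xm + bm*r     (am < 0: stable, xm -> bm*r/|am|)
    Control:         u   = kx*x + kr*r, gains adapted to drive e = x - xm -> 0.
    """
    x = xm = kx = kr = 0.0
    for _ in range(steps):
        e = x - xm                      # tracking error
        u = kx * x + kr * r             # adaptive control law
        # forward-Euler integration of plant and reference model
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
        # gradient adaptive law; sign(b) = +1 assumed here
        kx += dt * (-gamma * e * x)
        kr += dt * (-gamma * e * r)
    return x - xm, x, xm
```

A larger `gamma` speeds up adaptation but drives the gains harder, which is exactly the high-gain trade-off the paper's optimal control modification is designed to soften.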

  5. Design and Parametric Study of the Magnetic Sensor for Position Detection in Linear Motor Based on Nonlinear Parametric Model Order Reduction

    PubMed Central

    Paul, Sarbajit; Chang, Junghwan

    2017-01-01

    This paper presents a design approach for a magnetic sensor module to detect mover position using proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered in developing the parametric study. The operation of the sensor module is based on the principle of detecting the airgap flux density distribution with a Hall effect IC. The design objective is therefore to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill these constraints, the specifications for the sensor module are obtained using the POD-DMD-based reduced model, which provides a platform to analyze a large number of design models quickly, with little computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises from mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for coupling reliability-based design optimization into the design process as a future extension. PMID:28671580

  6. Design and Parametric Study of the Magnetic Sensor for Position Detection in Linear Motor Based on Nonlinear Parametric model order reduction.

    PubMed

    Paul, Sarbajit; Chang, Junghwan

    2017-07-01

    This paper presents a design approach for a magnetic sensor module to detect mover position using proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered in developing the parametric study. The operation of the sensor module is based on the principle of detecting the airgap flux density distribution with a Hall effect IC. The design objective is therefore to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill these constraints, the specifications for the sensor module are obtained using the POD-DMD-based reduced model, which provides a platform to analyze a large number of design models quickly, with little computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises from mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for coupling reliability-based design optimization into the design process as a future extension.

  7. Environmental and cost life cycle assessment of disinfection options for municipal wastewater treatment

    EPA Science Inventory

    This document summarizes the data collection, analysis, and results for a base case wastewater treatment (WWT) plant reference model. The base case is modeled after the Metropolitan Sewer District of Greater Cincinnati (MSDGC) Mill Creek Plant. The plant has an activated sludge s...

  8. Modelling Cognitive Style in a Peer Help Network.

    ERIC Educational Resources Information Center

    Bull, Susan; McCalla, Gord

    2002-01-01

    Explains I-Help, a computer-based peer help network where students can ask and answer questions about assignments and courses based on the metaphor of a help desk. Highlights include cognitive style; user modeling in I-Help; matching helpers to helpees; and types of questions. (Contains 64 references.) (LRW)

  9. A Model of Statistics Performance Based on Achievement Goal Theory.

    ERIC Educational Resources Information Center

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  10. Multicomponent blood lipid analysis by means of near infrared spectroscopy, in geese.

    PubMed

    Bazar, George; Eles, Viktoria; Kovacs, Zoltan; Romvari, Robert; Szabo, Andras

    2016-08-01

    This study provides accurate near infrared (NIR) spectroscopic models for several laboratory-determined clinicochemical parameters (total lipid (5.57±1.95 g/l), triglyceride (2.59±1.36 mmol/l), total cholesterol (3.81±0.68 mmol/l) and high density lipoprotein (HDL) cholesterol (2.45±0.58 mmol/l)) of blood serum samples of fattened geese. To increase the performance of the multivariate chemometrics, samples deviating significantly from the regression models, implying laboratory error, were excluded from the final calibration datasets. Reference data of excluded samples having outlier spectra in principal component analysis (PCA) were not marked as false; samples deviating from the regression models but having non-outlier spectra in PCA were identified as having false reference constituent values. Based on the NIR selection methods, 5% of the reference measurement data were rated as doubtful. The achieved models reached R(2) of 0.864, 0.966, 0.850 and 0.793, and RMSE of 0.639 g/l, 0.232 mmol/l, 0.210 mmol/l and 0.241 mmol/l for total lipid, triglyceride, total cholesterol and HDL cholesterol, respectively, during independent validation. Classical analytical techniques focus on single constituents and often require chemicals, time-consuming measurements, and experienced technicians. The NIR technique provides a quick, cost-effective, non-hazardous alternative for analysis of several constituents based on a single spectrum of each sample, and it also offers the possibility of examining the laboratory reference data critically. Evaluation of reference data to identify and exclude falsely analyzed samples can provide warning feedback to the reference laboratory, especially for analyses where laboratory methods are not perfectly suited to the subject material and there is an increased chance of laboratory error. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Modeling moving systems with RELAP5-3D

    DOE PAGES

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occurs in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating craft during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.

  12. Whole transcriptome analysis using next-generation sequencing of model species Setaria viridis to support C4 photosynthesis research.

    PubMed

    Xu, Jiajia; Li, Yuanyuan; Ma, Xiuling; Ding, Jianfeng; Wang, Kai; Wang, Sisi; Tian, Ye; Zhang, Hui; Zhu, Xin-Guang

    2013-09-01

    Setaria viridis is an emerging model species for genetic studies of C4 photosynthesis, and many basic molecular resources need to be developed to support it. In this paper, we performed a comprehensive transcriptome analysis of multiple developmental stages and tissues of S. viridis using next-generation sequencing technologies. Sequencing of the transcriptome from multiple tissues across three developmental stages (seed germination, vegetative growth, and reproduction) yielded a total of 71 million single-end 100 bp reads. Reference-based assembly using the Setaria italica genome as a reference generated 42,754 transcripts; de novo assembly generated 60,751 transcripts. In addition, 9,576 and 7,056 potential simple sequence repeats (SSRs) covering the S. viridis genome were identified using the reference-based and the de novo assembled transcripts, respectively. The transcripts and SSRs identified in this study can be used for both reverse and forward genetic studies based on S. viridis.

  13. Genotype Imputation with Millions of Reference Samples

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2016-01-01

    We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515
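The linear-interpolation step described above can be illustrated with a rough sketch (not Beagle's actual implementation): dosages at ungenotyped positions are interpolated from the two flanking genotyped markers.

```python
def interpolate_dosages(pos_genotyped, dose_genotyped, pos_ungenotyped):
    """Linearly interpolate allele dosages at ungenotyped positions
    from the two flanking genotyped markers.  Illustrative sketch of
    the idea only, not Beagle v.4.1's implementation."""
    out = []
    for p in pos_ungenotyped:
        # find the flanking genotyped markers for position p
        for i in range(len(pos_genotyped) - 1):
            lo, hi = pos_genotyped[i], pos_genotyped[i + 1]
            if lo <= p <= hi:
                w = (p - lo) / (hi - lo)
                out.append((1 - w) * dose_genotyped[i] + w * dose_genotyped[i + 1])
                break
    return out
```

Restricting the probability model to genotyped markers and filling the rest this way is what keeps memory use proportional to the target markers rather than the full reference panel.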

  14. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  15. Toward “optimal” integration of terrestrial biosphere models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwalm, Christopher R.; Huntingzger, Deborah; Fisher, Joshua B.

    2015-06-10

    Multi-model ensembles (MME) are commonplace in Earth system modeling. Here we perform MME integration using a 10-member ensemble of terrestrial biosphere models (TBMs) from the Multi-scale synthesis and Terrestrial Model Intercomparison Project (MsTMIP). We contrast optimal (skill-based for present-day carbon cycling) versus naïve (“one model – one vote”) integration. MsTMIP optimal and naïve mean land sink strength estimates (–1.16 vs. –1.15 Pg C per annum respectively) are statistically indistinguishable. This holds also for grid cell values and extends to gross uptake, biomass, and net ecosystem productivity. TBM skill is similarly indistinguishable. The added complexity of skill-based integration does not materially change MME values. This suggests that carbon metabolism has predictability limits and/or that all models and references are misspecified. Resolving this issue requires addressing specific uncertainty types (initial conditions, structure, references) and a change in model development paradigms currently dominant in the TBM community.

  16. Using features of Arden Syntax with object-oriented medical data models for guideline modeling.

    PubMed

    Peleg, M; Ogunyemi, O; Tu, S; Boxwala, A A; Zeng, Q; Greenes, R A; Shortliffe, E H

    2001-01-01

    Computer-interpretable guidelines (CIGs) can deliver patient-specific decision support at the point of care. CIGs base their recommendations on eligibility and decision criteria that relate medical concepts to patient data. CIG models use expression languages for specifying these criteria, and define models for medical data to which the expressions can refer. In developing version 3 of the GuideLine Interchange Format (GLIF3), we used existing standards as the medical data model and expression language. We investigated the object-oriented HL7 Reference Information Model (RIM) as a default data model. We developed an expression language, called GEL, based on Arden Syntax's logic grammar. Together with other GLIF constructs, GEL reconciles incompatibilities between the data models of Arden Syntax and the HL7 RIM. These incompatibilities include Arden's lack of support for complex data types and time intervals, and the mismatch between Arden's single primary time and multiple time attributes of the HL7 RIM.

  17. Intonation in unaccompanied singing: accuracy, drift, and a model of reference pitch memory.

    PubMed

    Mauch, Matthias; Frieler, Klaus; Dixon, Simon

    2014-07-01

    This paper presents a study on intonation and intonation drift in unaccompanied singing, and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under three conditions (Normal, Masked, Imagined). Over the duration of a recording, ∼50 s, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience, or the presence of conditions tested. Furthermore, it is shown that neither a static intonation memory model nor a memoryless interval-based intonation model can account for the accuracy and drift behavior observed. The proposed causal model provides a better explanation as it treats the reference pitch as a changing latent variable.
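The cents scale used for these measurements, together with an illustrative drift estimator (not the paper's exact causal model), can be sketched as:

```python
import math
import statistics

def cents(freq, ref):
    """Pitch deviation of frequency `freq` from reference pitch `ref`,
    in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(freq / ref)

def intonation_drift(freqs, ref):
    """Illustrative drift estimate: median deviation of the final
    quarter of the notes minus that of the first quarter.  The paper's
    estimator, which treats the reference pitch as a changing latent
    variable, is more sophisticated than this sketch."""
    devs = [cents(f, ref) for f in freqs]
    k = max(1, len(devs) // 4)
    return statistics.median(devs[-k:]) - statistics.median(devs[:k])
```

For example, a singer who starts on pitch and ends about a quarter of a semitone sharp would show a drift of roughly 25 cents under this definition.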

  18. Reference Solutions for Benchmark Turbulent Flows in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.

    2016-01-01

    A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration, and a supersonic internal flow through a square duct. Reference solutions are computed for the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model, using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.

  19. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
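The ARKF idea of resisting measurement errors can be sketched in scalar form with a generic chi-square gate (the paper's filter is multivariate and more elaborate; all values here are illustrative):

```python
def arkf_update(x, P, z, H=1.0, R=1.0, chi2_gate=3.84):
    """One measurement update of a scalar adaptive robust Kalman
    filter: when the normalized innovation exceeds a chi-square gate,
    the measurement covariance R is inflated, down-weighting the
    suspect measurement.  A generic sketch, not the paper's ARKF."""
    innov = z - H * x
    S = H * P * H + R
    if innov * innov / S > chi2_gate:
        # inflate R so the normalized innovation sits exactly on the gate
        R = innov * innov / chi2_gate - H * P * H
        S = H * P * H + R
    K = P * H / S          # Kalman gain
    x = x + K * innov      # state update
    P = (1.0 - K * H) * P  # covariance update
    return x, P
```

A gross DVL outlier thus pulls the state estimate far less than it would under a standard Kalman update.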

  20. Differential item functioning magnitude and impact measures from item response theory models.

    PubMed

    Kleinman, Marjorie; Teresi, Jeanne A

    2016-01-01

    Measures of the magnitude and impact of differential item functioning (DIF), at the item and scale levels respectively, are presented and reviewed in this paper. Most measures are based on item response theory models. Magnitude refers to item-level effect sizes, whereas impact refers to differences between groups at the scale-score level. Reviewed are magnitude measures based on group differences in expected item scores and impact measures based on differences in expected scale scores. The similarities among these indices are demonstrated. Various software packages that provide magnitude and impact measures are described, and new software is presented that computes all of the available statistics conveniently in one program, with explanations of their relationships to one another.
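A minimal sketch of a magnitude measure of this kind, using a 2PL IRT model and an unsigned expected-item-score difference (illustrative only; the paper reviews several specific indices):

```python
import math

def expected_score_2pl(theta, a, b):
    """Expected item score (probability correct) under a 2PL IRT model
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def dif_magnitude(a_ref, b_ref, a_foc, b_foc, thetas):
    """Unsigned DIF magnitude: mean absolute difference between the
    reference- and focal-group expected item scores over an ability
    grid.  An illustrative effect size, not a specific published index."""
    return sum(abs(expected_score_2pl(t, a_ref, b_ref) -
                   expected_score_2pl(t, a_foc, b_foc))
               for t in thetas) / len(thetas)
```

Summing such item-level differences over the items of a scale gives the flavor of the scale-level impact measures discussed above.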

  1. A new class of compact high sensitive tiltmeter based on the UNISA folded pendulum mechanical architecture

    NASA Astrophysics Data System (ADS)

    Barone, Fabrizio; Giordano, Gerardo

    2018-02-01

    We present the Extended Folded Pendulum Model (EFPM), a model developed for a quantitative description of the dynamical behavior of a folded pendulum generically oriented in space. This model, based on the Tait-Bryan angular reference system, highlights the relationship between the folded pendulum's orientation in the gravitational field and its natural resonance frequency. The model, validated by tests performed with a monolithic UNISA folded pendulum, points to a new technique for implementing folded-pendulum-based tiltmeters.

  2. Hyperspectral face recognition using improved inter-channel alignment based on qualitative prediction models.

    PubMed

    Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki

    2016-11-28

    A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.

  3. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  4. A Revised Thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM Version 3.4)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Johnson, D. L.; James, B. F.

    1996-01-01

    This report describes the newly-revised model thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM, Version 3.4). It also provides descriptions of other changes made to the program since publication of the programmer's guide for Mars-GRAM Version 3.34. The original Mars-GRAM model thermosphere was based on the global-mean model of Stewart. The revised thermosphere is based largely on parameterizations derived from output data from the three-dimensional Mars Thermospheric Global Circulation Model (MTGCM). The new thermospheric model includes revised dependence on the 10.7 cm solar flux for the global means of exospheric temperature, temperature of the base of the thermosphere, and scale height for the thermospheric temperature variations, as well as revised dependence on orbital position for global mean height of the base of the thermosphere. Other features of the new thermospheric model are: (1) realistic variations of temperature and density with latitude and time of day, (2) more realistic wind magnitudes, based on improved estimates of horizontal pressure gradients, and (3) allowance for user-input adjustments to the model values for mean exospheric temperature and for height and temperature at the base of the thermosphere. Other new features of Mars-GRAM 3.4 include: (1) allowance for user-input values of climatic adjustment factors for temperature profiles from the surface to 75 km, and (2) a revised method for computing the sub-solar longitude position in the 'ORBIT' subroutine.

  5. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all segments are then calculated with the minimum traveltime tree algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method efficiently calculates complete and fine hypocentral loci of earthquakes in a complex model.
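The minimum traveltime tree at the core of this method is essentially a Dijkstra shortest-path computation over the gridded velocity model. A simplified sketch (uniform grid, first-order neighbors, cell-to-cell cost taken as the mean slowness of the two cells; all simplifications ours):

```python
import heapq

def min_traveltime_field(slowness, src):
    """Minimum-traveltime field from source cell `src` on a 2D grid of
    slowness values, via Dijkstra's algorithm.  A simplified sketch of
    a minimum traveltime tree, not the paper's implementation."""
    rows, cols = len(slowness), len(slowness[0])
    INF = float("inf")
    t = [[INF] * cols for _ in range(rows)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + 0.5 * (slowness[i][j] + slowness[ni][nj])
                if nd < t[ni][nj]:
                    t[ni][nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return t
```

Re-running this from each newly assigned initial point, as in step (2), traces one locus segment per connected low-residual region.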

  6. Evaluation of accuracy of complete-arch multiple-unit abutment-level dental implant impressions using different impression and splinting materials.

    PubMed

    Buzayan, Muaiyed; Baig, Mirza Rustum; Yunus, Norsiah

    2013-01-01

    This in vitro study evaluated the accuracy of multiple-unit dental implant casts obtained from splinted or nonsplinted direct impression techniques using various splinting materials by comparing the casts to the reference models. The effect of two different impression materials on the accuracy of the implant casts was also evaluated for abutment-level impressions. A reference model with six internal-connection implant replicas placed in the completely edentulous mandibular arch and connected to multi-base abutments was fabricated from heat-curing acrylic resin. Forty impressions of the reference model were made, 20 each with polyether (PE) and polyvinylsiloxane (PVS) impression materials using the open tray technique. The PE and PVS groups were further subdivided into four subgroups of five each on the basis of splinting type: no splinting, bite registration PE, bite registration addition silicone, or autopolymerizing acrylic resin. The positional accuracy of the implant replica heads was measured on the poured casts using a coordinate measuring machine to assess linear differences in interimplant distances in all three axes. The collected data (linear and three-dimensional [3D] displacement values) were compared with the measurements calculated on the reference resin model and analyzed with nonparametric tests (Kruskal-Wallis and Mann-Whitney). No significant differences were found between the various splinting groups for either PE or PVS impression material in terms of linear and 3D distortions. However, small but significant differences were found between the two impression materials (PVS, 91 μm; PE, 103 μm) in terms of 3D discrepancies, irrespective of the splinting technique employed. Casts obtained from both impression materials exhibited differences from the reference model. The impression material influenced impression inaccuracy more than the splinting material did for multiple-unit abutment-level impressions.
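The 3D discrepancy comparison can be sketched as follows (a simplified illustration of the arithmetic only; the study's positions were measured with a coordinate measuring machine and compared nonparametrically):

```python
import math

def distortion_3d(cast, reference):
    """Mean 3D discrepancy between corresponding implant-replica head
    positions on a test cast and the reference model, assuming both are
    already expressed in a common coordinate frame and in the same
    units (micrometres here, by assumption)."""
    deltas = [math.dist(c, r) for c, r in zip(cast, reference)]
    return sum(deltas) / len(deltas)
```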

  7. Contribution of the International Reference Ionosphere to the progress of the ionospheric representation

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter

    2017-04-01

    The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere and since 2014 it is also recognized as the ISO (International Standardization Organization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters combining ground and space measurements. This presentation reviews the steady progress in achieving a more and more accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where we are in need of more data to obtain a more solid and continuous data foundation in space and time. We will also take a look at still existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion we will provide an outlook at development activities that may result in significant future improvements of the accurate representation of the ionosphere in the IRI model.

  8. Periodic reference tracking control approach for smart material actuators with complex hysteretic characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyong; Hao, Lina; Song, Bo; Yang, Ruiguo; Cao, Ruimin; Cheng, Yu

    2016-10-01

    Micro/nano positioning technologies have been attractive for decades for their various applications in both industrial and scientific fields. The actuators employed in these technologies are typically smart material actuators, which possess inherent hysteresis that may cause systems to behave unexpectedly. Periodic reference tracking capability is fundamental for apparatuses such as the scanning probe microscope, which employs smart material actuators to generate periodic scanning motion. However, traditional controllers such as PID cannot guarantee accurate fast periodic scanning motion. To tackle this problem, and to allow practical implementation in digital devices, this paper proposes a novel control method named the discrete extended unparallel Prandtl-Ishlinskii model based internal model (d-EUPI-IM) control approach. To tackle modeling uncertainties, the robust d-EUPI-IM control approach is investigated, and the associated sufficient stabilizing conditions are derived. The advantages of the proposed controller are: it is designed and represented in discrete form, and thus practical for implementation in digital devices; the extended unparallel Prandtl-Ishlinskii model can precisely represent forward/inverse complex hysteretic characteristics, which reduces modeling uncertainties and benefits controller design; in addition, the internal model principle based control module can be utilized as a natural oscillator for tackling the periodic reference tracking problem. The proposed controller was verified through comparative experiments on a piezoelectric actuator platform, and convincing results have been achieved.

  9. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  10. NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas

    2016-04-01

    Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.

  11. Setting nutrient thresholds to support an ecological assessment based on nutrient enrichment, potential primary production and undesirable disturbance.

    PubMed

    Devlin, Michelle; Painting, Suzanne; Best, Mike

    2007-01-01

    The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients, taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including the distance of confidence limits from a reference threshold and how well the model is populated with real data. This evidence-based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.

  12. Optical properties of light absorbing carbon aggregates mixed with sulfate: assessment of different model geometries for climate forcing calculations.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin

    2012-04-23

    Light scattering by light absorbing carbon (LAC) aggregates encapsulated into sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, visible, and IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. Neither model matches the reference results sufficiently well. Remarkably, more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America

  13. The International Geomagnetic Reference Field, 2005

    USGS Publications Warehouse

    Rukstales, Kenneth S.; Love, Jeffrey J.

    2007-01-01

    This is a set of five world charts showing the declination, inclination, horizontal intensity, vertical component, and total intensity of the Earth's magnetic field at mean sea level at the beginning of 2005. The charts are based on the International Geomagnetic Reference Field (IGRF) main model for 2005 and secular change model for 2005-2010. The IGRF is referenced to the World Geodetic System 1984 ellipsoid. Additional information about the USGS geomagnetism program is available at: http://geomag.usgs.gov/

  14. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
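The chaining of sub-models can be illustrated with a deliberately simplified PVWatts-style calculation. The equations and parameter values below are generic placeholders for illustration, not NREL's documented sub-models:

```python
def pv_ac_output(poa_irradiance, t_cell, p_dc0,
                 gamma=-0.0047, inv_eff=0.96, derate=0.86):
    """Simplified chained PV performance estimate: DC power scales with
    plane-of-array irradiance (W/m^2) and a cell-temperature
    coefficient (per deg C from 25 C), then an overall system derate
    and inverter efficiency give AC power.  All parameter values here
    are illustrative assumptions, not NREL's documented defaults."""
    p_dc = p_dc0 * (poa_irradiance / 1000.0) * (1.0 + gamma * (t_cell - 25.0))
    return p_dc * derate * inv_eff

# e.g. a nominal 4 kW array at 800 W/m^2 and 35 C cell temperature:
# pv_ac_output(800.0, 35.0, 4000.0)
```

The point is the structure: each hidden assumption (derate, temperature coefficient, inverter efficiency) is a multiplicative sub-model that the technical reference documents individually.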

  15. A Distributed Simulation Software System for Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Burns, Richard; Davis, George; Cary, Everett

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  16. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator fault. Two new lemmas are proposed for both SISO and MIMO, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but results in tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Novel indexes based on network structure to indicate financial market

    NASA Astrophysics Data System (ADS)

    Zhong, Tao; Peng, Qinke; Wang, Xiao; Zhang, Jing

    2016-02-01

    There have been various efforts to understand and analyze the financial market using complex network models. However, current studies analyze the financial network model but seldom present quantified indexes to indicate or forecast the price action of the market. In this paper, the stock market is modeled as a dynamic network in which the vertices refer to listed companies and the edges to their rank-based correlations computed from price series. Characteristics of the network are analyzed, and novel indexes are then introduced into market analysis, calculated from maximum and fully-connected subnets. The indexes are compared with existing ones, and the results confirm that our indexes perform better at indicating the daily trend of the market composite index in advance. Via investment simulation, the performance of our indexes is analyzed in detail. The results indicate that the dynamic complex network model can not only serve as a structural description of the financial market but also work to predict the market and guide investment via the indexes.
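Building such a rank-correlation network can be sketched as follows (the threshold and the exact correlation construction here are illustrative, not the paper's; rank ties are ignored for simplicity):

```python
def rank(xs):
    """Ranks of the values in xs (no tie handling, for simplicity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rk, i in enumerate(order):
        r[i] = rk
    return r

def spearman(x, y):
    """Rank-based (Spearman) correlation between two price series."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    m = (n - 1) / 2.0  # mean rank
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var  # tie-free rank variances are equal

def build_edges(series, threshold=0.8):
    """Edges of the stock network: pairs of stocks whose rank
    correlation exceeds a threshold (threshold value is illustrative)."""
    names = list(series)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if abs(spearman(series[a], series[b])) >= threshold]
```

Recomputing the edge set over a sliding window of prices is what makes the network, and hence any subnet-based index, dynamic.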

  18. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    NASA Astrophysics Data System (ADS)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature, and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only those most influential for perceptual quality. For comparison, we apply feature selection to the complete feature sets and ridge regression to the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
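    The LASSO selection step can be sketched with a toy coordinate-descent implementation on synthetic data; the feature layout (two relevant features out of six) and the penalty value are assumptions for illustration, not the paper's feature set.

    ```python
    import numpy as np

    def lasso_cd(X, y, lam, n_iter=200):
        """Minimal LASSO via cyclic coordinate descent with soft-thresholding.
        Assumes the columns of X are centered and scaled to unit variance."""
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(n_iter):
            for j in range(p):
                # Partial residual excluding feature j's current contribution.
                resid = y - X @ beta + X[:, j] * beta[j]
                z = X[:, j] @ resid / n
                beta[j] = np.sign(z) * max(abs(z) - lam, 0.0)  # soft threshold
        return beta

    rng = np.random.default_rng(1)
    n, p = 200, 6
    X = rng.normal(size=(n, p))
    X = (X - X.mean(0)) / X.std(0)
    # Only the first two features drive "quality" in this toy setup.
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.1, n)
    beta = lasso_cd(X, y, lam=0.1)
    ```

    The L1 penalty drives the coefficients of the uninformative features to (near) zero, which is exactly the behavior the study exploits to pick a small influential subset.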

  19. Low energy stage study. Volume 2: Requirements and candidate propulsion modes. [orbital launching of shuttle payloads

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A payload mission model covering 129 launches was examined and compared against the space transportation system Shuttle standard orbit inclinations and a Shuttle launch site implementation schedule. Based on this examination and comparison, a set of six reference missions was defined in terms of spacecraft weight and velocity requirements to deliver the payload from a 296 km circular Shuttle standard orbit to the spacecraft's planned orbit. Payload characteristics and requirements representative of the model payloads included in the regime bounded by each of the six reference missions were determined. A set of launch cost envelopes was developed and defined, based on the characteristics of existing/planned Shuttle upper stages and expendable launch systems, in terms of launch cost and velocity delivered. These six reference missions were used to define the requirements for the candidate propulsion modes, which were developed and screened to determine the propulsion approaches for conceptual design.

  20. A Four-Stage Model for Planning Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Morrison, Gary R.; Ross, Steven M.

    1988-01-01

    Describes a flexible planning process for developing computer based instruction (CBI) in which the CBI design is implemented on paper between the lesson design and the program production. A four-stage model is explained, including (1) an initial flowchart, (2) storyboards, (3) a detailed flowchart, and (4) an evaluation. (16 references)…

  1. The Actualization of Literary Learning Model Based on Verbal-Linguistic Intelligence

    ERIC Educational Resources Information Center

    Hali, Nur Ihsan

    2017-01-01

    This article is inspired by Howard Gardner's concept of linguistic intelligence and also from some authors' previous writings. All of them became the authors' reference in developing ideas on constructing a literary learning model based on linguistic intelligence. The writing of this article is not done by collecting data empirically, but by…

  2. Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.

    ERIC Educational Resources Information Center

    Buchal, Ralph O.

    2001-01-01

    Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)

  3. A no-reference video quality assessment metric based on ROI

    NASA Astrophysics Data System (ADS)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. The Gaussian kernel function was used to extract human density maps of the H.264-coded videos from the subjective eye-tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all frames of the video, the metric proposed in this paper not only decreases the computation complexity but also improves the correlation between the subjective mean opinion score (MOS) and objective scores.
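    The spectral-residual component of the saliency model can be sketched as follows; this is a generic Hou-Zhang-style formulation applied to a random test image, and the paper's exact formulation and parameters may differ.

    ```python
    import numpy as np

    def spectral_residual_saliency(img, k=3):
        """Spectral-residual saliency map sketch: the residual of the log
        amplitude spectrum (after local averaging) is transformed back."""
        f = np.fft.fft2(img)
        log_amp = np.log1p(np.abs(f))   # log1p avoids log(0)
        phase = np.angle(f)
        # Local average of the log spectrum via a k x k box filter (wrap-around).
        avg = sum(
            np.roll(np.roll(log_amp, dy, 0), dx, 1)
            for dy in range(-(k // 2), k // 2 + 1)
            for dx in range(-(k // 2), k // 2 + 1)
        ) / (k * k)
        residual = log_amp - avg
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return sal / sal.max()          # normalize to [0, 1]

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))          # stand-in for a luminance frame
    sal = spectral_residual_saliency(img)
    ```

    In the full metric this map would be fused with the motion, color-opponent, and luminance-contrast cues before weighting the blurring and blocking scores.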

  4. Defining the Reference Condition for Wadeable Streams in the Sand Hills Subdivision of the Southeastern Plains Ecoregion, USA

    NASA Astrophysics Data System (ADS)

    Kosnicki, Ely; Sefick, Stephen A.; Paller, Michael H.; Jarrell, Miller S.; Prusha, Blair A.; Sterrett, Sean C.; Tuberville, Tracey D.; Feminella, Jack W.

    2014-09-01

    The Sand Hills subdivision of the Southeastern Plains ecoregion has been impacted by historical land uses over the past two centuries and, with the additive effects of contemporary land use, determining reference condition for streams in this region is a challenge. We identified reference condition based on the combined use of 3 independent selection methods. Method 1 involved use of a multivariate disturbance gradient derived from several stressors, method 2 was based on variation in channel morphology, and method 3 was based on passing 6 of 7 environmental criteria. Sites selected as reference from all 3 methods were considered primary reference, whereas those selected by 2 or 1 methods were considered secondary or tertiary reference, respectively. Sites not selected by any of the methods were considered non-reference. In addition, best professional judgment (BPJ) was used to exclude some sites from any reference class, and comparisons were made to examine the utility of BPJ. Non-metric multidimensional scaling indicated that use of BPJ may help designate non-reference sites when unidentified stressors are present. The macroinvertebrate community measures Ephemeroptera, Plecoptera, Trichoptera richness and North Carolina Biotic Index showed no differences between primary and secondary reference sites when BPJ was ignored. However, there was no significant difference among primary, secondary, and tertiary reference sites when BPJ was used. We underscore the importance of classifying reference conditions, especially in regions that have endured significant anthropogenic activity. We suggest that the use of secondary reference sites may enable construction of models that target a broader set of management interests.

  5. An Indoor Positioning Technique Based on a Feed-Forward Artificial Neural Network Using Levenberg-Marquardt Learning Method

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Gholami, A.; Azimi, S.

    2017-09-01

    This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of these reference points, were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% check and validation data, was computed to be approximately 2.20 meters.
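    The Levenberg-Marquardt update used for training can be illustrated on a toy curve-fitting problem, a two-parameter exponential model rather than the MLFF network itself; the damped normal-equation step is the same idea applied to a much smaller parameter vector.

    ```python
    import numpy as np

    def lm_fit(x, y, theta, n_iter=50, mu=1e-2):
        """Levenberg-Marquardt for the toy model y = a * exp(b * x)."""
        for _ in range(n_iter):
            a, b = theta
            r = a * np.exp(b * x) - y                       # residuals
            J = np.column_stack([np.exp(b * x),             # dr/da
                                 a * x * np.exp(b * x)])    # dr/db
            # Damped normal equations: (J^T J + mu*I) delta = -J^T r
            delta = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
            cand = theta + delta
            a2, b2 = cand
            if np.sum((a2 * np.exp(b2 * x) - y) ** 2) < np.sum(r ** 2):
                theta, mu = cand, mu * 0.5   # accept step, reduce damping
            else:
                mu *= 2.0                    # reject step, increase damping
        return theta

    rng = np.random.default_rng(3)
    x = np.linspace(0, 2, 40)
    y = 2.0 * np.exp(-1.0 * x) + rng.normal(0, 0.01, 40)   # synthetic data
    theta = lm_fit(x, y, np.array([1.0, 0.0]))
    ```

    The damping term `mu` interpolates between gradient descent (large `mu`) and Gauss-Newton (small `mu`), which is what makes LM attractive for fitting networks like the MLFF model here.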

  6. Process for computing geometric perturbations for probabilistic analysis

    DOEpatents

    Fitch, Simeon H. K. (Charlottesville, VA); Riha, David S. (San Antonio, TX); Thacker, Ben H. (San Antonio, TX)

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  7. An Example of Competence-Based Learning: Use of Maxima in Linear Algebra for Engineers

    ERIC Educational Resources Information Center

    Diaz, Ana; Garcia, Alfonsa; de la Villa, Agustin

    2011-01-01

    This paper analyses the role of Computer Algebra Systems (CAS) in a model of learning based on competences. The proposal is an e-learning Linear Algebra course model for Engineering, which includes the use of a CAS (Maxima) and focuses on problem solving. A reference model has been taken from the Spanish Open University. The proper use of CAS is…

  8. CD-SEM real time bias correction using reference metrology based modeling

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM. Both methods are too slow to be used for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding robust correlation between SEM waveform features and bias of CD-SEM, as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to a nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  9. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  10. Eastern Baltic region vs. Western Europe: modelling age related changes in the pubic symphysis and the auricular surface.

    PubMed

    Jatautis, Šarūnas; Jankauskas, Rimantas

    2018-02-01

    Objectives. The present study addresses the following two main questions: a) Is the pattern of skeletal ageing observed in well-known western European reference collections applicable to modern eastern Baltic populations, or are population-specific standards needed? b) What are the consequences for estimating the age-at-death distribution in the target population when differences in the estimates from reference data are not taken into account? Materials and methods. The dataset consists of a modern Lithuanian osteological reference collection, which is the only collection of this type in the eastern Baltic countries (n = 381); and two major western European reference collections, Coimbra (n = 264) and Spitalfields (n = 239). The age-related changes were evaluated using the scoring systems of Suchey-Brooks (Brooks & Suchey 1990) and Lovejoy et al. (1985), and were modelled via regression models for multinomial responses. A controlled experiment based on simulations and the Rostock Manifesto estimation protocol (Wood et al. 2002) was then carried out to assess the effect of using estimates from different reference samples and different regression models on estimates of the age-at-death distribution in the hypothetical target population. Results. The following key results were obtained in this study. a) The morphological alterations in the pubic symphysis were much faster among women than among men at comparable ages in all three reference samples. In contrast, we found no strong evidence in any of the reference samples that sex is an important factor to explain rate of changes in the auricular surface. b) The rate of ageing in the pubic symphysis seems to be similar across the three reference samples, but there is little evidence of a similar pattern in the auricular surface. That is, the estimated rate of age-related changes in the auricular surface was much faster in the LORC and the Coimbra samples than in the Spitalfields sample. 
c) The results of the simulations showed that differences in the estimates from the reference data result in noticeably different age-at-death distributions in the target population. Thus, a degree of bias may be expected if estimates from the western European reference data are used to collect information on ages at death in the eastern Baltic region based on the changes in the auricular surface. d) Moreover, the bias is expected to be more pronounced if the fitted regression model improperly describes the reference data. Conclusions. Differences in the timing of age-related changes in skeletal traits are to be expected among European reference samples, and cannot be ignored when seeking to reliably estimate an age-at-death distribution in the target population. This form of bias should be taken into consideration in further studies of skeletal samples from the eastern Baltic region.

  11. Defining Top-of-Atmosphere Flux Reference Level for Earth Radiation Budget Studies

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Kato, S.; Wielicki, B. A.

    2002-01-01

    To estimate the earth's radiation budget at the top of the atmosphere (TOA) from satellite-measured radiances, it is necessary to account for the finite geometry of the earth and recognize that the earth is a solid body surrounded by a translucent atmosphere of finite thickness that attenuates solar radiation differently at different heights. As a result, in order to account for all of the reflected solar and emitted thermal radiation from the planet by direct integration of satellite-measured radiances, the measurement viewing geometry must be defined at a reference level well above the earth's surface (e.g., 100 km). This ensures that all radiation contributions, including radiation escaping the planet along slant paths above the earth's tangent point, are accounted for. By using a field-of-view (FOV) reference level that is too low (such as the surface reference level), TOA fluxes for most scene types are systematically underestimated by 1-2 W/sq m. In addition, since TOA flux represents a flow of radiant energy per unit area, and varies with distance from the earth according to the inverse-square law, a reference level is also needed to define satellite-based TOA fluxes. From theoretical radiative transfer calculations using a model that accounts for spherical geometry, the optimal reference level for defining TOA fluxes in radiation budget studies for the earth is estimated to be approximately 20 km. At this reference level, there is no need to explicitly account for horizontal transmission of solar radiation through the atmosphere in the earth radiation budget calculation. In this context, therefore, the 20-km reference level corresponds to the effective radiative top of atmosphere for the planet. Although the optimal flux reference level depends slightly on scene type due to differences in effective transmission of solar radiation with cloud height, the difference in flux caused by neglecting the scene-type dependence is less than 0.1%.
If an inappropriate TOA flux reference level is used to define satellite TOA fluxes, and horizontal transmission of solar radiation through the planet is not accounted for in the radiation budget equation, systematic errors in net flux of up to 8 W/sq m can result. Since climate models generally use a plane-parallel model approximation to estimate TOA fluxes and the earth radiation budget, they implicitly assume zero horizontal transmission of solar radiation in the radiation budget equation, and do not need to specify a flux reference level. By defining satellite-based TOA flux estimates at a 20-km flux reference level, comparisons with plane-parallel climate model calculations are simplified since there is no need to explicitly correct plane-parallel climate model fluxes for horizontal transmission of solar radiation through a finite earth.
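    The inverse-square dependence of flux on the choice of reference level can be checked with a quick calculation; the Earth radius and the 240 W/sq m flux value are assumed for illustration.

    ```python
    # Inverse-square scaling of TOA flux between candidate reference levels.
    R_EARTH = 6371.0  # km, mean Earth radius (assumed value)

    def flux_at(reference_km, flux_surface, surface_km=0.0):
        """Re-reference a flux from one level to another using F ~ 1/r^2."""
        r1 = R_EARTH + surface_km
        r2 = R_EARTH + reference_km
        return flux_surface * (r1 / r2) ** 2

    # A 240 W/sq m flux defined at the surface, re-referenced to 20 and 100 km.
    f20 = flux_at(20.0, 240.0)    # ~238.5 W/sq m
    f100 = flux_at(100.0, 240.0)  # ~232.6 W/sq m
    ```

    The W/sq-m-scale shifts between reference levels are consistent with the 1-2 W/sq m underestimates and the up-to-8 W/sq m net-flux errors discussed in the abstract.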

  12. Structural model constructing for optical handwritten character recognition

    NASA Astrophysics Data System (ADS)

    Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.

    2017-02-01

    The article is devoted to the development of algorithms for optical handwritten character recognition based on the construction of structural models. The main advantage of these algorithms is the low number of reference images required. A one-pass approach to thinning of the binary character representation has been proposed. This approach is based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by the results of the experiments. The article includes a detailed description of the steps of the structural model construction algorithm. The proposed algorithm has been implemented in a character processing application and validated on the MNIST handwritten characters database. Algorithms suitable for cases with a limited number of reference images were used for the comparison.

  13. Resources for National Water Savings for Outdoor Water Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melody, Moya; Stratton, Hannah; Williams, Alison

    2014-05-01

    In support of efforts by the U.S. Environmental Protection Agency's (EPA's) WaterSense program to develop a spreadsheet model for calculating the national water and financial savings attributable to WaterSense certification and labeling of weather-based irrigation controllers, Lawrence Berkeley National Laboratory reviewed reports, technical data, and other information related to outdoor water use and irrigation controllers. In this document we categorize and describe the reviewed references, highlighting pertinent data. We relied on these references when developing model parameters and calculating controller savings. We grouped resources into three major categories: landscapes (section 1); irrigation devices (section 2); and analytical and modeling efforts (section 3). Each category is subdivided further as described in its section. References are listed in order of date of publication, most recent first.

  14. A reference model for space data system interconnection services

    NASA Astrophysics Data System (ADS)

    Pietras, John; Theis, Gerhard

    1993-03-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  15. A reference model for space data system interconnection services

    NASA Technical Reports Server (NTRS)

    Pietras, John; Theis, Gerhard

    1993-01-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  16. The Web as a Reference Tool: Comparisons with Traditional Sources.

    ERIC Educational Resources Information Center

    Janes, Joseph; McClure, Charles R.

    1999-01-01

    This preliminary study suggests that the same level of timeliness and accuracy can be obtained for answers to reference questions using resources in freely available World Wide Web sites as with traditional print-based resources. Discusses implications for library collection development, new models of consortia, training needs, and costing and…

  17. Control of Systems With Slow Actuators Using Time Scale Separation

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vehram; Nguyen, Nhan

    2009-01-01

    This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.

  18. Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses

    PubMed Central

    Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah

    2015-01-01

    Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481

  19. Space Shuttle propulsion performance reconstruction from flight data

    NASA Technical Reports Server (NTRS)

    Rogers, Robert M.

    1989-01-01

    The application of extended Kalman filtering to estimating Space Shuttle Solid Rocket Booster (SRB) performance (specific impulse) from flight data in a post-flight processing computer program is described. The flight data used include inertial platform acceleration, SRB head pressure, and ground-based radar tracking data. The key feature of this application is the model used for the SRBs, which represents a reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are included.

  20. A methodology and supply chain management inspired reference ontology for modeling healthcare teams.

    PubMed

    Kuziemsky, Craig E; Yazdi, Sara

    2011-01-01

    Numerous studies and strategic plans are advocating more team based healthcare delivery that is facilitated by information and communication technologies (ICTs). However before we can design ICTs to support teams we need a solid conceptual model of team processes and a methodology for using such a model in healthcare settings. This paper draws upon success in the supply chain management domain to develop a reference ontology of healthcare teams and a methodology for modeling teams to instantiate the ontology in specific settings. This research can help us understand how teams function and how we can design ICTs to support teams.

  1. Coupled local facilitation and global hydrologic inhibition drive landscape geometry in a patterned peatland

    NASA Astrophysics Data System (ADS)

    Acharya, S.; Kaplan, D. A.; Casey, S.; Cohen, M. J.; Jawitz, J. W.

    2015-05-01

    Self-organized landscape patterning can arise in response to multiple processes. Discriminating among alternative patterning mechanisms, particularly where experimental manipulations are untenable, requires process-based models. Previous modeling studies have attributed patterning in the Everglades (Florida, USA) to sediment redistribution and anisotropic soil hydraulic properties. In this work, we tested an alternate theory, the self-organizing-canal (SOC) hypothesis, by developing a cellular automata model that simulates pattern evolution via local positive feedbacks (i.e., facilitation) coupled with a global negative feedback based on hydrology. The model is forced by global hydroperiod that drives stochastic transitions between two patch types: ridge (higher elevation) and slough (lower elevation). We evaluated model performance using multiple criteria based on six statistical and geostatistical properties observed in reference portions of the Everglades landscape: patch density, patch anisotropy, semivariogram ranges, power-law scaling of ridge areas, perimeter area fractal dimension, and characteristic pattern wavelength. Model results showed strong statistical agreement with reference landscapes, but only when anisotropically acting local facilitation was coupled with hydrologic global feedback, for which several plausible mechanisms exist. Critically, the model correctly generated fractal landscapes that had no characteristic pattern wavelength, supporting the invocation of global rather than scale-specific negative feedbacks.

  2. Coupled local facilitation and global hydrologic inhibition drive landscape geometry in a patterned peatland

    NASA Astrophysics Data System (ADS)

    Acharya, S.; Kaplan, D. A.; Casey, S.; Cohen, M. J.; Jawitz, J. W.

    2015-01-01

    Self-organized landscape patterning can arise in response to multiple processes. Discriminating among alternative patterning mechanisms, particularly where experimental manipulations are untenable, requires process-based models. Previous modeling studies have attributed patterning in the Everglades (Florida, USA) to sediment redistribution and anisotropic soil hydraulic properties. In this work, we tested an alternate theory, the self-organizing canal (SOC) hypothesis, by developing a cellular automata model that simulates pattern evolution via local positive feedbacks (i.e., facilitation) coupled with a global negative feedback based on hydrology. The model is forced by global hydroperiod that drives stochastic transitions between two patch types: ridge (higher elevation) and slough (lower elevation). We evaluated model performance using multiple criteria based on six statistical and geostatistical properties observed in reference portions of the Everglades landscape: patch density, patch anisotropy, semivariogram ranges, power-law scaling of ridge areas, perimeter area fractal dimension, and characteristic pattern wavelength. Model results showed strong statistical agreement with reference landscapes, but only when anisotropically acting local facilitation was coupled with hydrologic global feedback, for which several plausible mechanisms exist. Critically, the model correctly generated fractal landscapes that had no characteristic pattern wavelength, supporting the invocation of global rather than scale-specific negative feedbacks.
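    The coupling of local facilitation with a global negative feedback can be sketched with a toy cellular automaton; all transition parameters below are illustrative choices, not the paper's calibrated values, and the anisotropy is simply a doubled neighbor weight along one axis.

    ```python
    import numpy as np

    # Toy CA: ridge neighbors promote ridge locally (facilitation), while a
    # global hydrologic feedback pushes total ridge cover toward a target.
    rng = np.random.default_rng(4)
    n, target, steps = 64, 0.5, 200
    grid = rng.random((n, n)) < 0.2       # True = ridge, False = slough

    for _ in range(steps):
        g = grid.astype(float)
        # Anisotropic local facilitation: neighbors along axis 1 count double.
        nbrs = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                + 2.0 * (np.roll(g, 1, 1) + np.roll(g, -1, 1)))
        local = nbrs / 6.0                # normalized to [0, 1]
        global_fb = target - grid.mean()  # >0 when ridge cover is too low
        p_ridge = np.clip(0.1 + 0.6 * local + 1.5 * global_fb, 0.0, 1.0)
        grid = rng.random((n, n)) < p_ridge

    ridge_fraction = grid.mean()
    ```

    The local term produces elongated clusters while the global term keeps total ridge cover bounded, which is the qualitative behavior the SOC hypothesis requires; the paper's evaluation criteria (semivariogram ranges, fractal dimension, etc.) would be computed on the resulting `grid`.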

  3. Sensor trustworthiness in uncertain time varying stochastic environments

    NASA Astrophysics Data System (ADS)

    Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan

    2011-06-01

    Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulus of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor or sensor network cannot be readily assumed, since sensors may be disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theoretic metric for determining sensor trustworthiness from the sensor data in an uncertain, time-varying stochastic environment. We show a determination of sensor-data trustworthiness using an adaptive stochastic reference sensor model that tracks sensor performance for the time-varying physical feature and provides a baseline against which the observed sensor output is compared and analyzed. We present an approach in which relative entropy is used for reference-model adaptation and for determining the divergence of the sensor signal from the estimated reference baseline. We show that KL-divergence is a useful metric that can be successfully applied to the detection of sensor failures or sensor malice of various types.
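    The relative-entropy test amounts to comparing the sensor's empirical output distribution against the reference model's distribution. A minimal discrete sketch follows; the fixed threshold is an illustrative assumption, since the paper adapts the reference model online rather than fixing a cutoff a priori.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q), in nats."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def sensor_trust(observed_counts, reference_probs, threshold=0.5):
    """Flag a sensor as untrustworthy when its empirical output histogram
    diverges from the reference model distribution (threshold illustrative)."""
    total = sum(observed_counts)
    p = [c / total for c in observed_counts]
    d = kl_divergence(p, reference_probs)
    return d, d <= threshold  # small divergence -> deemed trustworthy
```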

  4. Tracking of multiple targets using online learning for reference model adaptation.

    PubMed

    Pernkopf, Franz

    2008-12-01

    Recently, much work has been done on multiple-object tracking on the one hand and on reference-model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario, and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference-model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
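    The propagation of sample distributions over time follows the standard bootstrap particle filter cycle of predict, weight, and resample. The 1-D random-walk motion model and Gaussian likelihood below are illustrative stand-ins for the paper's appearance-based observation model.

```python
import math
import random

def particle_filter_step(particles, observation,
                         motion_std=1.0, obs_std=2.0, rng=random):
    """One predict-weight-resample cycle of a bootstrap particle filter (1-D toy)."""
    # Predict: propagate each particle through a random-walk motion model
    particles = [x + rng.gauss(0.0, motion_std) for x in particles]
    # Weight: Gaussian likelihood of the observation given each particle
    weights = [math.exp(-0.5 * ((observation - x) / obs_std) ** 2)
               for x in particles]
    s = sum(weights) or 1.0
    weights = [w / s for w in weights]
    # Resample: draw particles proportional to their weights
    return rng.choices(particles, weights=weights, k=len(particles))
```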

  5. A data storage and retrieval model for Louisiana traffic operations data : technical summary.

    DOT National Transportation Integrated Search

    1996-08-01

    The overall goal of this research study was to develop a prototype computer-based indexing model for traffic operation data in DOTD. The methodology included: 1) extraction of state road network, 2) development of geographic reference model, 3) engin...

  6. Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide

    NASA Astrophysics Data System (ADS)

    Justus, C. G.; James, B. F.

    1999-05-01

    Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses NASA Ames Global Circulation Model low resolution topography. Curvature corrections are applied to winds and limits based on speed of sound are applied. Altitude of the F1 ionization peak and density scale height, including effects of change of molecular weight with altitude are computed. A check is performed to disallow temperatures below CO2 sublimation. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.

  7. Users manual for a one-dimensional Lagrangian transport model

    USGS Publications Warehouse

    Schoellhamer, D.H.; Jobson, H.E.

    1986-01-01

    A Users Manual for the Lagrangian Transport Model (LTM) is presented. The LTM uses Lagrangian calculations that are based on a reference frame moving with the river flow. The Lagrangian reference frame eliminates the need to numerically solve the convective term of the convection-diffusion equation and provides significant numerical advantages over the more commonly used Eulerian reference frame. When properly applied, the LTM can simulate riverine transport and decay processes within the accuracy required by most water quality studies. The LTM is applicable to steady or unsteady one-dimensional unidirectional flows in fixed channels with tributary and lateral inflows. Application of the LTM is relatively simple and optional capabilities improve the model's convenience. Appendices give file formats and three example LTM applications that include the incorporation of the QUAL II water quality model's reaction kinetics into the LTM. (Author's abstract)
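    The Lagrangian idea, that the advection term vanishes in a frame moving with the flow, can be illustrated with parcels that simply translate while reacting. First-order decay stands in here for the QUAL II reaction kinetics, and all values are illustrative.

```python
import math

def lagrangian_transport(parcels, velocity, decay_rate, dt, steps):
    """Advance (position, concentration) parcels in a moving reference frame.

    In the Lagrangian frame each parcel moves with the flow while its
    concentration decays (first-order kinetics as a toy reaction term),
    so no convective term need be solved numerically.
    """
    for _ in range(steps):
        parcels = [(x + velocity * dt, c * math.exp(-decay_rate * dt))
                   for x, c in parcels]
    return parcels
```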

  8. Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, B. F.

    1999-01-01

    Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses NASA Ames Global Circulation Model low resolution topography. Curvature corrections are applied to winds and limits based on speed of sound are applied. Altitude of the F1 ionization peak and density scale height, including effects of change of molecular weight with altitude are computed. A check is performed to disallow temperatures below CO2 sublimation. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.

  9. The Reynolds-stress tensor in diffusion flames; An experimental and theoretical investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, F.; Janicka, J.

    1990-07-01

    The authors present measurements and predictions of Reynolds-stress components and mean velocities in a CH4-air diffusion flame. A reference beam LDA technique is applied for measuring all Reynolds-stress components. A hologram with dichromated gelatine as recording medium generates strictly coherent reference beams. The theoretical part describes a Reynolds-stress model based on Favre-averaged quantities, paying special attention to modeling the pressure-shear correlation and the dissipation equation in flames. Finally, measurement/prediction comparisons are presented.

  10. The Geolocation model for lunar-based Earth observation

    NASA Astrophysics Data System (ADS)

    Ding, Yixing; Liu, Guang; Ren, Yuanzhen; Ye, Hanlin; Guo, Huadong; Lv, Mingyang

    2016-07-01

    In recent years, people have become increasingly aware that the Earth needs to be treated as an entirety and, consequently, observed in a holistic, systematic and multi-scale view. However, the interaction mechanism between the Earth's inner layers and outer layers is still unclear. Therefore, we propose to observe the Earth's inner layers and outer layers simultaneously from the Moon, which may be helpful to studies in climatology, meteorology, seismology, etc. At present, the Moon has been proved to be an irreplaceable platform for observation of the Earth's outer layers. Meanwhile, some discussions have been made on lunar-based observation of the Earth's inner layers, but the geolocation model of lunar-based observation has not been specified yet. In this paper, we present a geolocation model based on transformation matrices. The model includes six coordinate systems: the telescope coordinate system, the lunar local coordinate system, the lunar-reference coordinate system, the selenocentric inertial coordinate system, the geocentric inertial coordinate system and the geo-reference coordinate system. The parameters, including the positions of the Sun, the Earth and the Moon, the libration, and the attitude of the Earth, can be acquired from the ephemeris. Given an elevation angle and an azimuth angle of the lunar-based telescope, this model links each image pixel uniquely to a ground point.

  11. Medical applications of model-based dynamic thermography

    NASA Astrophysics Data System (ADS)

    Nowakowski, Antoni; Kaczmarek, Mariusz; Ruminski, Jacek; Hryciuk, Marcin; Renkielska, Alicja; Grudzinski, Jacek; Siebert, Janusz; Jagielak, Dariusz; Rogowski, Jan; Roszak, Krzysztof; Stojek, Wojciech

    2001-03-01

    The proposal to use active thermography in medical diagnostics is promising in some applications concerning investigation of directly accessible parts of the human body. The combination of dynamic thermograms with thermal models of the investigated structures offers the attractive possibility of reconstructing internal structure based on the different thermal properties of biological tissues. Measurements of temperature distribution synchronized with external light excitation allow registration of dynamic changes of local temperature dependent on heat-exchange conditions. Preliminary results of active thermography applications in medicine are discussed. For skin and under-skin tissues an equivalent thermal model may be determined. For the assumed model, its effective parameters may be reconstructed based on the results of transient thermal processes. For known thermal diffusivity and conductivity of specific tissues, the local thickness of a two- or three-layer structure may be calculated. Results of some medical cases as well as reference data from an in vivo study on animals are presented. The method was also applied to evaluate the state of the human heart during open-chest cardio-surgical interventions. Reference studies of evoked heart infarct in pigs are reported, too. We see the proposed technique, new in medical applications, as a promising diagnostic tool. It is a fully non-invasive, clean, handy, fast and affordable method giving not only a qualitative view of the investigated surfaces but also an objective, quantitative measurement result, accurate enough for many applications including fast screening of affected tissues.

  12. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    NASA Astrophysics Data System (ADS)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between camera-objective visual servoing model and desired reference trajectory is introduced. Third, based on the neighbour rule and by using finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. Simulation example illustrates the effectiveness of our method.
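    The camera-objective kinematics rest on the standard pinhole projection. A minimal sketch follows; since the paper treats the camera parameters as unknown, the calibration values here are placeholders.

```python
def pinhole_project(point_xyz, focal_len, principal_point=(0.0, 0.0)):
    """Project a 3-D point in the camera frame to image coordinates.

    Standard pinhole model with the camera looking down the +z axis;
    focal length and principal point are placeholder calibration values.
    """
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    u = focal_len * x / z + principal_point[0]
    v = focal_len * y / z + principal_point[1]
    return u, v
```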

  13. Autoignition response of n-butanol and its blend with primary reference fuel constituents of gasoline.

    DOE PAGES

    Kumar, Kamal; Zhang, Yu; Sung, Chi-Jen; ...

    2015-04-13

    We study the influence of blending n-butanol on the ignition delay times of n-heptane and iso-octane, the primary reference fuels for gasoline. The ignition delay times are measured using a rapid compression machine, with an emphasis on low-to-intermediate temperature conditions. The experiments are conducted at equivalence ratios of 0.4 and 1.0, for a compressed pressure of 20 bar, with temperatures at the end of compression ranging from 613 K to 979 K. The effect of n-butanol addition on the development of the two-stage ignition characteristics of the two primary reference fuels is also examined. The experimental results are compared to predictions obtained using a detailed chemical kinetic mechanism, which has been obtained by a systematic merger of previously reported base models for the combustion of the individual fuel constituents. Finally, a sensitivity analysis on the base and merged models is performed to understand the dependence of autoignition delay times on the model parameters.

  14. State observer-based sliding mode control for semi-active hydro-pneumatic suspension

    NASA Astrophysics Data System (ADS)

    Ren, Hongbin; Chen, Sizhong; Zhao, Yuzhuang; Liu, Gang; Yang, Lin

    2016-02-01

    This paper proposes an improved virtual reference model for semi-active suspension to coordinate the vehicle ride comfort and handling stability. The reference model combines the virtues of sky-hook with ground-hook control logic, and the hybrid coefficient is tuned according to the longitudinal and lateral acceleration so as to improve the vehicle stability especially in high-speed condition. Suspension state observer based on unscented Kalman filter is designed. A sliding mode controller (SMC) is developed to track the states of the reference model. The stability of the SMC strategy is proven by means of Lyapunov function taking into account the nonlinear damper characteristics and sprung mass variation of the vehicle. Finally, the performance of the controller is demonstrated under three typical working conditions: the random road excitation, speed bump road and sharp acceleration and braking. The simulation results indicated that, compared with the traditional passive suspension, the proposed control algorithm can offer a better coordination between vehicle ride comfort and handling stability. This approach provides a viable alternative to costlier active suspension control systems for commercial vehicles.
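    The reference model blends sky-hook damping (acting on the sprung mass) with ground-hook damping (acting on the unsprung mass) through a hybrid coefficient. A minimal sketch with illustrative damping constants; in the paper the coefficient is tuned from longitudinal and lateral acceleration, which is abstracted away here.

```python
def hybrid_damping_force(sprung_vel, unsprung_vel, alpha,
                         c_sky=1500.0, c_ground=1200.0):
    """Semi-active damper force demand blending sky-hook and ground-hook logic.

    alpha in [0, 1] is the hybrid coefficient: 1 gives pure sky-hook
    (ride comfort), 0 gives pure ground-hook (road holding). Damping
    constants are illustrative placeholders.
    """
    f_sky = -c_sky * sprung_vel          # sky-hook: damp sprung-mass motion
    f_ground = -c_ground * unsprung_vel  # ground-hook: damp wheel-hop motion
    return alpha * f_sky + (1.0 - alpha) * f_ground
```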

  15. Genotype Imputation with Millions of Reference Samples.

    PubMed

    Browning, Brian L; Browning, Sharon R

    2016-01-07

    We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
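    The linear-interpolation device used to impute ungenotyped variants can be sketched simply: a dosage (or state probability) at an untyped position is interpolated between the two flanking genotyped markers. Positions and dosages below are illustrative values, not data from the paper.

```python
def interpolate_dosage(pos, left_pos, right_pos, left_dosage, right_dosage):
    """Linearly interpolate an allele dosage at an ungenotyped position
    between two flanking genotyped markers (the interpolation device
    described for Beagle v.4.1; values here are illustrative)."""
    if not left_pos <= pos <= right_pos:
        raise ValueError("position must lie between the flanking markers")
    if right_pos == left_pos:
        return left_dosage
    t = (pos - left_pos) / (right_pos - left_pos)
    return (1 - t) * left_dosage + t * right_dosage
```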

  16. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    PubMed

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

    As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; given the large number of calamine samples, a rapid identification method is therefore needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using the physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into comprehensive consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of samples into BP-ANN, a BP-ANN model of qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can thus serve as a NIR-based method in the process of BP-ANN modeling.
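    The MRCC step reduces each NIR spectrum to a vector of correlation coefficients against several reference spectra, which then serve as the BP-ANN input features. A plain Pearson correlation stands in here for the paper's exact recipe, and the spectra are toy vectors.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mrcc_features(sample_spectrum, reference_spectra):
    """Correlation of a sample NIR spectrum against each reference spectrum;
    the resulting vector is what would be fed to the BP-ANN classifier."""
    return [pearson(sample_spectrum, ref) for ref in reference_spectra]
```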

  17. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned-end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.

  18. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.

  19. Uncertainty Propagation of Non-Parametric-Derived Precipitation Estimates into Multi-Hydrologic Model Simulations

    NASA Astrophysics Data System (ADS)

    Bhuiyan, M. A. E.; Nikolopoulos, E. I.; Anagnostou, E. N.

    2017-12-01

    Quantifying the uncertainty of global precipitation datasets is beneficial when using these precipitation products in hydrological applications, because precipitation uncertainty propagation through hydrologic modeling can significantly affect the accuracy of the simulated hydrologic variables. In this research the Iberian Peninsula is used as the study area, with a study period spanning eleven years (2000-2010). This study evaluates the performance of multiple hydrologic models forced with combined global rainfall estimates derived using a Quantile Regression Forests (QRF) technique. The QRF technique utilizes three satellite precipitation products (CMORPH, PERSIANN, and 3B42 (V7)); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. A high-resolution precipitation dataset driven by ground-based observations (named SAFRAN), available at 5 km/1 h resolution, is used as reference. Through the QRF blending framework, the stochastic error model produces error-adjusted ensemble precipitation realizations, which are used to force four global hydrological models (JULES (Joint UK Land Environment Simulator), WaterGAP3 (Water-Global Assessment and Prognosis), ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) and SURFEX (Surface Externalisée)) to simulate three hydrologic variables (surface runoff, subsurface runoff and evapotranspiration). The models are also forced with the reference precipitation to generate reference-based hydrologic simulations. This study presents a comparative analysis of multiple hydrologic model simulations for different hydrologic variables and the impact of the blending algorithm on the simulated hydrologic variables. Results show how precipitation uncertainty propagates through the different hydrologic model structures and manifests as a reduction of error in the hydrologic variables.

  20. Mathematical model of biological order state or syndrome in traditional Chinese medicine: based on electromagnetic radiation within the human body.

    PubMed

    Han, Jinxiang; Huang, Jinzhao

    2012-03-01

    In this study, based on the resonator model and exciplex model of electromagnetic radiation within the human body, a mathematical model of the biological order state, also referred to as syndrome in traditional Chinese medicine, was established and expressed as Sy = v/ln(6I + 1). This model provides the theoretical foundation for experimental research addressing the order state of a living system, especially quantitative research on syndrome in traditional Chinese medicine.
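    Reading the logarithm in the abstract's formula as the natural logarithm ln, the model can be evaluated directly. Variable meanings follow the paper's resonator/exciplex framing, and the test values are purely illustrative.

```python
import math

def syndrome_order(v, intensity):
    """Evaluate Sy = v / ln(6*I + 1) from the abstract, with the logarithm
    taken as the natural log; v and I are supplied by the caller and the
    values used below are illustrative only."""
    return v / math.log(6 * intensity + 1)
```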

  1. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-10-01

    Projectiles with impact velocities in the range 700 m/s to 1000 m/s are modeled using SPH elements. Model validation runs with monolithic SiC tiles are conducted based on the DoP experiments described in reference... Keywords: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments, AutoDyn simulations, tile gap.

  2. A Personal Value-Based Model of College Students' Aptitudes and Expected Choice Behavior Regarding Retailing Careers.

    ERIC Educational Resources Information Center

    Shim, Soyeon; Warrington, Patti; Goldsberry, Ellen

    1999-01-01

    A study of 754 retail management students developed a value-based model of career attitude and expected choice behavior. Findings indicate that personal values had an influence on all aspects of retail career attitudes, which then had a direct effect on expected choice behavior. (Contains 55 references.) (Author/JOW)

  3. Design Heuristics for Authentic Simulation-Based Learning Games

    ERIC Educational Resources Information Center

    Ney, Muriel; Gonçalves, Celso; Balacheff, Nicolas

    2014-01-01

    Simulation games are games for learning based on a reference in the real world. We propose a model for authenticity in this context as a result of a compromise among learning, playing and realism. In the health game used to apply this model, students interact with characters in the game through phone messages, mail messages, SMS and video.…

  4. Understanding the Common Elements of Evidence-Based Practice: Misconceptions and Clinical Examples

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Becker, Kimberly D.; Daleiden, Eric L.

    2007-01-01

    In this article, the authors proposed a distillation and matching model (DMM) that describes how evidence-based treatment operations can be conceptualized at a lower order level of analysis than simply by their manuals. Also referred to as the "common elements" approach, this model demonstrates the feasibility of coding and identifying the…

  5. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    ERIC Educational Resources Information Center

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles…

  6. Estimating patient-specific and anatomically correct reference model for craniomaxillofacial deformity via sparse representation

    PubMed Central

    Wang, Li; Ren, Yi; Gao, Yaozong; Tang, Zhen; Chen, Ken-Chung; Li, Jianfu; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Xia, James J.; Shen, Dinggang

    2015-01-01

    Purpose: A significant number of patients suffer from craniomaxillofacial (CMF) deformity and require CMF surgery in the United States. The success of CMF surgery depends on not only the surgical techniques but also an accurate surgical planning. However, surgical planning for CMF surgery is challenging due to the absence of a patient-specific reference model. Currently, the outcome of the surgery is often subjective and highly dependent on surgeon’s experience. In this paper, the authors present an automatic method to estimate an anatomically correct reference shape of jaws for orthognathic surgery, a common type of CMF surgery. Methods: To estimate a patient-specific jaw reference model, the authors use a data-driven method based on sparse shape composition. Given a dictionary of normal subjects, the authors first use the sparse representation to represent the midface of a patient by the midfaces of the normal subjects in the dictionary. Then, the derived sparse coefficients are used to reconstruct a patient-specific reference jaw shape. Results: The authors have validated the proposed method on both synthetic and real patient data. Experimental results show that the authors’ method can effectively reconstruct the normal shape of jaw for patients. Conclusions: The authors have presented a novel method to automatically estimate a patient-specific reference model for the patient suffering from CMF deformity. PMID:26429255
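    Sparse shape composition represents the patient's midface as a sparse combination of normal-subject shapes from a dictionary. A greedy matching-pursuit sketch conveys the idea; the paper uses a sparse representation with additional constraints, and the dictionary and target below are toy vectors.

```python
def matching_pursuit(target, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the dictionary atom that best
    explains the residual. A toy stand-in for the sparse shape composition
    used to build the patient-specific reference jaw shape."""
    residual = list(target)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_atoms):
        best, best_score = None, 0.0
        for k, atom in enumerate(dictionary):
            norm2 = sum(a * a for a in atom)
            score = sum(r * a for r, a in zip(residual, atom)) / norm2
            if abs(score) > abs(best_score):
                best, best_score = k, score
        if best is None:
            break  # residual already fully explained
        coeffs[best] += best_score
        residual = [r - best_score * a for r, a in zip(residual, dictionary[best])]
    return coeffs, residual
```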

  7. Introduction to the special issue: parsimony and redundancy in models of language.

    PubMed

    Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian

    2013-09-01

    One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;

  8. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
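    The statistic Dk, defined in the abstract as the average self-relationship minus the average over all (self- and cross-) relationships, is simple to compute from a relationship matrix. The matrices below are toy examples, not data from the paper.

```python
def dk_statistic(K):
    """Dk = mean self-relationship (diagonal) minus mean of all entries
    of a square relationship matrix K, used to refer a genetic variance
    estimate to a chosen reference population."""
    n = len(K)
    mean_diag = sum(K[i][i] for i in range(n)) / n
    mean_all = sum(sum(row) for row in K) / (n * n)
    return mean_diag - mean_all

def referred_variance(sigma2, K):
    """Expected genetic variance of the reference population: the estimated
    variance component scaled by Dk."""
    return sigma2 * dk_statistic(K)
```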

  9. The Dairy Greenhouse Gas Emission Model: Reference Manual

    USDA-ARS?s Scientific Manuscript database

    The Dairy Greenhouse Gas Model (DairyGHG) is a software tool for estimating the greenhouse gas emissions and carbon footprint of dairy production systems. A relatively simple process-based model is used to predict the primary greenhouse gas emissions, which include the net emission of carbon dioxide...

  10. Estimating regional centile curves from mixed data sources and countries.

    PubMed

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in generalized additive models for location, scale and shape (GAMLSS), through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method, we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
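    The second step of the method, a population-weighted finite mixture of country distributions, can be sketched directly: the regional CDF is the weighted sum of the country CDFs, and a regional centile is obtained by inverting that mixture CDF numerically. The sketch below substitutes normal distributions for the Box-Cox t curves the paper actually fits, purely for illustration; the country parameters and population sizes are hypothetical.

```python
from math import erf, sqrt

# Three countries' weight-for-age distributions at one fixed age, approximated
# as normals (the paper uses Box-Cox t / Box-Cox power exponential via GAMLSS).
means = [11.0, 12.0, 13.0]          # kg, hypothetical
sds = [1.5, 1.2, 1.8]
pops = [5e6, 20e6, 10e6]            # hypothetical population sizes
weights = [p / sum(pops) for p in pops]

def norm_cdf(x, mu, sd):
    # standard-normal CDF expressed via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

def mixture_cdf(x):
    # regional reference = population-weighted mixture of country CDFs
    return sum(w * norm_cdf(x, m, s) for w, m, s in zip(weights, means, sds))

def mixture_centile(p, lo=0.0, hi=30.0, tol=1e-8):
    # invert the mixture CDF by bisection to get the regional centile
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

regional_median = mixture_centile(0.5)
```

Any country-level distribution with a computable CDF can be dropped into `norm_cdf`'s place without changing the mixture or inversion logic.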

  11. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
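    The feedforward building block the method relies on, the transformation method of fuzzy arithmetic, can be sketched in a few lines: each fuzzy parameter is discretized into alpha-cut intervals, the model is evaluated at the corners of those intervals, and the output interval at each alpha level is taken as the min/max over the corner evaluations (a reduced-transformation-method style sketch; the model and numbers below are illustrative, not the paper's piping system).

```python
import itertools

def alpha_cut(peak, spread, alpha):
    # alpha-cut interval of a symmetric triangular fuzzy number
    half = spread * (1.0 - alpha)
    return (peak - half, peak + half)

def model(k, c):
    # hypothetical crisp model standing in for the FE simulation
    return 1.0 / (k + c**2)

def propagate(params, alphas):
    # params: list of (peak, spread) fuzzy parameters;
    # returns the output interval at each alpha level
    out = {}
    for a in alphas:
        cuts = [alpha_cut(p, s, a) for p, s in params]
        vals = [model(*corner) for corner in itertools.product(*cuts)]
        out[a] = (min(vals), max(vals))
    return out

intervals = propagate([(2.0, 0.5), (1.0, 0.2)], alphas=[0.0, 0.5, 1.0])
```

The inverse identification in the paper then adjusts the fuzzy input parameters so that the propagated output intervals cover the reference output; note the corner evaluation is exact for monotonic models but only approximate otherwise, which is one reason the paper's feasibility criterion is needed.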

  12. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method is one based on generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
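    The regression route (gPCE-R) is easy to sketch for a toy two-parameter model: sample the model, fit the coefficients of an orthonormal polynomial basis by least squares, and read main-effect Sobol indices straight off the squared coefficients. The model and sample size below are illustrative stand-ins for the pulse wave propagation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # hypothetical model with two uniform inputs on [-1, 1]
    return 1.0 + 2.0 * x1 + 0.5 * x2 + 0.25 * x1 * x2

def psi(n, x):
    # orthonormal Legendre polynomial for uniform inputs on [-1, 1]
    return np.sqrt(2 * n + 1) * np.polynomial.legendre.Legendre.basis(n)(x)

# Tensor multi-indices up to total order 2 for two parameters
index_set = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

# gPCE-R: sample the model and solve for the coefficients by least squares
N = 200
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = model(X[:, 0], X[:, 1])
A = np.column_stack([psi(i, X[:, 0]) * psi(j, X[:, 1]) for i, j in index_set])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# With an orthonormal basis, the total variance is the sum of squared
# coefficients over all non-constant terms; main-effect Sobol indices keep
# only the terms that involve a single parameter.
var_total = sum(c**2 for c, idx in zip(coeffs, index_set) if idx != (0, 0))
s1 = sum(c**2 for c, (i, j) in zip(coeffs, index_set) if i > 0 and j == 0) / var_total
s2 = sum(c**2 for c, (i, j) in zip(coeffs, index_set) if j > 0 and i == 0) / var_total
```

For parameter fixing one would also compute total-effect indices (here, adding the interaction term's contribution to each parameter); parameters whose total index is negligible are candidates for fixing at nominal values.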

  13. PARRoT- a homology-based strategy to quantify and compare RNA-sequencing from non-model organisms.

    PubMed

    Gan, Ruei-Chi; Chen, Ting-Wen; Wu, Timothy H; Huang, Po-Jung; Lee, Chi-Ching; Yeh, Yuan-Ming; Chiu, Cheng-Hsun; Huang, Hsien-Da; Tang, Petrus

    2016-12-22

    Next-generation sequencing promises the de novo genomic and transcriptomic analysis of samples of interest. However, only a few organisms have reference genomic sequences, and even fewer have well-defined or curated annotations. For transcriptome studies focusing on organisms lacking proper reference genomes, the common strategy is de novo assembly followed by functional annotation. However, things become even more complicated when multiple transcriptomes are compared. Here, we propose a new analysis strategy and expression quantification methods which not only generate a virtual reference from sequencing data, but also provide comparisons between transcriptomes. First, all reads from the transcriptome datasets are pooled together for de novo assembly. The assembled contigs are searched against the NCBI NR database to find potential homolog sequences. Based on the search results, a set of virtual transcripts is generated and serves as a reference transcriptome. By using the same reference, normalized quantification values including RC (read counts), eRPKM (estimated RPKM) and eTPM (estimated TPM) can be obtained that are comparable across transcriptome datasets. In order to demonstrate the feasibility of our strategy, we implement it in the web service PARRoT. PARRoT stands for Pipeline for Analyzing RNA Reads of Transcriptomes. It analyzes gene expression profiles for two transcriptome sequencing datasets. To aid interpretation of comparisons among transcriptomes, PARRoT further provides linkage between these virtual transcripts and their potential function by showing best hits in the SwissProt and NR databases and assigning GO terms. Our demo datasets showed that PARRoT can analyze two paired-end transcriptomic datasets of approximately 100 million reads within just three hours.
In this study, we proposed and implemented a strategy to analyze transcriptomes from non-reference organisms which offers the opportunity to quantify and compare transcriptome profiles through a homolog-based virtual transcriptome reference. By using the homolog-based reference, our strategy effectively avoids problems that may arise from inconsistencies among transcriptomes. This strategy will shed light on the field of comparative genomics for non-model organisms. We have implemented PARRoT as a web service which is freely available at http://parrot.cgu.edu.tw .
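    The standard definitions behind RPKM and TPM (of which PARRoT's eRPKM and eTPM are estimated variants against the virtual reference) are short enough to sketch directly; the counts and transcript lengths below are toy numbers.

```python
# Hypothetical read counts against a virtual reference with known lengths.
counts = {"tA": 100, "tB": 400, "tC": 500}     # mapped reads per transcript
lengths = {"tA": 1000, "tB": 2000, "tC": 500}  # transcript length in bases

total_reads = sum(counts.values())

# RPKM: reads per kilobase of transcript per million mapped reads
rpkm = {t: counts[t] / (lengths[t] / 1e3) / (total_reads / 1e6)
        for t in counts}

# TPM: length-corrected read rates rescaled to sum to one million;
# unlike RPKM, TPM values are directly comparable across samples
rates = {t: counts[t] / lengths[t] for t in counts}
rate_sum = sum(rates.values())
tpm = {t: rates[t] / rate_sum * 1e6 for t in counts}
```

Because TPM fixes the per-sample total by construction, it is the more natural choice when expression profiles from two transcriptome datasets are compared against the same reference, which is the use case PARRoT targets.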

  14. Use of Reference Frames for Interplanetary Navigation at JPL

    NASA Technical Reports Server (NTRS)

    Heflin, Michael; Jacobs, Chris; Sovers, Ojars; Moore, Angelyn; Owen, Sue

    2010-01-01

    Navigation of interplanetary spacecraft is typically based on range, Doppler, and differential interferometric measurements made by ground-based telescopes. Acquisition and interpretation of these observations requires accurate knowledge of the terrestrial reference frame and its orientation with respect to the celestial frame. Work is underway at JPL to reprocess historical VLBI and GPS data to improve realizations of the terrestrial and celestial frames. Improvements include minimal constraint alignment, improved tropospheric modeling, better orbit determination, and corrections for antenna phase center patterns.

  15. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. The power supply of a control unit in a failure model is used as an example: a dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the absorbing-set solution is obtained from the differential equations, plotted and verified. Through forward inference, the reliability of the control unit is determined under the different modes. Finally, weak nodes in the control unit are identified.
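    The absorbing-set idea for a repairable multi-state element reduces to a small Markov computation: make the failed state absorbing, integrate the Kolmogorov forward equations, and read off reliability as the probability of not yet having been absorbed. A minimal sketch with illustrative transition rates (not the paper's control-unit data):

```python
import numpy as np

# Three-state degradation model: 0 = good, 1 = degraded, 2 = failed.
# The failed state is absorbing (its row of the generator is zero);
# no repair transitions are included in this sketch.
Q = np.array([
    [-0.02, 0.015, 0.005],   # good -> degraded / failed
    [0.0,  -0.03,  0.03],    # degraded -> failed
    [0.0,   0.0,   0.0],     # absorbing failed state
])

def state_probs(p0, Q, t, steps=20000):
    # forward Euler integration of the Kolmogorov equations dp/dt = p Q
    dt = t / steps
    p = p0.copy()
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p

p0 = np.array([1.0, 0.0, 0.0])
p_t = state_probs(p0, Q, t=50.0)
reliability = 1.0 - p_t[2]   # probability of not having entered the absorbing set
```

The DBN formulation in the paper discretizes exactly these one-step transition probabilities into conditional probability tables, which is what lets repair and CBM policies be layered on top of the same state space.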

  16. The Generalized Internal/External Frame of Reference Model: An Extension to Dimensional Comparison Theory

    ERIC Educational Resources Information Center

    Möller, Jens; Müller-Kalthoff, Hanno; Helm, Friederike; Nagy, Nicole; Marsh, Herb W.

    2016-01-01

    The dimensional comparison theory (DCT) focuses on the effects of internal, dimensional comparisons (e.g., "How good am I in math compared to English?") on academic self-concepts with widespread consequences for students' self-evaluation, motivation, and behavioral choices. DCT is based on the internal/external frame of reference model…

  17. Selection of low-variance expressed Malus x domestica (apple) genes for use as quantitative PCR reference genes (housekeepers)

    USDA-ARS?s Scientific Manuscript database

    To accurately measure gene expression using PCR-based approaches, there is the need for reference genes that have low variance in expression (housekeeping genes) to normalise the data for RNA quantity and quality. For non-model species such as Malus x domestica (apples), previously, the selection of...

  18. Self-Organized Air Tasking: Examining a Non-Hierarchical Model for Joint Air Operations

    DTIC Science & Technology

    2004-06-01

    refers to systems with this dynamic incoherence as “strong sense of agency” systems, and uses “weak sense of agency” to refer to more predictable...agent-based systems, such as robotics or state-determined automata. Increasing the level of dynamic incoherency indicates a stronger sense of agency. This

  19. Enhancing the effectiveness of antismoking messages via self-congruent appeals.

    PubMed

    Chang, Chingching

    2009-01-01

    A self-congruent effect model was applied to understand adolescents' responses to antismoking advertising that referred to the self or others. Experiment 1 showed that self-referring ads generated more negative smoking attitudes than other-referring ads among adolescents with independent self-construals, whereas other-referring ads generated more negative smoking attitudes than self-referring ads among adolescents with interdependent self-construals. A survey further showed that smokers rated themselves higher on a measure of independent self-construal than nonsmokers. Experiment 2 then found that self-referring ads are more effective than other-referring ads for smokers, who have independent self-construals. Findings supported the idea that health communication campaign designers can maximize message effectiveness by developing different messages for different target segments of the population based on their self-construals.

  20. High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation

    NASA Technical Reports Server (NTRS)

    Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.

    1999-01-01

    The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.
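    The linear-model generation the simulation supports is conventionally done by perturbing the nonlinear state equations about a trim point and forming finite-difference Jacobians. A minimal sketch with a hypothetical two-state stand-in for the Reference H dynamics (the real model has full six-degree-of-freedom states):

```python
import numpy as np

def f(x, u):
    # toy longitudinal dynamics: x = [speed perturbation, pitch rate],
    # u = [elevator]; coefficients are illustrative only
    return np.array([
        -0.02 * x[0] + 0.1 * x[1] - 0.3 * u[0],
        -0.5 * x[0] - 0.8 * x[1] + 2.0 * u[0] + 0.01 * x[0] * x[1],
    ])

def linearize(f, x0, u0, eps=1e-6):
    # central differences for A = df/dx and B = df/du at the trim point,
    # giving the linear model  dx/dt ~= A (x - x0) + B (u - u0)
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.zeros(2), np.zeros(1))
```

Comparing responses of the resulting (A, B) model against the nonlinear simulation, as the abstract describes, is the usual check that the trim point and perturbation sizes were chosen sensibly.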

  1. The Computational Science Education Reference Desk: A tool for increasing inquiry based learning in the science classroom

    NASA Astrophysics Data System (ADS)

    Joiner, D. A.; Stevenson, D. E.; Panoff, R. M.

    2000-12-01

    The Computational Science Reference Desk is an online tool designed to provide educators in math, physics, astronomy, biology, chemistry, and engineering with information on how to use computational science to enhance inquiry-based learning in the undergraduate and pre-college classroom. The Reference Desk features a showcase of original content exploration activities, including lesson plans and background materials; a catalog of websites which contain models, lesson plans, software, and instructional resources; and a forum to allow educators to communicate their ideas. Many of the recent advances in astronomy rely on the use of computer simulation, and tools are being developed by CSERD to allow students to experiment with some of the models that have guided scientific discovery. One of these models allows students to study how scientists use spectral information to determine the makeup of the interstellar medium by modeling the interstellar extinction curve using spherical grains of silicate, amorphous carbon, or graphite. Students can directly compare their model to the average interstellar extinction curve, and experiment with how small changes in their model alter the shape of the interstellar extinction curve. A simpler model allows students to visualize spatial relationships between the Earth, Moon, and Sun to understand the cause of the phases of the moon. A report on the usefulness of these models in two classes, the Computational Astrophysics workshop at The Shodor Education Foundation and the Conceptual Astronomy class at the University of North Carolina at Greensboro, will be presented.

  2. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires the accurate measurement of crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirement for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
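    Among the empirical methods such reviews cover, the Hargreaves-Samani equation is a convenient example because it needs only temperature data and extraterrestrial radiation (the physically based FAO-56 Penman-Monteith standard requires humidity, wind and radiation inputs as well). A minimal sketch with hypothetical daily inputs:

```python
from math import sqrt

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration in mm/day.

    t_mean, t_max, t_min: daily air temperatures (deg C)
    ra: extraterrestrial radiation in mm/day evaporation equivalent
    """
    return 0.0023 * ra * (t_mean + 17.8) * sqrt(t_max - t_min)

# Hypothetical warm-season day
et0 = hargreaves_et0(t_mean=25.0, t_max=32.0, t_min=18.0, ra=15.0)
```

Because it depends only on the diurnal temperature range as a proxy for radiation and humidity, Hargreaves-Samani tends to need local calibration in humid or windy climates, which is exactly the kind of accuracy comparison across agro-climatic regions the review undertakes.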

  3. An image-based skeletal tissue model for the ICRP reference newborn

    NASA Astrophysics Data System (ADS)

    Pafundi, Deanna; Lee, Choonsik; Watchman, Christopher; Bourke, Vincent; Aris, John; Shagina, Natalia; Harrison, John; Fell, Tim; Bolch, Wesley

    2009-07-01

    Hybrid phantoms represent a third generation of computational models of human anatomy needed for dose assessment in both external and internal radiation exposures. Recently, we presented the first whole-body hybrid phantom of the ICRP reference newborn with a skeleton constructed from both non-uniform rational B-spline and polygon-mesh surfaces (Lee et al 2007 Phys. Med. Biol. 52 3309-33). The skeleton in that model included regions of cartilage and fibrous connective tissue, with the remainder given as a homogenous mixture of cortical and trabecular bone, active marrow and miscellaneous skeletal tissues. In the present study, we present a comprehensive skeletal tissue model of the ICRP reference newborn to permit a heterogeneous representation of the skeleton in that hybrid phantom set—both male and female—that explicitly includes a delineation of cortical bone so that marrow shielding effects are correctly modeled for low-energy photons incident upon the newborn skeleton. Data sources for the tissue model were threefold. First, skeletal site-dependent volumes of homogeneous bone were obtained from whole-cadaver CT image analyses. Second, selected newborn bone specimens were acquired at autopsy and subjected to micro-CT image analysis to derive model parameters of the marrow cavity and bone trabecular 3D microarchitecture. Third, data given in ICRP Publications 70 and 89 were selected to match reference values on total skeletal tissue mass. Active marrow distributions were found to be in reasonable agreement with those given previously by the ICRP. However, significant differences were seen in total skeletal and site-specific masses of trabecular and cortical bone between the current and ICRP newborn skeletal tissue models. The latter utilizes an age-independent ratio of 80%/20% cortical and trabecular bone for the reference newborn. In the current study, a ratio closer to 40%/60% is used based upon newborn CT and micro-CT skeletal image analyses. 
These changes in mineral bone composition may have significant dosimetric implications when considering localized marrow dosimetry for radionuclides that target mineral bone in the newborn child.

  4. A reference skeletal dosimetry model for an adult male radionuclide therapy patient based on three-dimensional imaging and paired-image radiation transport

    NASA Astrophysics Data System (ADS)

    Shah, Amish P.

    The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago, and the tissue mass derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry---the Paired Image Radiation Transport (PIRT) model---that properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revising reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton.
We conclude that paired-image radiation-transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of the skeletal site; and the physical size and shape of the bone can be handled together, for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.

  5. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  6. SAM Photovoltaic Model Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilman, P.

    2015-05-27

    This manual describes the photovoltaic performance model in the System Advisor Model (SAM). The U.S. Department of Energy’s National Renewable Energy Laboratory maintains and distributes SAM, which is available as a free download from https://sam.nrel.gov. These descriptions are based on SAM 2015.1.30 (SSC 41).

  7. Economic Modeling as a Component of Academic Strategic Planning.

    ERIC Educational Resources Information Center

    MacKinnon, Joyce; Sothmann, Mark; Johnson, James

    2001-01-01

    Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)

  8. Nursing care systematization as a multidimensional and interactive phenomenon.

    PubMed

    Backes, Dirce Stein; Koerich, Magda Santos; Nascimento, Keyla Cristiane do; Erdmann, Alacoque Lorenzini

    2008-01-01

    This study aimed to understand the meaning of Nursing Care Systematization (NCS) for multiprofessional health team professionals based on the relationships, interactions and associations of Complex thought. This qualitative study uses Grounded Theory as a methodological reference framework. Data were obtained through interviews with three sample groups, totaling 15 professionals from different institutions. Simultaneous data codification and analysis identified the central theme: 'Glimpsing nursing care systematization as an interactive and multidimensional phenomenon' and the respective reference model. NCS points to, in addition to interactivity and professional complementarity, the importance of dialog and connection among academia, health practices and regulatory offices, based on new reference frameworks for the organization of health practices.

  9. A set of 4D pediatric XCAT reference phantoms for multimodality research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Hannah, E-mail: Hannah.norris@duke.edu; Zhang, Yakun; Bond, Jason

    Purpose: The authors previously developed an adult population of 4D extended cardiac-torso (XCAT) phantoms for multimodality imaging research. In this work, the authors develop a reference set of 4D pediatric XCAT phantoms consisting of male and female anatomies at ages of newborn, 1, 5, 10, and 15 years. These models will serve as the foundation from which the authors will create a vast population of pediatric phantoms for optimizing pediatric CT imaging protocols. Methods: Each phantom was based on a unique set of CT data from a normal patient obtained from the Duke University database. The datasets were selected to best match the reference values for height and weight for the different ages and genders according to ICRP Publication 89. The major organs and structures were segmented from the CT data and used to create an initial pediatric model defined using nonuniform rational B-spline surfaces. The CT data covered the entire torso and part of the head. To complete the body, the authors manually added on the top of the head and the arms and legs using scaled versions of the XCAT adult models or additional models created from cadaver data. A multichannel large deformation diffeomorphic metric mapping algorithm was then used to calculate the transform from a template XCAT phantom (male or female 50th percentile adult) to the target pediatric model. The transform was applied to the template XCAT to fill in any unsegmented structures within the target phantom and to implement the 4D cardiac and respiratory models in the new anatomy. The masses of the organs in each phantom were matched to the reference values given in ICRP Publication 89. The new reference models were checked for anatomical accuracy via visual inspection. Results: The authors created a set of ten pediatric reference phantoms that have the same level of detail and functionality as the original XCAT phantom adults.
Each consists of thousands of anatomical structures and includes parameterized models for the cardiac and respiratory motions. Based on patient data, the phantoms capture the anatomic variations of childhood, such as the development of bone in the skull, pelvis, and long bones, and the growth of the vertebrae and organs. The phantoms can be combined with existing simulation packages to generate realistic pediatric imaging data from different modalities. Conclusions: The development of patient-derived pediatric computational phantoms is useful in providing variable anatomies for simulation. Future work will expand this ten-phantom base to a host of pediatric phantoms representative of the public at large. This can provide a means to evaluate and improve pediatric imaging devices and to optimize CT protocols in terms of image quality and radiation dose.

  10. AmapSim: a structural whole-plant simulator based on botanical knowledge and designed to host external functional models.

    PubMed

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-05-01

    AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.

  11. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310
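    The oriented automaton over physiological ages can be sketched as a small simulation. The states, sojourn means and transition probabilities below are illustrative placeholders, not AmapSim's actual parameters:

```python
import random

def simulate_bud_states(transition, sojourn_mean, start=0, n_steps=20, rng=None):
    """Walk a bud along the 'reference axis' of physiological ages with an
    oriented automaton: stay in each state for a random sojourn time, then
    move according to the transition probabilities (semi-Markov flavor)."""
    rng = rng or random.Random(0)
    state, history = start, []
    for _ in range(n_steps):
        # Occupation law: a random sojourn time in the current state.
        stay = 1 + int(rng.expovariate(1.0 / sojourn_mean[state]))
        history.extend([state] * stay)
        # Transition law: move along the reference axis (never backwards).
        r, acc = rng.random(), 0.0
        for nxt, p in transition[state]:
            acc += p
            if r <= acc:
                state = nxt
                break
    return history

# Three illustrative physiological ages; transitions are oriented.
transition = {0: [(0, 0.3), (1, 0.7)], 1: [(1, 0.2), (2, 0.8)], 2: [(2, 1.0)]}
sojourn_mean = {0: 3.0, 1: 2.0, 2: 1.0}
hist = simulate_bud_states(transition, sojourn_mean)
```

    Because the automaton is oriented, the recorded sequence of physiological ages never decreases, mimicking architectural metamorphosis along an axis.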

  12. [Case mix analysis of patients who can be referred from emergency department triage to primary care].

    PubMed

    Gómez-Jiménez, Josep; Becerra, Oscar; Boneu, Francisco; Burgués, Lluís; Pàmies, Salvador

    2006-01-01

    Structured emergency department (ED) triage scales can be used to develop patient referral strategies from the ED to primary care. The objectives of the present study were to evaluate the percentage of patients who could potentially be referred from triage to primary care and to describe their clinical characteristics. We analyzed all patients with low acuity (triage levels IV and V) and low complexity (patients discharged from the ED) triaged during 2003 with the Andorran Triage Model in the ED and estimated the percentage of patients who could potentially be referred on the basis of three primary care models: A, centers unable to deal with emergencies or perform complementary investigations; B, centers able to deal with emergencies and perform complementary investigations, and C, centers able to deal with emergencies but unable to perform complementary investigations. Of the 25,319 patients included in the study, 5.63% could be referred to model A, 75.22% to model B and 33.36% to model C. A total of 81.04% of these model C patients were classified in seven symptomatic categories: wounds and traumatisms, inflammation or fever, pediatric problems, rhinolaryngological infection or alterations, ocular symptoms, pain and cutaneous allergy or reactions. Casemix analysis, based on the level of acuity and discharge criteria, can be used to establish the percentage of patients that could potentially be referred to primary care. Analysis of their clinical profile is useful to design referral protocols.

  13. Hearing Tests on Mobile Devices: Evaluation of the Reference Sound Level by Means of Biological Calibration.

    PubMed

    Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz

    2016-05-30

    Hearing tests carried out in home setting by means of mobile devices require previous calibration of the reference sound level. Mobile devices with bundled headphones create a possibility of applying the predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, the fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects, without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth, on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both the groups, the reference sound levels were determined on a subject's mobile device using the Bekesy audiometry. The reference sound levels were compared between the groups. Intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for 10 most frequently used models in both the groups. The difference in reference sound levels between uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3.93-4.11). 
Statistically significant differences were found across models. Reference sound levels determined in the uncontrolled group are comparable to the values obtained in the controlled group. This validates the use of biological calibration in the uncontrolled group for determining the predefined reference sound level for new devices. Moreover, due to a relatively small deviation of the reference sound level for devices of the same model, it is feasible to conduct hearing screening on devices calibrated with the predefined reference sound level.
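    A Bekesy-style threshold estimate, the core of each biological calibration, can be sketched as the mean of the reversal points of the tracked level trace. This is a simplified illustration, not the app's actual algorithm:

```python
def bekesy_threshold(levels):
    """Estimate a hearing threshold from a Bekesy tracking trace as the mean
    of its reversal points (local maxima and minima of the level, in dB)."""
    reversals = [levels[i] for i in range(1, len(levels) - 1)
                 if (levels[i - 1] < levels[i] > levels[i + 1])
                 or (levels[i - 1] > levels[i] < levels[i + 1])]
    return sum(reversals) / len(reversals)

def biological_reference_level(thresholds):
    """Biological calibration: take the reference sound level for a device
    model relative to the mean threshold of normal-hearing subjects."""
    return sum(thresholds) / len(thresholds)

# Illustrative trace: level falls while the tone is heard, rises when not.
trace = [10, 6, 2, -2, 2, 6, 2, -2, -6, -2, 2]
threshold = bekesy_threshold(trace)
```

    Averaging such thresholds over many subjects of one device model yields the predefined reference level the study validates.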

  14. Reference-based source separation method for identification of brain regions involved in a reference state from intracerebral EEG

    PubMed Central

    Samadi, Samareh; Amini, Ladan; Cosandier-Rimélé, Delphine; Soltanian-Zadeh, Hamid; Jutten, Christian

    2013-01-01

    In this paper, we present a fast method to extract the sources related to interictal epileptiform state. The method is based on general eigenvalue decomposition using two correlation matrices during: 1) periods including interictal epileptiform discharges (IED) as a reference activation model and 2) periods excluding IEDs or abnormal physiological signals as background activity. After extracting the most similar sources to the reference or IED state, IED regions are estimated by using multiobjective optimization. The method is evaluated using both realistic simulated data and actual intracerebral electroencephalography recordings of patients suffering from focal epilepsy. These patients are seizure-free after the resective surgery. Quantitative comparisons of the proposed IED regions with the visually inspected ictal onset zones by the epileptologist and another method of identification of IED regions reveal good performance. PMID:23428609
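    The core extraction step, a generalized eigenvalue decomposition of the IED-period and background correlation matrices, can be sketched with NumPy. This is a minimal illustration of the idea, not the authors' full pipeline:

```python
import numpy as np

def gevd_sources(x_ied, x_bg):
    """Extract spatial filters maximizing power during IED epochs relative to
    background activity, via the generalized eigenproblem C_ied w = lambda C_bg w.

    x_ied, x_bg: arrays of shape (channels, samples).
    Returns (w, vals): filter rows and eigenvalues, sorted descending."""
    c_ied = np.cov(x_ied)                 # correlation during IED periods
    c_bg = np.cov(x_bg)                   # correlation of background activity
    # Whiten with the background covariance (C_bg = L L^T), then a standard
    # symmetric eigendecomposition of L^-1 C_ied L^-T.
    l = np.linalg.cholesky(c_bg)
    m = np.linalg.solve(l, np.linalg.solve(l, c_ied).T).T
    vals, vecs = np.linalg.eigh(m)
    order = np.argsort(vals)[::-1]
    w = np.linalg.solve(l.T, vecs).T[order]   # back-transform to sensor space
    return w, vals[order]
```

    The filters with the largest eigenvalues correspond to the sources most similar to the IED reference state; their topographies then feed the region-estimation step.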

  15. A bi-articular model for scapular-humeral rhythm reconstruction through data from wearable sensors.

    PubMed

    Lorussi, Federico; Carbonaro, Nicola; De Rossi, Danilo; Tognetti, Alessandro

    2016-04-23

    Patient-specific performance assessment of arm movements in daily life activities is fundamental for neurological rehabilitation therapy. In most applications, the shoulder movement is simplified through a socket-ball joint, neglecting the movement of the scapular-thoracic complex. This may lead to significant errors. We propose an innovative bi-articular model of the human shoulder for estimating the position of the hand in relation to the sternum. The model takes into account both the scapular-thoracic and gleno-humeral movements and their ratio governed by the scapular-humeral rhythm, fusing the information of inertial and textile-based strain sensors. To feed the reconstruction algorithm based on the bi-articular model, an ad-hoc sensing shirt was developed. The shirt was equipped with two inertial measurement units (IMUs) and an integrated textile strain sensor. We built the bi-articular model starting from the data obtained in two planar movements (arm abduction and flexion in the sagittal plane) and analysing the error between the reference data - measured through an optical reference system - and the socket-ball approximation of the shoulder. The 3D model was developed by extending the behaviour of the kinematic chain revealed in the planar trajectories through a parameter identification that takes into account the body structure of the subject. The bi-articular model was evaluated in five subjects in comparison with the optical reference system. The errors were computed in terms of distance between the reference position of the trochlea (end-effector) and the correspondent model estimation. The introduced method remarkably improved the estimation of the position of the trochlea (and consequently the estimation of the hand position during reaching activities) reducing position errors from 11.5 cm to 1.8 cm. 
Thanks to the developed bi-articular model, we demonstrated a reliable estimation of the upper arm kinematics with a minimal sensing system suitable for daily life monitoring of recovery.
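    The scapular-humeral rhythm idea can be sketched by splitting total arm elevation into its two joint contributions. The classic ~2:1 gleno-humeral to scapular-thoracic ratio used here is an illustrative simplification of the fitted, subject-specific model:

```python
def shoulder_elevation_split(total_elevation_deg, rhythm_ratio=2.0):
    """Split total arm elevation into gleno-humeral (GH) and
    scapular-thoracic (ST) contributions under a fixed scapular-humeral
    rhythm GH:ST = rhythm_ratio : 1."""
    gh = total_elevation_deg * rhythm_ratio / (rhythm_ratio + 1.0)
    st = total_elevation_deg / (rhythm_ratio + 1.0)
    return gh, st

# 90 degrees of elevation: roughly 60 degrees GH plus 30 degrees ST.
gh, st = shoulder_elevation_split(90.0)
```
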

  16. Spreadsheet-based engine data analysis tool - user's guide.

    DOT National Transportation Integrated Search

    2016-07-01

    This record refers to both the spreadsheet tool - Fleet Equipment Performance Measurement Preventive Maintenance Model: Spreadsheet-Based Engine Data Analysis Tool, http://ntl.bts.gov/lib/60000/60000/60007/0-6626-P1_Final.xlsm - and its accompanying ...

  17. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. The quality of the encoded picture is degraded relative to the original video by the encoding process, such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts onto a subjective quality estimate is proposed. The proposed model calculates the objective quality metric from the subjective impairments blockiness, blur and jerkiness, in contrast to the bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
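    The shape of such a no-reference metric can be sketched as a weighted combination of impairment scores. The weights and the linear mapping below are illustrative placeholders, not the coefficients fitted in the paper:

```python
def perceptual_quality(blockiness, blur, jerkiness, weights=(0.5, 0.3, 0.2)):
    """Map normalized impairment scores (each in [0, 1], higher = worse)
    to a MOS-like score on [1, 5] via a weighted sum."""
    impairment = (weights[0] * blockiness + weights[1] * blur
                  + weights[2] * jerkiness)
    return 5.0 - 4.0 * impairment
```

    A pristine clip (all impairments zero) maps to the top of the MOS scale, and the score falls monotonically as any impairment grows.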

  18. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, the real-time PCR based transgene copy number estimation tends to be ambiguous and subjective stemming from the lack of proper statistical analysis and data quality control to render a reliable estimation of copy number with a prediction value. Despite the recent progresses in statistical analysis of real-time PCR, few publications have integrated these advancements in real-time PCR based transgene copy number determination. Three experimental designs and four data quality control integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially-diluted templates. The Ct number from a control transgenic event and putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two group T-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of transgene was compared with that of internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with reference gene without a standard curve, but rather, is based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches of amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. 
These statistical methods allow the real-time PCR-based transgene copy number estimation to be more reliable and precise with a proper statistical estimation. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four different statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied for other real-time PCR-based quantification assays including transfection efficiency analysis and pathogen quantification.
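    The second design (transgene vs. internal reference gene, scaled by a calibrator event of known copy number) can be sketched as an efficiency-corrected ratio in the style of the Pfaffl method. The efficiencies of 2.0 assume perfect doubling per cycle and would in practice be estimated from the standard curves:

```python
def copy_number(ct_transgene, ct_reference, ct_transgene_cal, ct_reference_cal,
                calibrator_copies=1, e_transgene=2.0, e_reference=2.0):
    """Efficiency-corrected relative quantification: compare the transgene
    Ct of a putative event against a calibrator of known copy number,
    normalized by the internal reference gene."""
    ratio = (e_transgene ** (ct_transgene_cal - ct_transgene)
             / e_reference ** (ct_reference_cal - ct_reference))
    return calibrator_copies * ratio

# A putative event whose transgene Ct is one cycle earlier than a
# single-copy calibrator (reference gene equal) carries ~2 copies.
copies = copy_number(24.0, 20.0, 25.0, 20.0)
```

    A proper analysis would add regression-based confidence intervals around this point estimate, which is exactly where the paper's statistical models come in.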

  19. Grading for Understanding--Standards-Based Grading

    ERIC Educational Resources Information Center

    Zimmerman, Todd

    2017-01-01

    Standards-based grading (SBG), sometimes called learning objectives-based assessment (LOBA), is an assessment model that relies on students demonstrating mastery of learning objectives (sometimes referred to as standards). The goal of this grading system is to focus students on mastering learning objectives rather than on accumulating points. I…

  20. Pupils' Pressure Models and Their Implications for Instruction.

    ERIC Educational Resources Information Center

    Kariotoglou, P.; Psillos, D.

    1993-01-01

    Discusses a study designed to investigate pupils' conceptions about fluids and particularly liquids in equilibrium, with reference to the concept of pressure. Based upon the results obtained, several mental models of how pupils understand liquids in equilibrium were proposed. (ZWH)

  1. Using a logical information model-driven design process in healthcare.

    PubMed

    Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen

    2011-01-01

    A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.

  2. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
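    The Ogata-Banks solution used as the physically based data model has a simple closed form, for continuous injection of concentration c0 at x = 0 into an initially solute-free, semi-infinite column:

```python
import math

def ogata_banks(x, t, v, d, c0=1.0):
    """One-dimensional Ogata-Banks solution of the advection-dispersion
    equation for steady flow. v: pore velocity, d: dispersion coefficient."""
    a = (x - v * t) / (2.0 * math.sqrt(d * t))
    b = (x + v * t) / (2.0 * math.sqrt(d * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / d) * math.erfc(b))
```

    Behind the advancing front the concentration approaches c0, far ahead of it the concentration is essentially zero, and the profile decreases monotonically in between.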

  3. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks

    PubMed Central

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.

    2016-01-01

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs, and how to improve localization accuracy has been well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300
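    The flavor of trust-aware localization can be illustrated with a trust-weighted least-squares multilateration step. This is only an illustration of down-weighting unreliable anchors, not the CSLT algorithm itself:

```python
import numpy as np

def trust_weighted_position(anchors, distances, trust):
    """Linearize the range equations against the last anchor and solve the
    weighted normal equations, so low-trust anchors barely affect the fix.

    anchors: (n, 2) positions; distances: (n,) measured ranges;
    trust: (n,) trust scores in [0, 1]. Requires n >= 3 for a 2-D fix."""
    p = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    w = np.asarray(trust, float)[:-1]
    ref = p[-1]
    a = 2.0 * (p[:-1] - ref)
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(p[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    ata = a.T @ (a * w[:, None])           # A^T W A
    atb = a.T @ (w * b)                    # A^T W b
    return np.linalg.solve(ata, atb)
```

    With honest anchors the weighted fix reproduces the true position; assigning a near-zero trust score to a lying anchor keeps the estimate essentially unchanged.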

  4. BRIDGING GAPS BETWEEN ZOO AND WILDLIFE MEDICINE: ESTABLISHING REFERENCE INTERVALS FOR FREE-RANGING AFRICAN LIONS (PANTHERA LEO).

    PubMed

    Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna

    2017-06-01

    The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions ( Panthera leo ) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
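    The interval-construction step can be sketched as a parametric bootstrap on log-transformed values. This is a simplified sketch; the paper additionally reports 90% confidence intervals around each limit:

```python
import numpy as np

def reference_interval(values, n_boot=2000, seed=0):
    """95% reference interval via parametric bootstrap on log-transformed
    data: fit a normal to log(values), resample, average the bootstrap
    interval limits, and back-transform to the original scale."""
    rng = np.random.default_rng(seed)
    logs = np.log(np.asarray(values, float))
    mu, sd, n = logs.mean(), logs.std(ddof=1), len(logs)
    z = 1.959963984540054              # central 95% of a normal distribution
    boot_lo, boot_hi = [], []
    for _ in range(n_boot):
        s = rng.normal(mu, sd, n)      # parametric bootstrap resample
        m, sdev = s.mean(), s.std(ddof=1)
        boot_lo.append(m - z * sdev)
        boot_hi.append(m + z * sdev)
    # Percentiles of boot_lo / boot_hi would give CIs on each limit.
    return float(np.exp(np.mean(boot_lo))), float(np.exp(np.mean(boot_hi)))
```
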

  5. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  6. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

    We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, with the same data selection, processing and reference models.

  7. Multilayer Insulation Ascent Venting Model

    NASA Technical Reports Server (NTRS)

    Tramel, R. W.; Sutherlin, S. G.; Johnson, W. L.

    2017-01-01

    The thermal and venting transient experienced by tank-applied multilayer insulation (MLI) in the Earth-to-orbit environment is very dynamic and not well characterized. This new predictive code is a first principles-based engineering model which tracks the time history of the mass and temperature (internal energy) of the gas in each MLI layer. A continuum-based model is used for early portions of the trajectory while a kinetic theory-based model is used for the later portions of the trajectory, and the models are blended based on a reference mean free path. This new capability should improve understanding of the Earth-to-orbit transient and enable better insulation system designs for in-space cryogenic propellant systems.
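    The blending of the continuum-based and kinetic-theory-based estimates by reference mean free path can be illustrated with a Knudsen-number interpolation. The thresholds and the log-linear weighting below are illustrative assumptions, not the code's actual blending scheme:

```python
import math

def blended_flow_rate(continuum_value, kinetic_value, mean_free_path,
                      layer_spacing, kn_lo=0.01, kn_hi=10.0):
    """Blend continuum and kinetic-theory model outputs by Knudsen number
    (mean free path over a characteristic layer spacing): pure continuum
    below kn_lo, pure kinetic above kn_hi, log-linear blend in between."""
    kn = mean_free_path / layer_spacing
    if kn <= kn_lo:
        return continuum_value
    if kn >= kn_hi:
        return kinetic_value
    w = (math.log(kn) - math.log(kn_lo)) / (math.log(kn_hi) - math.log(kn_lo))
    return (1.0 - w) * continuum_value + w * kinetic_value
```
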

  8. Two methods for modeling vibrations of planetary gearboxes including faults: Comparison and validation

    NASA Astrophysics Data System (ADS)

    Parra, J.; Vicuña, Cristián Molina

    2017-08-01

    Planetary gearboxes are important components of many industrial applications. Vibration analysis can increase their lifetime and prevent expensive repair and safety concerns. However, an effective analysis is only possible if the vibration features of planetary gearboxes are properly understood. In this paper, models are used to study the frequency content of planetary gearbox vibrations under non-fault and different fault conditions. Two different models are considered: phenomenological model, which is an analytical-mathematical formulation based on observation, and lumped-parameter model, which is based on the solution of the equations of motion of the system. Results of both models are not directly comparable, because the phenomenological model provides the vibration on a fixed radial direction, such as the measurements of the vibration sensor mounted on the outer part of the ring gear. On the other hand, the lumped-parameter model provides the vibrations on the basis of a rotating reference frame fixed to the carrier. To overcome this situation, a function to decompose the lumped-parameter model solutions to a fixed reference frame is presented. Finally, comparisons of results from both model perspectives and experimental measurements are presented.
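    The decomposition of lumped-parameter solutions from the carrier-fixed rotating frame to the fixed sensor frame is, at its core, a time-varying rotation. A minimal planar sketch:

```python
import math

def carrier_to_fixed(vx, vy, carrier_angle):
    """Rotate one planar vibration sample from the rotating carrier frame
    into the stationary frame; carrier_angle is the carrier rotation at
    that sample time."""
    c, s = math.cos(carrier_angle), math.sin(carrier_angle)
    return (c * vx - s * vy, s * vx + c * vy)

def fixed_direction_component(vx, vy, carrier_angle, sensor_angle=0.0):
    """Project the rotated vibration onto the fixed radial measurement
    direction of a sensor mounted on the outer part of the ring gear."""
    fx, fy = carrier_to_fixed(vx, vy, carrier_angle)
    return fx * math.cos(sensor_angle) + fy * math.sin(sensor_angle)
```

    Applying this sample-by-sample makes the lumped-parameter output directly comparable with the phenomenological model and with the ring-gear sensor measurements.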

  9. Model-Based Reasoning: Using Visual Tools to Reveal Student Learning

    ERIC Educational Resources Information Center

    Luckie, Douglas; Harrison, Scott H.; Ebert-May, Diane

    2011-01-01

    Using visual models is common in science and should become more common in classrooms. Our research group has developed and completed studies on the use of a visual modeling tool, the Concept Connector. This modeling tool consists of an online concept mapping Java applet that has automatic scoring functions we refer to as Robograder. The Concept…

  10. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. 
The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.
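    The plasma RIR itself is a one-sample ratio, which can be sketched as follows. The dose normalization is an illustrative assumption for unequal tracer doses, not necessarily the paper's exact formulation:

```python
def plasma_rir(bc_derived_retinol, ref_derived_retinol, bc_dose=1.0, ref_dose=1.0):
    """Plasma retinol isotope ratio: relative bioefficacy from one plasma
    sample at >= 14 d as the ratio of labeled beta-carotene-derived retinol
    to labeled reference-dose-derived retinol."""
    return (bc_derived_retinol / bc_dose) / (ref_derived_retinol / ref_dose)
```
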

  11. Definition and Proposed Realization of the International Height Reference System (IHRS)

    NASA Astrophysics Data System (ADS)

    Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael

    2017-05-01

    Studying, understanding and modelling global change require geodetic reference frames with an order of accuracy higher than the magnitude of the effects to be actually studied and with high consistency and reliability worldwide. The International Association of Geodesy, taking care of providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with the practice, its realization as the International Height Reference Frame is designed. Further steps for the implementation are also proposed.

  12. Reference governors for controlled belt restraint systems

    NASA Astrophysics Data System (ADS)

    van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.

    2010-07-01

    Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.
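    The RG's setpoint adjustment can be sketched as a line search on a scaling parameter kappa. This is a generic reference-governor skeleton under an assumed prediction callback, not the paper's MADYMO-based design:

```python
def reference_governor(setpoint, prev_applied, predict_outputs, limit, steps=20):
    """Choose the largest kappa in [0, 1] such that the governed reference
    prev + kappa * (setpoint - prev) keeps all predicted outputs below
    `limit` over the horizon, via bisection on kappa.
    predict_outputs(ref, steps) is an assumed model-prediction callback."""
    def admissible(kappa):
        ref = prev_applied + kappa * (setpoint - prev_applied)
        return all(y <= limit for y in predict_outputs(ref, steps))
    if admissible(1.0):
        return setpoint                # desired setpoint violates nothing
    lo, hi = 0.0, 1.0
    for _ in range(40):                # bisection to ~1e-12 resolution
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if admissible(mid) else (lo, mid)
    return prev_applied + lo * (setpoint - prev_applied)
```

    When the requested setpoint is already admissible it passes through unchanged; otherwise the governor backs it off just enough to satisfy the predicted constraints.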

  13. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    PubMed

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
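    The two numerical ingredients named above, Iterative Proportional Fitting against one-sex marginals and the harmonic mean of the male and female target totals, can be sketched as follows. The uniform seed matrix is a stand-in for the paper's normal age-preference function, and the targets are illustrative numbers.

```python
# Toy Iterative Proportional Fitting (IPF): rescale a seed age-preference
# matrix so its row/column sums match the one-sex marriage totals for men
# (rows) and women (columns). Row and column targets must have equal sums.
def ipf(seed, row_targets, col_targets, iters=100):
    m = [row[:] for row in seed]
    for _ in range(iters):
        for i, rt in enumerate(row_targets):           # scale rows
            s = sum(m[i])
            if s > 0:
                m[i] = [x * rt / s for x in m[i]]
        for j, ct in enumerate(col_targets):           # scale columns
            s = sum(m[i][j] for i in range(len(m)))
            if s > 0:
                for i in range(len(m)):
                    m[i][j] *= ct / s
    return m

# Harmonic mean of the male and female target totals, the form used here
# for the total number of new marriages in a perturbed population.
def harmonic_mean(t_m, t_f):
    return 2 * t_m * t_f / (t_m + t_f)
```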

  14. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  15. Practised Intelligence Testing Based on a Modern Test Conceptualization and Its Reference to the Common Intelligence Theories

    ERIC Educational Resources Information Center

    Kubinger, Klaus D.; Litzenberger, Margarete; Mrakotsky, Christine

    2006-01-01

    The question is to what extent intelligence test-batteries prove any kind of empirical reference to common intelligence theories. Of particular interest are conceptualized tests that are of a high psychometric standard--those that fit the Rasch model--and hence are not exposed to fundamental critique. As individualized testing, i.e., a…

  16. Fire, herbicide, and chainsaw felling effects on arthropods in fire-suppressed longleaf pine sandhills at Eglin Air Force Base, Florida

    Treesearch

    Louis Provencher; Krista E. M. Galley; Andrea R. Litt; Doria R. Gordon; Leonard A. Brennan; George W. Tanner; Jeffrey L. Hardesty

    2002-01-01

    Experimentally evaluating the success of hardwood reduction techniques against a "model" reference condition of longleaf pine sandhill communities is not directly possible because reference sites are not randomized or replicated. We addressed this issue by measuring the similarity of arthropods in treatment (fire, herbicide, felling/girdling, and control) and...

  17. Annual Geocenter Motion from Space Geodesy and Models

    NASA Astrophysics Data System (ADS)

    Ries, J. C.

    2013-12-01

    Ideally, the origin of the terrestrial reference frame and the center of mass of the Earth are always coincident. By construction, the origin of the reference frame is coincident with the mean Earth center of mass, averaged over the time span of the satellite laser ranging (SLR) observations used in the reference frame solution, within some level of uncertainty. At shorter time scales, tidal and non-tidal mass variations result in an offset between the origin and geocenter, called geocenter motion. Currently, there is a conventional model for the tidally-coherent diurnal and semi-diurnal geocenter motion, but there is no model for the non-tidal annual variation. This annual motion reflects the largest-scale mass redistribution in the Earth system, so it is essential to observe it for a complete description of the total mass transport. Failing to model it can also cause false signals in geodetic products such as sea height observations from satellite altimeters. In this paper, a variety of estimates for the annual geocenter motion are presented based on several different geodetic techniques and models, and a 'consensus' model from SLR is suggested.
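    An annual geocenter component of the kind discussed here is typically estimated by fitting a 1 cycle/yr sinusoid to a time series. A minimal sketch, assuming evenly spaced samples spanning whole years so that a discrete Fourier projection coincides with the least-squares fit (real SLR series are unevenly sampled and need a proper least-squares estimator):

```python
import math

# Annual (1 cycle/yr) amplitude and phase of a geocenter time series by
# discrete Fourier projection. Assumes evenly spaced samples over whole
# years; under that assumption this equals the least-squares sinusoid fit.
def annual_fit(t_years, y):
    n = len(y)
    mean = sum(y) / n
    a = 2.0 / n * sum((yi - mean) * math.cos(2 * math.pi * ti)
                      for ti, yi in zip(t_years, y))
    b = 2.0 / n * sum((yi - mean) * math.sin(2 * math.pi * ti)
                      for ti, yi in zip(t_years, y))
    return math.hypot(a, b), math.atan2(b, a)  # amplitude, phase (rad)
```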

  18. Developing primary care in Hong Kong: evidence into practice and the development of reference frameworks.

    PubMed

    Griffiths, Sian M; Lee, Jeff P M

    2012-10-01

    Enhancing primary care is one of the proposals put forward in the Healthcare Reform Consultation Document "Your Health, Your Life" issued in March 2008. In 2009, the Working Group on Primary Care, chaired by the Secretary for Food and Health, recommended the development of age-group and disease-specific primary care conceptual models and reference frameworks. Drawing on international experience and best evidence, the Task Force on Conceptual Model and Preventive Protocols of the Working Group on Primary Care has developed two reference frameworks for the management of two common chronic diseases in Hong Kong, namely diabetes and hypertension, in primary care settings. Adopting a population approach for the prevention and control of diabetes and hypertension across the life course, the reference frameworks aim to provide evidence-based and appropriate recommendations for the provision of continuing and comprehensive care for patients with chronic diseases in the community.

  19. Archive, Access, and Supply of Scientifically Derived Data: A Data Model for Multi-Parameterized Querying Where Spectral Data Base Meets GIS-Based Mapping Archive

    NASA Astrophysics Data System (ADS)

    Nass, A.; D'Amore, M.; Helbert, J.

    2018-04-01

    An archiving structure and reference level for derived and already published data significantly supports the scientific community through a constant growth of knowledge and understanding, in line with recent discussions within Information Science and Management.

  20. TU-FG-BRB-03: Basis Vector Model Based Method for Proton Stopping Power Estimation From Experimental Dual Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S; Politte, D; O’Sullivan, J

    2016-06-15

    Purpose: This work aims at reducing the uncertainty in proton stopping power (SP) estimation by a novel combination of a linear, separable basis vector model (BVM) for stopping power calculation (Med Phys 43:600) and a statistical, model-based dual-energy CT (DECT) image reconstruction algorithm (TMI 35:685). The method was applied to experimental data. Methods: BVM assumes the photon attenuation coefficients, electron densities, and mean excitation energies (I-values) of unknown materials can be approximated by a combination of the corresponding quantities of two reference materials. The DECT projection data for a phantom with 5 different known materials was collected on a Philips Brilliance scanner using two scans at 90 kVp and 140 kVp. The line integral alternating minimization (LIAM) algorithm was used to recover the two BVM coefficient images using the measured source spectra. The proton stopping powers are then estimated from the Bethe-Bloch equation using electron densities and I-values derived from the BVM coefficients. The proton stopping powers and proton ranges for the phantom materials estimated via our BVM based DECT method are compared to ICRU reference values and a post-processing DECT analysis (Yang PMB 55:1343) applied to vendor-reconstructed images using the Torikoshi parametric fit model (tPFM). Results: For the phantom materials, the average stopping power estimations for 175 MeV protons derived from our method are within 1% of the ICRU reference values (except for Teflon with a 1.48% error), with an average standard deviation of 0.46% over pixels. The resultant proton ranges agree with the reference values within 2 mm. Conclusion: Our principled DECT iterative reconstruction algorithm, incorporating optimal beam hardening and scatter corrections, in conjunction with a simple linear BVM model, achieves more accurate and robust proton stopping power maps than the post-processing, nonlinear tPFM based DECT analysis applied to conventional reconstructions of low and high energy scans. Funding Support: NIH R01CA 75371; NCI grant R01 CA 149305.
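    The final step described in this record, going from electron density and I-value to proton stopping power, rests on the Bethe equation. A simplified sketch of the stopping-power ratio relative to water (ignoring density-effect and shell corrections; the 75 eV water I-value and the Bethe logarithm are standard physics, but the function interface itself is a hypothetical illustration):

```python
import math

ME_C2 = 0.511e6  # electron rest energy, eV

# Bethe stopping number L = ln(2*m_e*c^2 * beta^2*gamma^2 / I) - beta^2,
# written with beta^2*gamma^2 = beta^2 / (1 - beta^2).
def bethe_L(beta2, I_eV):
    return math.log(2 * ME_C2 * beta2 / (I_eV * (1 - beta2))) - beta2

# Stopping-power ratio relative to water from an electron-density ratio
# and an I-value, the two quantities the BVM coefficients provide.
def stopping_power_ratio(rho_e_ratio, I_eV, beta2, I_water_eV=75.0):
    return rho_e_ratio * bethe_L(beta2, I_eV) / bethe_L(beta2, I_water_eV)
```

    As a sanity check, a material with water's electron density and I-value must have a ratio of exactly 1, independent of the proton energy.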

  1. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    PubMed

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon the completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. Quantitative assessment of the goodness of the proposed compensation technique is presented.

  2. Basic Restriction and Reference Level in Anatomically-based Japanese Models for Low-Frequency Electric and Magnetic Field Exposures

    NASA Astrophysics Data System (ADS)

    Takano, Yukinori; Hirata, Akimasa; Fujiwara, Osamu

    Humans exposed to electric and/or magnetic fields at low frequencies may experience direct effects such as nerve stimulation and excitation. Therefore, the basic restriction is regulated in terms of induced current density in the ICNIRP guidelines and in-situ electric field in the IEEE standard. An external electric or magnetic field which does not produce induced quantities exceeding the basic restriction is used as a reference level. The relationship between the basic restriction and reference level for low-frequency electric and magnetic fields has been investigated using European anatomic models, but only to a limited extent for Japanese models, especially for electric field exposures. In addition, that relationship has not been well discussed. In the present study, we calculated the induced quantities in anatomic Japanese male and female models exposed to electric and magnetic fields at the reference level. A quasi-static finite-difference time-domain (FDTD) method was applied to analyze this problem. As a result, the spatially averaged induced current density was found to be more sensitive to the averaging algorithm than the in-situ electric field. For electric and magnetic field exposure at the ICNIRP reference level, the maximum values of the induced current density for the different averaging algorithms were smaller than the basic restriction for most cases. For exposures at the reference level in the IEEE standard, the maximum electric fields in the brain were larger than the basic restriction for the brain, while smaller for the spinal cord and heart.

  3. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.

    PubMed

    Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M

    2016-02-01

    To generate a global reference for caesarean section (CS) rates at health facilities. Cross-sectional study. Health facilities from 43 countries. Thirty eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10,045,875 women giving birth from 43 countries for model testing. We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customized benchmark for caesarean section rates in health facilities and systems. © 2015 World Health Organization; licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  4. Selection of reference standard during method development using the analytical hierarchy process.

    PubMed

    Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun

    2015-03-25

    Reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria of the reference standard are always immeasurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standard during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in quantitative analysis of six phenolic acids from Salvia Miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance, containing feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of that priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It was an effective and practical tool for optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
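    The core AHP computation, deriving priority weights from a reciprocal pairwise comparison matrix, can be sketched with the geometric-mean (row) method. The 3x3 matrix below is purely illustrative, not the six-criteria judgments used in the study:

```python
import math

# AHP priority weights from a reciprocal pairwise comparison matrix,
# using the geometric-mean (row) method: each row's geometric mean is
# normalised so the weights sum to 1.
def ahp_priorities(pairwise):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative 3-criteria matrix: criterion 0 is judged 3x as important
# as criterion 1 and 5x as important as criterion 2; criterion 1 is 2x
# as important as criterion 2 (lower triangle holds the reciprocals).
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_priorities(M)
```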

  5. Development of a 5 MW reference gearbox for offshore wind turbines: 5 MW reference gearbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejad, Amir Rasekhi; Guo, Yi; Gao, Zhen

    2015-07-27

    This paper presents detailed descriptions, modeling parameters and technical data of a 5MW high-speed gearbox developed for the National Renewable Energy Laboratory offshore 5MW baseline wind turbine. The main aim of this paper is to support the concept studies and research for large offshore wind turbines by providing a baseline gearbox model with detailed modeling parameters. This baseline gearbox follows the most conventional design types of those used in wind turbines. It is based on a four-point support arrangement: two main bearings and two torque arms. The gearbox consists of three stages: two planetary stages and one parallel stage. The gear ratios among the stages are calculated in a way to obtain the minimum gearbox weight. The gearbox components are designed and selected based on the offshore wind turbine design codes and validated by comparison to the data available from large offshore wind turbine prototypes. All parameters required to establish the dynamic model of the gearbox are then provided. Moreover, a maintenance map indicating components with high to low probability of failure is shown. The 5 MW reference gearbox can be used as a baseline for research on wind turbine gearboxes and comparison studies. It can also be employed in global analysis tools to represent a more realistic model of a gearbox in a coupled analysis.

  6. International energy outlook 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-05-01

    This International Energy Outlook presents historical data from 1970 to 1993 and EIA's projections of energy consumption and carbon emissions through 2015 for 6 country groups. Prospects for individual fuels are discussed. Summary tables of the IEO96 world energy consumption, oil production, and carbon emissions projections are provided in Appendix A. The reference case projections of total foreign energy consumption and of natural gas, coal, and renewable energy were prepared using EIA's World Energy Projection System (WEPS) model. Reference case projections of foreign oil production and consumption were prepared using the International Energy Module of the National Energy Modeling System (NEMS). Nuclear consumption projections were derived from the International Nuclear Model, PC Version (PC-INM). Alternatively, nuclear capacity projections were developed using two methods: the lower reference case projections were based on analysts' knowledge of the nuclear programs in different countries; the upper reference case was generated by the World Integrated Nuclear Evaluation System (WINES), a demand-driven model. In addition, the NEMS Coal Export Submodule (CES) was used to derive flows in international coal trade. As noted above, foreign projections of electricity demand are now projected as part of the WEPS. 64 figs., 62 tabs.

  7. Direct model reference adaptive control of robotic arms

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-01-01

    The results of controlling a PUMA 560 Robotic Manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in the simulation. Satisfactory performance was achieved in both simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate the applicability of DMRAC to simulated dynamics of the shuttle RMS.

  8. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    PubMed Central

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224

  9. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
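    The matrix measure mentioned above is the logarithmic norm mu_2(A) = lambda_max((A + A^T)/2), the scalar that enters such stability bounds. A minimal 2x2 closed-form sketch (the full delay-margin bound, which combines this measure with norms of the delayed terms, is not reproduced here):

```python
import math

# Matrix measure (logarithmic norm) mu_2(A) = lambda_max((A + A^T)/2),
# computed in closed form for a 2x2 real matrix: take the symmetric part
# S, then the larger eigenvalue of S from its trace and determinant.
def mu2_2x2(A):
    S = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
```

    A negative measure certifies contraction of the undelayed dynamics (e.g. a stable diagonal system), while a pure rotation has measure zero.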

  10. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short-reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
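    ECHO itself is probabilistic and sets its own parameters; the underlying reference-free idea can nevertheless be shown in its crudest form. The toy below assumes all reads cover the same region at the same offset (a gross simplification of ECHO's overlap discovery) and replaces each base with the column-wise majority when that majority is strong enough:

```python
from collections import Counter

# Toy reference-free correction by column-wise majority vote. Where no
# base reaches the min_frac majority, the original base is kept.
def majority_correct(reads, min_frac=0.7):
    consensus = []
    for i in range(len(reads[0])):
        col = Counter(r[i] for r in reads)
        base, n = col.most_common(1)[0]
        consensus.append(base if n / len(reads) >= min_frac else None)
    return ["".join(c if c is not None else orig
                    for orig, c in zip(r, consensus))
            for r in reads]
```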

  11. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
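    The stability half of the construction, for the AR(2) case, reduces to checking that both roots of the characteristic polynomial lie strictly inside the unit circle. A sketch of just that check (the Adams-Bashforth order-2 consistency constraint of the paper is not reproduced):

```python
import cmath

# Classical stability check for an AR(2) model
#   x_n = a1*x_{n-1} + a2*x_{n-2} + noise:
# both roots of z^2 - a1*z - a2 = 0 must lie inside the unit circle.
def ar2_is_stable(a1, a2):
    disc = cmath.sqrt(a1 * a1 + 4 * a2)   # complex-safe discriminant
    r1 = (a1 + disc) / 2
    r2 = (a1 - disc) / 2
    return abs(r1) < 1 and abs(r2) < 1
```

    For example, (a1, a2) = (0.5, 0.3) is stable, while a random walk (a1, a2) = (1, 0) sits on the unit circle and fails the check.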

  12. Template-based protein-protein docking exploiting pairwise interfacial residue restraints.

    PubMed

    Xue, Li C; Rodrigues, João P G L M; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J

    2017-05-01

    Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, given a reliable template is available. However, there are many different ways to exploit template information in the modeling process. Here, we systematically evaluate and benchmark a TBM method that uses conserved interfacial residue pairs as docking distance restraints [referred to as alpha carbon-alpha carbon (CA-CA)-guided docking]. We compare it with two other template-based protein-protein modeling approaches, including a conserved non-pairwise interfacial residue restrained docking approach [referred to as the ambiguous interaction restraint (AIR)-guided docking] and a simple superposition-based modeling approach. Our results show that, for most cases, the CA-CA-guided docking method outperforms both superposition with refinement and the AIR-guided docking method. We emphasize the superiority of the CA-CA-guided docking on cases with medium to large conformational changes, and interactions mediated through loops, tails or disordered regions. Our results also underscore the importance of a proper refinement of superimposition models to reduce steric clashes. In summary, we provide a benchmarked TBM protocol that uses conserved pairwise interface distance as restraints in generating realistic 3D protein-protein interaction models, when reliable templates are available. The described CA-CA-guided docking protocol is based on the HADDOCK platform, which allows users to incorporate additional prior knowledge of the target system to further improve the quality of the resulting models. © The Author 2016. Published by Oxford University Press.

  13. A physically based analytical spatial air temperature and humidity model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Endreny, Theodore A.; Nowak, David J.

    2013-09-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat storage based on semiempirical functions and generates spatially distributed estimates based on inputs of topography, land cover, and the weather data measured at a reference site. The model assumes that for all grids under the same mesoscale climate, grid air temperature and humidity are modified by local variation in absorbed solar radiation and the partitioning of sensible and latent heat. The model uses a reference grid site for time series meteorological data and the air temperature and humidity of any other grid can be obtained by solving the heat flux network equations. PASATH was coupled with the USDA iTree-Hydro water balance model to obtain evapotranspiration terms and run from 20 to 29 August 2010 at a 360 m by 360 m grid scale and hourly time step across a 285 km2 watershed including the urban area of Syracuse, NY. PASATH predictions were tested at nine urban weather stations representing variability in urban topography and land cover. The PASATH model predictive efficiency R2 ranged from 0.81 to 0.99 for air temperature and 0.77 to 0.97 for dew point temperature. PASATH is expected to have broad applications in environmental and ecological modeling.

  14. MDA-based EHR application security services.

    PubMed

    Blobel, Bernd; Pharow, Peter

    2004-01-01

    Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component orientation, model-driven architecture, and knowledge-based systems, the standardised security services needed have to be specified and implemented in an integrated way following the same paradigm. This concerns the deployment of formal models, meta-languages, reference models such as the ISO RM-ODP, and development as well as implementation tools. Results from international projects presented here proceed along these lines.

  15. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    PubMed Central

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-01-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474
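    The plate-as-scale-reference step can be illustrated for the cylinder shape model with a minimal sketch. The pixel measurements and the single-view simplification below are hypothetical; the paper recovers pose and scale by full 3D/2D registration:

```python
import math

def estimate_cylinder_volume(plate_diam_cm, plate_diam_px,
                             food_diam_px, food_height_px):
    """Convert pixel measurements to centimetres using the known plate
    diameter as the scale reference, then evaluate the fitted cylinder's
    volume in cm^3."""
    cm_per_px = plate_diam_cm / plate_diam_px
    r = 0.5 * food_diam_px * cm_per_px   # cylinder radius, cm
    h = food_height_px * cm_per_px       # cylinder height, cm
    return math.pi * r * r * h
```

    For a 27 cm plate imaged at 540 px, a food item spanning 200 px by 80 px maps to a 10 cm by 4 cm cylinder.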

  16. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.

  17. Design of low noise wind turbine blades using Betz and Joukowski concepts

    NASA Astrophysics Data System (ADS)

    Shen, W. Z.; Hrgovan, I.; Okulov, V.; Zhu, W. J.; Madsen, J.

    2014-06-01

    This paper presents the aerodynamic design of low noise wind turbine blades using Betz and Joukowski concepts. The aerodynamic model is based on Blade Element Momentum theory, whereas the aeroacoustic prediction model is based on the BPM model. The investigation starts with a 3MW baseline/reference turbine rotor with a diameter of 80 m. To reduce the noise emission from the baseline rotor, the rotor is reconstructed with the low noise CQU-DTU-LN1 series of airfoils, which has been tested in the acoustic wind tunnel located at Virginia Tech. Finally, 3MW low noise turbine rotors are designed using the concepts of Betz and Joukowski, and the CQU-DTU-LN1 series of airfoils. Performance analysis shows that the newly designed turbine rotors can achieve an overall noise reduction of 6 dB and 1.5 dB(A) with a similar power output as compared to the reference rotor.
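    Overall reductions like the ones quoted are obtained by combining per-frequency-band levels logarithmically. The summation below is the standard acoustics formula, not anything specific to the BPM model, and the band levels are illustrative:

```python
import math

def overall_level(band_levels_db):
    """Combine per-band sound pressure levels (dB) into one overall level."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in band_levels_db))

def reduction(baseline_bands, redesigned_bands):
    """Overall noise reduction (dB) of a redesigned rotor vs. a baseline."""
    return overall_level(baseline_bands) - overall_level(redesigned_bands)
```

    A useful sanity check: two equal-level bands combine to about 3 dB above either one, since doubling acoustic energy adds 10*log10(2) dB.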

  18. Comparison of two different artificial neural networks for prostate biopsy indication in two different patient populations.

    PubMed

    Stephan, Carsten; Xu, Chuanliang; Finne, Patrik; Cammann, Henning; Meyer, Hellmuth-Alexander; Lein, Michael; Jung, Klaus; Stenman, Ulf-Hakan

    2007-09-01

    Different artificial neural networks (ANNs) using total prostate-specific antigen (PSA) and percentage of free PSA (%fPSA) have been introduced to enhance the specificity of prostate cancer detection. The applicability of independently trained ANN and logistic regression (LR) models to populations that differ in composition (screening versus referred) and in the PSA assay used has not yet been tested. Two ANN and LR models using PSA (range 4 to 10 ng/mL), %fPSA, prostate volume, digital rectal examination findings, and patient age were tested. A multilayer perceptron network (MLP) was trained on 656 screening participants (Prostatus PSA assay) and another ANN (Immulite-based ANN [iANN]) was constructed on 606 multicentric urologically referred men. These and other assay-adapted ANN models, including one new iANN-based ANN, were used. The areas under the curve for the iANN (0.736) and MLP (0.745) were equal but showed no difference from %fPSA (0.725) in the Finnish group. Only the new iANN-based ANN reached a significantly larger area under the curve (0.77). At 95% sensitivity, the specificities of the MLP (33%) and the new iANN-based ANN (34%) were significantly better than those of the iANN (23%) and %fPSA (19%). Reversing the methodology by applying the MLP model to the referred patients revealed, in contrast, a significant improvement in the areas under the curve for the iANN and MLP (each 0.83) compared with %fPSA (0.70). At 90% and 95% sensitivity, the specificities of all LR and ANN models were significantly greater than those for %fPSA. The ANNs based on different PSA assays and populations were mostly comparable, but the clearly different patient composition prevented unbiased application of an ANN to the other cohort, even with assay adaptation. Thus, ANNs can be used in populations other than the one they were built on, but with limitations.
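    The "specificity at 95% sensitivity" comparisons reduce to thresholding classifier output scores. A minimal sketch (the score arrays in the test are hypothetical, not the study's data):

```python
import math

def specificity_at_sensitivity(scores_pos, scores_neg, target_sens=0.95):
    """Pick the highest decision threshold that still flags at least
    target_sens of the positive (cancer) cases, then report the fraction
    of negative cases falling below it (the specificity)."""
    ranked = sorted(scores_pos, reverse=True)
    k = max(1, math.ceil(target_sens * len(ranked)))
    threshold = ranked[k - 1]
    specificity = sum(s < threshold for s in scores_neg) / len(scores_neg)
    return threshold, specificity
```

    Comparing models at a fixed high sensitivity, as done here, matters clinically because missed cancers are far more costly than unnecessary biopsies.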

  19. Observational uncertainty and regional climate model evaluation: A pan-European perspective

    NASA Astrophysics Data System (ADS)

    Kotlarski, Sven; Szabó, Péter; Herrera, Sixto; Räty, Olle; Keuler, Klaus; Soares, Pedro M.; Cardoso, Rita M.; Bosshard, Thomas; Pagé, Christian; Boberg, Fredrik; Gutiérrez, José M.; Jaczewski, Adam; Kreienkamp, Frank; Liniger, Mark A.; Lussana, Cristian; Szepszo, Gabriella

    2017-04-01

    Local and regional climate change assessments based on downscaling methods crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling via regional climate models (RCMs), observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration, with corresponding effects on downscaled climate change projections. Focusing on the evaluation of RCMs, we here analyze the influence of uncertainties in observational reference data on evaluation results in a well-defined performance assessment framework and on a European scale. For this purpose we employ three different gridded observational reference datasets, namely (1) the well-established EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. In terms of climate models, five reanalysis-driven experiments carried out by five different RCMs within the EURO-CORDEX framework are used. Two variables (temperature and precipitation) and a range of evaluation metrics that reflect different aspects of RCM performance are considered. We furthermore include an illustrative model ranking exercise and relate observational spread to RCM spread. The results obtained indicate a varying influence of observational uncertainty on model evaluation depending on the variable, the season, the region and the specific performance metric considered. Over most parts of the continent, the influence of the choice of the reference dataset for temperature is rather small for seasonal mean values and inter-annual variability. Here, model uncertainty (as measured by the spread between the five RCM simulations considered) is typically much larger than reference data uncertainty.
For parameters of the daily temperature distribution and for the spatial pattern correlation, however, important dependencies on the reference dataset can arise. The related evaluation uncertainties can be as large as or even larger than model uncertainty. For precipitation the influence of observational uncertainty is, in general, larger than for temperature. It often dominates model uncertainty, especially for the evaluation of the wet day frequency, the spatial correlation and the shape and location of the distribution of daily values. But even the evaluation of large-scale seasonal mean values can be considerably affected by the choice of the reference. When a simple and illustrative model ranking scheme is applied to these results, it is found that RCM ranking in many cases depends on the reference dataset employed.
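    The ranking exercise amounts to scoring every model against every reference dataset and seeing whether the order changes. A minimal sketch (the model names, reference names and toy fields are illustrative):

```python
def rank_models(model_fields, reference_fields):
    """Rank models by RMSE against each reference dataset.
    Returns {reference name: [model names, best to worst]}."""
    def rmse(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    rankings = {}
    for ref_name, ref in reference_fields.items():
        rankings[ref_name] = sorted(
            model_fields, key=lambda m: rmse(model_fields[m], ref))
    return rankings
```

    With two references that disagree, two models can each "win" under a different reference, which is exactly the ranking instability the study reports.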

  20. Introducing an Equal Rights Framework for Older Persons in Residential Care

    PubMed Central

    Jönson, Håkan; Harnett, Tove

    2016-01-01

    This article reconceptualizes residential care for older persons by introducing a framework developed from a rights-based principle of disability policies: the normalization principle. This principle is part of the social model and states that society should make available for people who have impairments living conditions that are as close as possible to those of “others.” Using the framework on the case of eldercare in Sweden shows that although disability policies have used people without impairments as a comparative (external) reference group for claiming rights, eldercare policies use internal reference groups, basing comparisons on other care users. The article highlights the need for external comparisons in eldercare and suggests that the third age, which so far has been a normative reference group for older people, could be a comparative reference group when older persons in need of care claim rights to equal conditions. PMID:26035884

  1. Management of Computer-Based Instruction: Design of an Adaptive Control Strategy.

    ERIC Educational Resources Information Center

    Tennyson, Robert D.; Rothen, Wolfgang

    1979-01-01

    Theoretical and research literature on learner, program, and adaptive control as forms of instructional management are critiqued in reference to the design of computer-based instruction. An adaptive control strategy using an online, iterative algorithmic model is proposed. (RAO)

  2. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    PubMed

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have arisen with the emergence of 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in the "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric of DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
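    The AR-prediction-error idea can be sketched with a one-tap (left-neighbor) simplification; the paper fits richer local AR models over image neighborhoods, so this is only the gist:

```python
def ar_error_map(image):
    """Fit a single first-order (left-neighbor) AR coefficient by least
    squares over the whole image, then return the absolute
    prediction-error map.  Regions that violate the fitted AR structure
    (e.g. DIBR geometry distortions) show large errors."""
    xs, ys = [], []
    for row in image:
        for left, cur in zip(row, row[1:]):
            xs.append(left)
            ys.append(cur)
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return [[abs(cur - a * left) for left, cur in zip(row, row[1:])]
            for row in image]
```

    An image that follows the AR structure exactly yields a zero error map; distortion-free natural content is well predicted, while geometric artifacts stand out.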

  3. Moving towards ecosystem-based fisheries management: Options for parameterizing multi-species biological reference points

    NASA Astrophysics Data System (ADS)

    Moffitt, Elizabeth A.; Punt, André E.; Holsman, Kirstin; Aydin, Kerim Y.; Ianelli, James N.; Ortiz, Ivonne

    2016-12-01

    Multi-species models can improve our understanding of the effects of fishing so that it is possible to make informed and transparent decisions regarding fishery impacts. Broad application of multi-species assessment models to support ecosystem-based fisheries management (EBFM) requires the development and testing of multi-species biological reference points (MBRPs) for use in harvest-control rules. We outline and contrast several possible MBRPs that range from those that can be readily used in current frameworks to those belonging to a broader EBFM context. We demonstrate each of the possible MBRPs using a simple two species model, motivated by walleye pollock (Gadus chalcogrammus) and Pacific cod (Gadus macrocephalus) in the eastern Bering Sea, to illustrate differences among methods. The MBRPs we outline each differ in how they approach the multiple, potentially conflicting management objectives and trade-offs of EBFM. These options for MBRPs allow multi-species models to be readily adapted for EBFM across a diversity of management mandates and approaches.
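    One of the simplest aggregate MBRP candidates, a system-level F_MSY, can be sketched with equilibrium logistic surplus production. The parameters below are toy values, not the pollock/cod model from the paper:

```python
def multispecies_fmsy(species, f_grid):
    """Grid-search the fishing mortality F maximizing equilibrium total
    yield sum_i F*K_i*(1 - F/r_i) over species given as (r, K) pairs.
    Yield is clipped at zero once F exceeds a species' growth rate r."""
    def total_yield(f):
        return sum(max(0.0, f * k * (1.0 - f / r)) for r, k in species)
    return max(f_grid, key=total_yield)
```

    With a single stock the search recovers the textbook F_MSY = r/2; adding a less productive second stock pulls the system-level optimum below the more productive stock's single-species value, which illustrates the trade-off MBRPs must encode.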

  4. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures.
Examples of these sets for longitudinal and lateral aircraft dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations, when the model is only identified at the current trim condition but these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. A scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
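    The controller-state-governor idea, resetting the integrator so constraints stay feasible, can be illustrated on a simple actuator-saturation constraint. This is a simplification: the dissertation enforces state constraints via integral safe sets, and the PI gains and limit here are hypothetical:

```python
import math

def csg_step(x, integrator, ref, kp, ki, u_max):
    """One step of a PI loop with a CSG-style reset: if the raw PI
    command would violate |u| <= u_max, reset the integrator to the
    largest feasible value instead of letting it wind up (ki != 0)."""
    err = ref - x
    u = kp * err + ki * (integrator + err)
    if abs(u) > u_max:
        # integrator value that makes the command exactly feasible
        integrator = (math.copysign(u_max, u) - kp * err) / ki - err
        u = kp * err + ki * (integrator + err)
    else:
        integrator += err
    return u, integrator
```

    Resetting the integrator (rather than merely clipping the command) also shrinks the stored state, which is what enlarges the set of recoverable plant states in the CSG scheme.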

  5. Managed Development Environment Successes for MSFC's VIPA Team

    NASA Technical Reports Server (NTRS)

    Finckenor, Jeff; Corder, Gary; Owens, James; Meehan, Jim; Tidwell, Paul H.

    2005-01-01

    This paper outlines the best practices of the Vehicle Design Team for VIPA. The functions of the VIPA Vehicle Design (VVD) discipline team are to maintain the controlled reference geometry and provide linked, simplified geometry for each of the other discipline analyses. The core of the VVD work, and the approach for VVD's first task of controlling the reference geometry, involves systems engineering, top-down, layout-based CAD modeling within a Product Data Manager (PDM) development environment. The top-down approach allows for simple control of very large, integrated assemblies and greatly enhances the ability to generate trade configurations and reuse data. The second VVD task, model simplification for analysis, is handled within the managed environment through application of the master model concept. In this approach, there is a single controlling, or master, product definition dataset. Connected to this master model are reference datasets with live geometric and expression links. The referenced models can be for drawings, manufacturing, visualization, embedded analysis, or analysis simplification. A discussion of web-based interaction, including visualization, between the design and other disciplines is included. Demonstrated examples are cited, including the Space Launch Initiative development cycle, the Saturn V systems integration and verification cycle, an Orbital Space Plane study, and NASA Exploration Office studies of Shuttle derived and clean sheet launch vehicles. The VIPA Team has brought an immense amount of detailed data to bear on program issues. A central piece of that success has been the Managed Development Environment and the VVD Team approach to modeling.

  6. Implementation of a Sage-Based Stirling Model Into a System-Level Numerical Model of the Fission Power System Technology Demonstration Unit

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.

    2011-01-01

    The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.

  7. Shuttle derived atmospheric density model. Part 1: Comparisons of the various ambient atmospheric source data with derived parameters from the first twelve STS entry flights, a data package for AOTV atmospheric development

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Troutman, P. A.

    1984-01-01

    The ambient atmospheric parameter comparisons versus derived values from the first twelve Space Shuttle Orbiter entry flights are presented. Available flights, flight data products, and data sources utilized are reviewed. Comparisons are presented based on remote meteorological measurements as well as two comprehensive models which incorporate latitudinal and seasonal effects. These are the Air Force 1978 Reference Atmosphere and the Marshall Space Flight Center Global Reference Model (GRAM). Atmospheric structure evident in the Shuttle flight data is shown and discussed. A model for consideration in Aero-assisted Orbital Transfer Vehicle (AOTV) trajectory analysis is proposed, which modifies the GRAM data to emulate the Shuttle experiments.

  8. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    PubMed Central

    Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-01-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the solutions with an absorbing set, obtained from the differential equations, are plotted and verified. Through forward inference, the reliability value of the control unit is determined under different kinds of modes. In addition, weak nodes in the control unit are identified. PMID:29765629
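    The Markov part of the scheme, propagating state probabilities toward an absorbing failed state, can be sketched in discrete time. The 3-state chain and its transition rates below are illustrative, not the paper's control-unit model:

```python
def propagate(p0, P, steps):
    """Discrete-time Markov propagation: p_{t+1} = p_t P."""
    p = list(p0)
    n = len(p)
    for _ in range(steps):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

# three degradation states: good, degraded, failed (absorbing)
P = [[0.9, 0.1, 0.0],
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 1.0]]
```

    Reliability after t steps is then 1 minus the probability mass captured by the absorbing failed state.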

  9. Use of an eight-arm radial water maze to assess working and reference memory following neonatal brain injury.

    PubMed

    Penley, Stephanie C; Gaudet, Cynthia M; Threlkeld, Steven W

    2013-12-04

    Working and reference memory are commonly assessed using the land-based radial arm maze. However, this paradigm requires pretraining, food deprivation, and may introduce scent cue confounds. The eight-arm radial water maze is designed to evaluate reference and working memory performance simultaneously by requiring subjects to use extra-maze cues to locate escape platforms, and it remedies the limitations observed in land-based radial arm maze designs. Specifically, subjects are required to avoid the arms previously used for escape during each testing day (working memory) as well as avoid the fixed arms, which never contain escape platforms (reference memory). Re-entries into arms that have already been used for escape during a testing session (and thus the escape platform has been removed) and re-entries into reference memory arms are indicative of working memory deficits. Alternatively, first entries into reference memory arms are indicative of reference memory deficits. We used this maze to compare performance of rats with neonatal brain injury and sham controls following induction of hypoxia-ischemia and show significant deficits in both working and reference memory after eleven days of testing. This protocol could be easily modified to examine many other models of learning impairment.
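    The scoring rules described above can be written directly as a session scorer. Arm labels are arbitrary, and the error definitions follow the protocol as stated in the record:

```python
def score_session(entries, reference_arms):
    """Score one radial water maze session from the ordered arm-entry list.
    Working-memory errors: re-entry into an escape arm already used this
    session (platform removed), or repeat entry into a reference arm.
    Reference-memory errors: first entry into a never-platformed arm."""
    used_escape, visited_ref = set(), set()
    wm = rm = 0
    for arm in entries:
        if arm in reference_arms:
            if arm in visited_ref:
                wm += 1
            else:
                rm += 1
                visited_ref.add(arm)
        elif arm in used_escape:
            wm += 1
        else:
            used_escape.add(arm)  # platform found and removed
    return wm, rm
```

    A session entering arms 3, 1, 3, 1, 4 with arms 1 and 2 as fixed reference arms scores two working-memory errors and one reference-memory error.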

  10. Accuracy of computer-aided design models of the jaws produced using ultra-low MDCT doses and ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig

    2018-06-16

    To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.
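    After the 'Best Fit Alignment' step, the reported statistics reduce to simple aggregates of per-point deviations. A sketch over corresponding point pairs (correspondence is assumed given here, whereas the inspection software measures closest-surface distances):

```python
import math

def deviation_stats(test_pts, ref_pts):
    """Max, mean and root-mean-square of Euclidean deviations between
    corresponding test and reference surface points (e.g. in mm)."""
    ds = [math.dist(t, r) for t, r in zip(test_pts, ref_pts)]
    n = len(ds)
    return max(ds), sum(ds) / n, math.sqrt(sum(d * d for d in ds) / n)
```

    The RMS weights large local deviations more heavily than the mean, which is why the study reports both alongside the maximum.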

  11. Review on DTU-parton model for hh and hA collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, C.B.

    1981-02-01

    Recently several groups have considered small-p_T models, which combine features from both the parton model and the DTU model. We shall refer to them loosely as the DTU-parton model. In this talk, we take a definite point of view to motivate this model, and based on this framework we briefly survey its phenomenological applications to hadron-hadron and hadron-nucleus collisions.

  12. Recent literature on structural modeling, identification, and analysis

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1990-01-01

    The literature on the mathematical modeling of large space structures is first reviewed, with attention given to continuum models, model order reduction, substructuring, and computational techniques. System identification and mode verification are then discussed with reference to the verification of mathematical models of large space structures. In connection with analysis, the paper surveys recent research on eigensolvers and dynamic response solvers for large-order finite-element-based models.

  13. Operational Exploitation of Satellite-Based Sounding Data and Numerical Weather Prediction Models for Directed Energy Applications

    DTIC Science & Technology

    2015-12-01

    [The record's abstract field contains only extraction fragments from the report's table of contents, figure list, and acronym list:] Verification Tool for Laser Environmental Effects Definition and Reference (LEEDR) Development ... 45; 3.5 Gap Filling with NWP ...; effective cloud cover for all cloud layers within the AIRS field-of-view ... 59; Figure 37. Average wind ...; IR Infrared; JPL Jet Propulsion Lab; LEEDR Laser Environmental Effects Definition and Reference; LIDAR Light Detection and Ranging; MODIS Moderate

  14. Teaching Writing within the Common European Framework of Reference (CEFR): A Supplement Asynchronous Blended Learning Approach in an EFL Undergraduate Course in Egypt

    ERIC Educational Resources Information Center

    Shaarawy, Hanaa Youssef; Lotfy, Nohayer Esmat

    2013-01-01

    Based on the Common European Framework of Reference (CEFR) and following a blended learning approach (a supplement model), this article reports on a quasi-experiment where writing was taught evenly with other language skills in everyday language contexts and where asynchronous online activities were required from students to extend learning beyond…

  15. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
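    The time-domain estimation step can be sketched for the pure-AR(2) case via least squares on the normal equations; the full study uses maximum-likelihood fits of general ARMA models:

```python
def fit_ar2(x):
    """Least-squares fit of x_t = a1*x_{t-1} + a2*x_{t-2} + e_t by
    solving the 2x2 normal equations directly (Cramer's rule)."""
    s11 = sum(v * v for v in x[1:-1])
    s22 = sum(v * v for v in x[:-2])
    s12 = sum(a * b for a, b in zip(x[1:-1], x[:-2]))
    b1 = sum(a * b for a, b in zip(x[2:], x[1:-1]))
    b2 = sum(a * b for a, b in zip(x[2:], x[:-2]))
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det
```

    On a noise-free AR(2) sequence the fit recovers the generating coefficients to machine precision; with recorded accelerograms the residual test mentioned in the abstract checks whether the remaining errors behave like white noise.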

  16. Combining cow and bull reference populations to increase accuracy of genomic prediction and genome-wide association studies.

    PubMed

    Calus, M P L; de Haas, Y; Veerkamp, R F

    2013-10-01

    Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, being milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. Our results were that the developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits, even though animals had measurements on only 1 of 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. 
    Our results emphasize that the chosen values of priors in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. The International Reference Ionosphere 2012 - a model of international collaboration

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter; Altadill, David; Zhang, Yongliang; Mertens, Chris; Truhlik, Vladimir; Richards, Phil; McKinnell, Lee-Anne; Reinisch, Bodo

    2014-02-01

    The International Reference Ionosphere (IRI) project was established jointly by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI) in the late sixties with the goal to develop an international standard for the specification of plasma parameters in the Earth's ionosphere. COSPAR needed such a specification for the evaluation of environmental effects on spacecraft and experiments in space, and URSI for radiowave propagation studies and applications. At the request of COSPAR and URSI, IRI was developed as a data-based model to avoid the uncertainty of theory-based models which are only as good as the evolving theoretical understanding. Being based on most of the available and reliable observations of the ionospheric plasma from the ground and from space, IRI describes monthly averages of electron density, electron temperature, ion temperature, ion composition, and several additional parameters in the altitude range from 60 km to 2000 km. A working group of about 50 international ionospheric experts is in charge of developing and improving the IRI model. Over time as new data became available and new modeling techniques emerged, steadily improved editions of the IRI model have been published. This paper gives a brief history of the IRI project and describes the latest version of the model, IRI-2012. It also briefly discusses efforts to develop a real-time IRI model. The IRI homepage is at http://IRImodel.org.

  18. Accurate reconstruction of 3D cardiac geometry from coarsely-sliced MRI.

    PubMed

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Berenfeld, Omer; Snyder, Brett; Boyers, Pamela; Gold, Jeffrey

    2014-02-01

    We present a comprehensive validation analysis to assess the geometric impact of using coarsely-sliced short-axis images to reconstruct patient-specific cardiac geometry. The methods utilize high-resolution diffusion tensor MRI (DTMRI) datasets as reference geometries from which synthesized coarsely-sliced datasets simulating in vivo MRI were produced. 3D models are reconstructed from the coarse data using variational implicit surfaces through a commonly used modeling tool, CardioViz3D. The resulting geometries were then compared to the reference DTMRI models from which they were derived to analyze how well the synthesized geometries approximate the reference anatomy. Averaged over seven hearts, 95% spatial overlap, less than 3% volume variability, and normal-to-surface distance of 0.32 mm was observed between the synthesized myocardial geometries reconstructed from 8 mm sliced images and the reference data. The results provide strong supportive evidence to validate the hypothesis that coarsely-sliced MRI may be used to accurately reconstruct geometric ventricular models. Furthermore, the use of DTMRI for validation of in vivo MRI presents a novel benchmark procedure for studies which aim to substantiate their modeling and simulation methods using coarsely-sliced cardiac data. In addition, the paper outlines a suggested original procedure for deriving image-based ventricular models using the CardioViz3D software. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
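    Spatial-overlap figures like the 95% reported above are conventionally quantified with the Dice similarity coefficient between binary voxel masks; a minimal sketch (the function name and toy masks are illustrative, not taken from the study):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary voxel masks,
    here represented as sets of (i, j, k) voxel indices."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks are identical by convention
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

# Toy example: reference mask vs. a reconstruction missing one voxel
ref = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)}
rec = {(0, 0, 0), (1, 0, 0), (0, 1, 0)}
print(dice_coefficient(ref, rec))  # 2*3/(4+3) ≈ 0.857
```

    A value of 0.95 or above, as in the study, indicates near-complete agreement between the synthesized and reference geometries.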

  19. Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging.

    PubMed

    Kolb, Christoph; Wetscherek, Andreas; Buzan, Maria Teodora; Werner, René; Rank, Christopher M; Kachelrieß, Marc; Kreuter, Michael; Dinkel, Julien; Heußel, Claus Peter; Maier-Hein, Klaus

    We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and 4D CT maps of subjects without impaired lung function (serving as reference). Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison to 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and comparison of patient data. The proposed atlas-based reference coordinate system provides an automated manner of annotating and comparing multimodal lung image data.

  20. Development of a clinical pharmacy model within an Australian home nursing service using co-creation and participatory action research: the Visiting Pharmacist (ViP) study

    PubMed Central

    Lee, Cik Yin; Beanland, Christine; Goeman, Dianne P; Petrie, Neil; Petrie, Barbara; Vise, Felicity; Gray, June

    2017-01-01

    Objective To develop a collaborative, person-centred model of clinical pharmacy support for community nurses and their medication management clients. Design Co-creation and participatory action research, based on reflection, data collection, interaction and feedback from participants and other stakeholders. Setting A large, non-profit home nursing service in Melbourne, Australia. Participants Older people referred to the home nursing service for medication management, their carers, community nurses, general practitioners (GPs) and pharmacists, a multidisciplinary stakeholder reference group (including consumer representation) and the project team. Data collection and analysis Feedback and reflections from minutes, notes and transcripts from: project team meetings, clinical pharmacists’ reflective diaries and interviews, meetings with community nurses, reference group meetings and interviews and focus groups with 27 older people, 18 carers, 53 nurses, 15 GPs and seven community pharmacists. Results The model was based on best practice medication management standards and designed to address key medication management issues raised by stakeholders. Pharmacist roles included direct client care and indirect care. Direct care included home visits, medication reconciliation, medication review, medication regimen simplification, preparation of medication lists for clients and nurses, liaison and information sharing with prescribers and pharmacies and patient/carer education. Indirect care included providing medicines information and education for nurses and assisting with review and implementation of organisational medication policies and procedures. The model allowed nurses to refer directly to the pharmacist, enabling timely resolution of medication issues. Direct care was provided to 84 older people over a 15-month implementation period. 
Ongoing feedback and consultation, in line with participatory action research principles, informed the development and refinement of the model and identification of enablers and challenges. Conclusions A collaborative, person-centred clinical pharmacy model that addressed the needs of clients, carers, nurses and other stakeholders was successfully developed. The model is likely to have applicability to home nursing services nationally and internationally. PMID:29102998

  1. Historical trends and high-resolution future climate projections in northern Tuscany (Italy)

    NASA Astrophysics Data System (ADS)

    D'Oria, Marco; Ferraresi, Massimo; Tanda, Maria Giovanna

    2017-12-01

    This paper analyzes the historical precipitation and temperature trends and the future climate projections with reference to the northern part of Tuscany (Italy). The trends are identified and quantified at monthly and annual scale at gauging stations with data collected for long periods (60-90 years). An ensemble of 13 Regional Climate Models (RCMs), based on two Representative Concentration Pathways (RCP4.5 and RCP8.5), was then used to assess local scale future precipitation and temperature projections and to represent the uncertainty in the results. The historical data highlight a general decrease of the annual rainfall at a mean rate of 22 mm per decade but, in many cases, the tendencies are not statistically significant. Conversely, the annual mean temperature exhibits an upward trend, statistically significant in the majority of cases, with a warming rate of about 0.1 °C per decade. With reference to the model projections and the annual precipitation, the results are not concordant; the deviations between models in the same period are higher than the future changes at medium- (2031-2040) and long-term (2051-2060) and highlight that the model uncertainty and variability is high. According to the climate model projections, the warming of the study area is unequivocal; a mean positive increment of 0.8 °C at medium-term and 1.1 °C at long-term is expected with respect to the reference period (2003-2012) and the scenario RCP4.5; the increments grow to 0.9 °C and 1.9 °C for the RCP8.5. Finally, in order to check the observed climate change signals, the climate model projections were compared with the trends based on the historical data. A satisfactory agreement is obtained with reference to the precipitation; a systematic underestimation of the trend values with respect to the models, at medium- and long-term, is observed for the temperature data.
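    Decadal trend rates like the 22 mm and 0.1 °C figures above come from fitting annual series against time; a minimal least-squares sketch on synthetic data (the authors' significance testing, typically a Mann-Kendall-type test, is not reproduced here):

```python
def ols_slope_per_decade(years, values):
    """Ordinary least-squares slope of values vs. years, scaled to units per decade."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den  # per-year slope -> per-decade rate

# Synthetic annual mean temperatures warming at exactly 0.1 °C per decade
years = list(range(1950, 2011))
temps = [14.0 + 0.01 * (y - 1950) for y in years]
print(round(ols_slope_per_decade(years, temps), 3))  # 0.1
```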

  2. Psychophysiological effects of a web-based stress management system: a prospective, randomized controlled intervention study of IT and media workers [ISRCTN54254861].

    PubMed

    Hasson, Dan; Anderberg, Ulla Maria; Theorell, Töres; Arnetz, Bengt B

    2005-07-25

    The aim of the present study was to assess possible effects on mental and physical well-being and stress-related biological markers of a web-based health promotion tool. A randomized, prospectively controlled study was conducted with before and after measurements, involving 303 employees (187 men and 116 women, age 23-64) from four information technology and two media companies. Half of the participants were offered web-based health promotion and stress management training (intervention) lasting for six months. All other participants constituted the reference group. Different biological markers were measured to detect possible physiological changes. After six months the intervention group had improved statistically significantly compared to the reference group on ratings of ability to manage stress, sleep quality, mental energy, concentration ability and social support. The anabolic hormone dehydroepiandrosterone sulphate (DHEA-S) decreased significantly in the reference group as compared to unchanged levels in the intervention group. Neuropeptide Y (NPY) increased significantly in the intervention group compared to the reference group. Chromogranin A (CgA) decreased significantly in the intervention group as compared to the reference group. Tumour necrosis factor alpha (TNFalpha) decreased significantly in the reference group compared to the intervention group. Logistic regression analysis revealed that group (intervention vs. reference) remained a significant factor in five out of nine predictive models. The results indicate that an automatic web-based system might have short-term beneficial physiological and psychological effects and thus might be an opportunity in counteracting some clinically relevant and common stress and health issues of today.

  3. Second-order sliding mode controller with model reference adaptation for automatic train operation

    NASA Astrophysics Data System (ADS)

    Ganesan, M.; Ezhilarasi, D.; Benni, Jijo

    2017-11-01

    In this paper, a new approach to model reference based adaptive second-order sliding mode control together with adaptive state feedback is presented to control the longitudinal dynamic motion of a high speed train for automatic train operation with the objective of minimal jerk travel by the passengers. The nonlinear dynamic model for the longitudinal motion of the train, comprising locomotive and coach subsystems, is constructed using a multiple point-mass model by considering the forces acting on the vehicle. An adaptation scheme using the Lyapunov criterion is derived to tune the controller gains by considering a linear, stable reference model that ensures the stability of the system in closed loop. The effectiveness of the controller tracking performance is tested under uncertain passenger load, coupler-draft gear parameters, propulsion resistance coefficient variations and environmental disturbances due to side wind and wet rail conditions. The results demonstrate improved tracking performance of the proposed control scheme with the least jerk under maximum parameter uncertainties when compared to constant gain second-order sliding mode control.
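    As background for the model reference adaptation above, a textbook first-order Lyapunov MRAC illustrates the core idea of tuning controller gains so the plant tracks a stable reference model; this is a sketch with illustrative parameters, not the authors' second-order sliding-mode design or their train model:

```python
def simulate_mrac(T=20.0, dt=0.001, a=1.0, b=1.0, am=2.0, bm=2.0,
                  gamma=2.0, r=1.0):
    """First-order Lyapunov MRAC: the unstable plant x' = a*x + b*u is driven
    to track the stable reference model xm' = -am*xm + bm*r by the control
    u = th_x*x + th_r*r, with adaptive laws th_x' = -gamma*e*x and
    th_r' = -gamma*e*r, where e = x - xm is the tracking error."""
    x = xm = th_x = th_r = 0.0
    for _ in range(int(T / dt)):  # forward-Euler integration
        e = x - xm
        u = th_x * x + th_r * r
        dx, dxm = a * x + b * u, -am * xm + bm * r
        dthx, dthr = -gamma * e * x, -gamma * e * r
        x, xm = x + dt * dx, xm + dt * dxm
        th_x, th_r = th_x + dt * dthx, th_r + dt * dthr
    return x, xm, x - xm

x_T, xm_T, e_T = simulate_mrac()
print(round(xm_T, 3), round(e_T, 4))  # xm settles at bm*r/am = 1.0; e shrinks
```

    The Lyapunov function V = e²/2 + ((θx − θx*)² + (θr − θr*)²)/(2γ) is non-increasing under these laws, which is what guarantees closed-loop stability in such schemes.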

  4. A Study of the Efficacy of Project-Based Learning Integrated with Computer-Based Simulation--STELLA

    ERIC Educational Resources Information Center

    Eskrootchi, Rogheyeh; Oskrochi, G. Reza

    2010-01-01

    Incorporating computer-simulation modelling into project-based learning may be effective but requires careful planning and implementation. Teachers, especially, need pedagogical content knowledge which refers to knowledge about how students learn from materials infused with technology. This study suggests that students learn best by actively…

  5. Estimating missing hourly climatic data using artificial neural network for energy balance based ET mapping applications

    USDA-ARS?s Scientific Manuscript database

    Remote sensing based evapotranspiration (ET) mapping has become an important tool for water resources management at a regional scale. Accurate hourly climatic data and reference ET are crucial input for successfully implementing remote sensing based ET models such as Mapping ET with internal calibra...

  6. Estimating missing hourly climatic data using artificial neural network for energy balance based ET mapping applications

    USDA-ARS?s Scientific Manuscript database

    Remote sensing based evapotranspiration (ET) mapping is an important improvement for water resources management. Hourly climatic data and reference ET are crucial for implementing remote sensing based ET models such as METRIC and SEBAL. In Turkey, data on all climatic variables may not be available ...

  7. An assessment of the near-surface accuracy of the international geomagnetic reference field 1980 model of the main geomagnetic field

    USGS Publications Warehouse

    Peddie, N.W.; Zunde, A.K.

    1985-01-01

    The new International Geomagnetic Reference Field (IGRF) model of the main geomagnetic field for 1980 is based heavily on measurements from the MAGSAT satellite survey. Assessment of the accuracy of the new model, as a description of the main field near the Earth's surface, is important because the accuracy of models derived from satellite data can be adversely affected by the magnetic field of electric currents in the ionosphere and the auroral zones. Until now, statements about its accuracy have been based on the 6 published assessments of the 2 proposed models from which it was derived. However, those assessments were either regional in scope or were based mainly on preliminary or extrapolated data. Here we assess the near-surface accuracy of the new model by comparing it with values for 1980 derived from annual means from 69 magnetic observatories, and by comparing it with WC80, a model derived from near-surface data. The comparison with observatory-derived data shows that the new model describes the field at the 69 observatories about as accurately as would a model derived solely from near-surface data. The comparison with WC80 shows that the 2 models agree closely in their description of D and I near the surface. These comparisons support the proposition that the new IGRF 1980 main-field model is a generally accurate description of the main field near the Earth's surface in 1980. © 1985.

  8. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    PubMed

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
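    The Ogata-Banks solution adopted as the data model above is the classical closed-form breakthrough curve for 1-D advection-dispersion with a continuous source, C(x,t) = (C0/2)[erfc((x − vt)/(2√(Dt))) + exp(vx/D)·erfc((x + vt)/(2√(Dt)))]; a sketch with illustrative parameter values (not the EIT site values):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks (1961) solution of the 1-D advection-dispersion equation
    for continuous injection at x=0: C(0,t)=c0, C(x,0)=0 for x>0.
    v: mean pore velocity, D: dispersion coefficient."""
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    return 0.5 * c0 * (term1 + term2)

# Illustrative parameters: v = 1 m/d, D = 0.5 m^2/d, after 10 days
print(round(ogata_banks(0.0, 10.0, 1.0, 0.5), 4))   # ≈ 1.0 at the inlet
print(round(ogata_banks(50.0, 10.0, 1.0, 0.5), 4))  # ≈ 0.0 far downstream
```

    Concentration decays monotonically with distance at a fixed time, which is the qualitative behavior the data-driven method fits to the monitoring records.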

  9. A Phrase-Based Matching Function.

    ERIC Educational Resources Information Center

    Galbiati, Giulia

    1991-01-01

    Describes the development of an information retrieval system designed for nonspecialist users that is based on the binary vector model. The syntactic structure of phrases used for indexing is examined, queries using an experimental collection of documents are described, and precision values are examined. (19 references) (LRW)

  10. Batch statistical process control of a fluid bed granulation process using in-line spatial filter velocimetry and product temperature measurements.

    PubMed

    Burggraeve, A; Van den Kerkhof, T; Hellings, M; Remon, J P; Vervaet, C; De Beer, T

    2011-04-18

    Fluid bed granulation is a batch process, which is characterized by the processing of raw materials for a predefined period of time, consisting of a fixed spraying phase and a subsequent drying period. The present study shows the multivariate statistical modeling and control of a fluid bed granulation process based on in-line particle size distribution (PSD) measurements (using spatial filter velocimetry) combined with continuous product temperature registration using a partial least squares (PLS) approach. Via the continuous in-line monitoring of the PSD and product temperature during granulation of various reference batches, a statistical batch model was developed allowing the real-time evaluation and acceptance or rejection of future batches. Continuously monitored PSD and product temperature process data of 10 reference batches (X-data) were used to develop a reference batch PLS model, regressing the X-data versus the batch process time (Y-data). Two PLS components captured 98.8% of the variation in the X-data block. Score control charts in which the average batch trajectory and upper and lower control limits are displayed were developed. Next, these control charts were used to monitor 4 new test batches in real-time and to immediately detect any deviations from the expected batch trajectory. By real-time evaluation of new batches using the developed control charts and by computation of contribution plots of deviating process behavior at a certain time point, batch losses or reprocessing can be prevented. Immediately after batch completion, all PSD and product temperature information (i.e., a batch progress fingerprint) was used to estimate some granule properties (density and flowability) at an early stage, which can improve batch release time. Individual PLS models relating the computed scores (X) of the reference PLS model (based on the 10 reference batches) and the density and flowability, respectively, as Y-matrices, were developed. 
The scores of the 4 test batches were used to examine the predictive ability of the model. Copyright © 2011 Elsevier B.V. All rights reserved.
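    The score control charts described above amount, at each time point, to limits built from the reference batches' score statistics; a minimal sketch using mean ± 3σ limits (the PLS projection itself is not reproduced here, and the toy trajectories are illustrative):

```python
def score_control_limits(reference_batches, k=3.0):
    """Per-time-point control limits (mean ± k*std) computed from the score
    trajectories of reference batches, as used in batch score control charts."""
    n = len(reference_batches)
    limits = []
    for scores_at_t in zip(*reference_batches):  # transpose: iterate time points
        mean = sum(scores_at_t) / n
        std = (sum((s - mean) ** 2 for s in scores_at_t) / (n - 1)) ** 0.5
        limits.append((mean - k * std, mean, mean + k * std))
    return limits

def out_of_control(batch, limits):
    """Time indices at which a new batch leaves the control region."""
    return [i for i, (s, (lo, _, hi)) in enumerate(zip(batch, limits))
            if not lo <= s <= hi]

# Toy reference trajectories (e.g., scores on the first PLS component)
refs = [[0.0, 1.0, 2.0], [0.2, 1.1, 2.1], [-0.2, 0.9, 1.9]]
lims = score_control_limits(refs)
print(out_of_control([0.1, 1.0, 5.0], lims))  # deviation at the last time point -> [2]
```

    A flagged time index would then be followed up with a contribution plot to locate which process variables drove the deviation.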

  11. PET brain kinetics studies of 11C-ITMM and 11C-ITDM, radioprobes for metabotropic glutamate receptor type 1, in a nonhuman primate

    PubMed Central

    Yamasaki, Tomoteru; Maeda, Jun; Fujinaga, Masayuki; Nagai, Yuji; Hatori, Akiko; Yui, Joji; Xie, Lin; Nengaki, Nobuki; Zhang, Ming-Rong

    2014-01-01

    The metabotropic glutamate receptor type 1 (mGluR1) is a novel target protein for the development of new drugs against central nervous system disorders. Recently, we have developed 11C-labeled PET probes 11C-ITMM and 11C-ITDM, which demonstrate similar profiles, for imaging of mGluR1. In the present study, we compared 11C-ITMM and 11C-ITDM PET imaging and quantitative analysis in the monkey brain. Respective PET images showed similar distribution of uptake in the cerebellum, thalamus, and cingulate cortex. Slightly higher uptake was detected with 11C-ITDM than with 11C-ITMM. For the kinetic analysis using the two-tissue compartment model (2-TCM), the distribution volume (VT) in the cerebellum, an mGluR1-rich region in the brain, was 2.5 mL∙cm-3 for 11C-ITMM and 3.6 mL∙cm-3 for 11C-ITDM. By contrast, the VT in the pons, a region with negligible mGluR1 expression, was similarly low for both radiopharmaceuticals. Based on these results, we performed noninvasive PET quantitative analysis with general reference tissue models using the time-activity curve of the pons as a reference region. We confirmed the relationship and differences between the reference tissue models and 2-TCM using correlational scatter plots and Bland-Altman plots analyses. Although the scattergrams of both radiopharmaceuticals showed over- or underestimations of the reference tissue model-based binding potentials against 2-TCM, there were no significant differences between the two kinetic analysis models. In conclusion, we first demonstrated the potential of 11C-ITMM and 11C-ITDM for noninvasive PET quantitative analysis using reference tissue models. In addition, our findings suggest that 11C-ITDM may be superior to 11C-ITMM as a PET probe for imaging of mGluR1, because regional VT values in PET with 11C-ITDM were higher than those of 11C-ITMM. Clinical studies of 11C-ITDM in humans will be necessary in the future. PMID:24795840

  12. PET brain kinetics studies of (11)C-ITMM and (11)C-ITDM, radioprobes for metabotropic glutamate receptor type 1, in a nonhuman primate.

    PubMed

    Yamasaki, Tomoteru; Maeda, Jun; Fujinaga, Masayuki; Nagai, Yuji; Hatori, Akiko; Yui, Joji; Xie, Lin; Nengaki, Nobuki; Zhang, Ming-Rong

    2014-01-01

    The metabotropic glutamate receptor type 1 (mGluR1) is a novel target protein for the development of new drugs against central nervous system disorders. Recently, we have developed (11)C-labeled PET probes (11)C-ITMM and (11)C-ITDM, which demonstrate similar profiles, for imaging of mGluR1. In the present study, we compared (11)C-ITMM and (11)C-ITDM PET imaging and quantitative analysis in the monkey brain. Respective PET images showed similar distribution of uptake in the cerebellum, thalamus, and cingulate cortex. Slightly higher uptake was detected with (11)C-ITDM than with (11)C-ITMM. For the kinetic analysis using the two-tissue compartment model (2-TCM), the distribution volume (VT) in the cerebellum, an mGluR1-rich region in the brain, was 2.5 mL∙cm(-3) for (11)C-ITMM and 3.6 mL∙cm(-3) for (11)C-ITDM. By contrast, the VT in the pons, a region with negligible mGluR1 expression, was similarly low for both radiopharmaceuticals. Based on these results, we performed noninvasive PET quantitative analysis with general reference tissue models using the time-activity curve of the pons as a reference region. We confirmed the relationship and differences between the reference tissue models and 2-TCM using correlational scatter plots and Bland-Altman plots analyses. Although the scattergrams of both radiopharmaceuticals showed over- or underestimations of the reference tissue model-based binding potentials against 2-TCM, there were no significant differences between the two kinetic analysis models. In conclusion, we first demonstrated the potential of (11)C-ITMM and (11)C-ITDM for noninvasive PET quantitative analysis using reference tissue models. In addition, our findings suggest that (11)C-ITDM may be superior to (11)C-ITMM as a PET probe for imaging of mGluR1, because regional VT values in PET with (11)C-ITDM were higher than those of (11)C-ITMM. Clinical studies of (11)C-ITDM in humans will be necessary in the future.

  13. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique).

    PubMed

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both the average reference (AR) and the reference electrode standardization technique (REST) are two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise to signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. 
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
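    For context on the AR special case above: the average reference is the simplest linear re-referencing operator, obtained in the authors' framework under the non-informative prior, and it just removes the instantaneous channel mean; a minimal sketch with toy values:

```python
def average_reference(potentials):
    """Re-reference one time sample of EEG potentials to the average
    reference (AR): subtract the channel mean, so that the re-referenced
    channels sum to zero."""
    mean = sum(potentials) / len(potentials)
    return [v - mean for v in potentials]

# Toy single-time-sample recording from 4 channels (arbitrary units)
v = [3.0, 1.0, -2.0, 6.0]
v_ar = average_reference(v)
print(v_ar)       # [1.0, -1.0, -4.0, 4.0]
print(sum(v_ar))  # 0.0 -- zero-sum constraint of the average reference
```

    REST and the regularized estimators replace this simple mean subtraction with a lead-field-informed (and, for rAR/rREST, noise-regularized) linear transform, but the output is likewise a fixed linear re-referencing of the recorded channels.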

  14. Towards a Theoretical Framework for Educational Simulations.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Vazquez-Abad, Jesus

    1981-01-01

    Discusses the need for a sustained and systematic effort toward establishing a theoretical framework for educational simulations, proposes the adaptation of models borrowed from the natural and applied sciences, and describes three simulations based on such a model adapted using Brunerian learning theory. Sixteen references are listed. (LLS)

  15. A Model for Teaching Critical Thinking through Online Searching.

    ERIC Educational Resources Information Center

    Crane, Beverley; Markowitz, Nancy Lourie

    1994-01-01

    Presents a model that uses online searching to teach critical thinking skills in elementary and secondary education based on Bloom's taxonomy. Three levels of activity are described: analyzing a search statement; defining and clarifying a problem; and focusing an information need. (Contains 13 references.) (LRW)

  16. Revision of civil aircraft noise data for the Integrated Noise Model (INM)

    DOT National Transportation Integrated Search

    1986-09-30

    This report provides noise data for the Integrated Noise Model (INM) and is referred to as data base number nine. Air-to-ground sound level versus distance data for civil (and some military) aircraft in a form useful for airport noise contour computa...

  17. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-09-09

    Ceramic and ceramic-faced aluminum targets with .30cal AP M2 projectile were modeled using SPH elements. Model validation runs were conducted based on the DoP experiments described in reference... The effect of material properties on DoP was studied. SUBJECT TERMS: .30cal AP M2 Projectile, 7.62x39 PS Projectile, SPH, Aluminum 5083, SiC, DoP Experiments

  18. Query Health: standards-based, cross-platform population health surveillance

    PubMed Central

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussions This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371

  19. Query Health: standards-based, cross-platform population health surveillance.

    PubMed

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. Published by the BMJ Publishing Group Limited. 

  20. A reduced-order nonlinear sliding mode observer for vehicle slip angle and tyre forces

    NASA Astrophysics Data System (ADS)

    Chen, Yuhang; Ji, Yunfeng; Guo, Konghui

    2014-12-01

In this paper, a reduced-order sliding mode observer (RO-SMO) is developed for vehicle state estimation. Several improvements are achieved. First, the reference model accuracy is improved by considering vehicle load transfers and using the precise nonlinear tyre model 'UniTire'. Second, the computing burden of the state observer is reduced by a reduced-order approach without degrading the accuracy of the reference model. Third, nonlinear system damping is integrated into the SMO to speed convergence and reduce chattering. The proposed RO-SMO is evaluated through simulation and experiments based on an in-wheel motor electric vehicle. The results show that the proposed observer accurately predicts the vehicle states.
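The observer mechanics summarised above can be illustrated on a toy double integrator. The sketch below is a minimal sliding-mode observer in Python; the system, the sign-injection structure and the hand-picked gains (`lam`, `alpha`) are illustrative assumptions, not the paper's UniTire-based vehicle design, and smoothing the `sign` term (e.g. with `tanh`) plays a role analogous to the nonlinear damping the authors add to reduce chattering.

```python
import numpy as np

def smo_velocity_estimate(dt=1e-4, T=5.0, lam=5.0, alpha=50.0):
    """Toy sliding mode observer: estimate velocity x2 from position x1
    of a double integrator x1' = x2, x2' = u(t), with known input u."""
    n = int(T / dt)
    x1, x2 = 0.0, 1.0          # true state (position, velocity)
    x1h, x2h = 0.0, 0.0        # observer state
    t = 0.0
    for _ in range(n):
        u = np.sin(t)          # known input
        # true system (Euler step)
        x1, x2 = x1 + dt * x2, x2 + dt * u
        # observer: sign injection on the position error drives e1 -> 0;
        # once sliding, the equivalent control makes x2h converge to x2
        e1 = x1 - x1h
        x1h += dt * (x2h + lam * np.sign(e1))
        x2h += dt * (u + alpha * np.sign(e1))
        t += dt
    return x2, x2h

x2, x2h = smo_velocity_estimate()
print(abs(x2 - x2h))  # residual dominated by small chattering ripple
```

Replacing `np.sign(e1)` with `np.tanh(e1 / eps)` for a small `eps` is one standard way to trade a little accuracy for much less chattering.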

  1. Improvement of radiology services based on the process management approach.

    PubMed

    Amaral, Creusa Sayuri Tahara; Rozenfeld, Henrique; Costa, Janaina Mascarenhas Hornos; Magon, Maria de Fátima de Andrade; Mascarenhas, Yvone Maria

    2011-06-01

    The health sector requires continuous investments to ensure the improvement of products and services from a technological standpoint, the use of new materials, equipment and tools, and the application of process management methods. Methods associated with the process management approach, such as the development of reference models of business processes, can provide significant innovations in the health sector and respond to the current market trend for modern management in this sector (Gunderman et al. (2008) [4]). This article proposes a process model for diagnostic medical X-ray imaging, from which it derives a primary reference model and describes how this information leads to gains in quality and improvements. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Catheter Insertion Reference Trajectory Construction Method Using Photoelastic Stress Analysis for Quantification of Respect for Tissue During Endovascular Surgery Simulation

    NASA Astrophysics Data System (ADS)

    Tercero, Carlos; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Takahashi, Ikuo

    2011-10-01

There is a need to develop quantitative evaluation for simulator-based training in medicine. Photoelastic stress analysis can be used in human tissue modeling materials; this enables the development of simulators that measure respect for tissue. To apply this to endovascular surgery, we first present a model of a saccular aneurysm in which stress variation during micro-coil deployment is measured; then, relying on a bi-planar vision system, we measure a catheter trajectory and compare it to a reference trajectory with regard to respect for tissue. New photoelastic tissue modeling materials will expand the applications of this technology to other medical training domains.

  3. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the candidate point x to the lines connecting corresponding points between In. and Ex. Using non-linear optimization, the optimal point was computed and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model for explaining lung expansion during breathing.
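The unconstrained optimization described above, finding the point that minimises the summed distance to the lines joining corresponding In./Ex. landmarks, can be sketched as follows. The helper names and the Nelder-Mead choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def point_line_distance(x, a, b):
    """Distance from point x to the infinite line through a and b (3-D)."""
    d = b - a
    return np.linalg.norm(np.cross(x - a, d)) / np.linalg.norm(d)

def optimal_expansion_point(pairs):
    """Find the point minimizing the summed distance to all In./Ex. lines.
    pairs: list of (p_in, p_ex) corresponding landmark positions."""
    objective = lambda x: sum(point_line_distance(x, a, b) for a, b in pairs)
    x0 = np.mean([a for a, _ in pairs], axis=0)  # start at landmark centroid
    return minimize(objective, x0, method="Nelder-Mead").x
```

If all motion lines radiate from a common center (the "balloon" hypothesis), the recovered optimum sits at that center and the residual per-line distances measure how well the model fits.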

  4. MODEL-BASED HYDROACOUSTIC BLOCKAGE ASSESSMENT AND DEVELOPMENT OF AN EXPLOSIVE SOURCE DATABASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzel, E; Ramirez, A; Harben, P

    2005-07-11

We are continuing the development of the Hydroacoustic Blockage Assessment Tool (HABAT), which is designed for use by analysts to predict which hydroacoustic monitoring stations can be used in discrimination analysis for any particular event. The research involves two approaches: (1) model-based assessment of blockage, and (2) ground-truth data-based assessment of blockage. The tool presents the analyst with a map of the world and plots raypath blockages from stations to sources. The analyst inputs source locations and blockage criteria, and the tool returns the blockage status from all source locations to all hydroacoustic stations. We are currently using the tool in an assessment of blockage criteria for simple direct-path arrivals. Hydroacoustic data, predominantly from earthquake sources, are read in and assessed for blockage at all available stations. Several measures are taken. First, can the event be observed at a station above background noise? Second, can we establish the backazimuth from the station to the source? Third, how large is the decibel drop at one station relative to other stations? These observational results are then compared with model estimates to identify the best set of blockage criteria and used to create a set of blockage maps for each station. The model-based estimates are currently limited by the coarse bathymetry of existing databases and by the limitations inherent in the raytrace method. In collaboration with BBN Inc., the Hydroacoustic Coverage Assessment Model (HydroCAM), which generates the blockage files that serve as input to HABAT, is being extended to include high-resolution bathymetry databases in key areas that increase the reliability of model-based blockage assessment. An important aspect of this capability is to eventually include reflected T-phases where they reliably occur and to identify the associated reflectors.
To assess how well any given hydroacoustic discriminant works in separating earthquake and in-water explosion populations it is necessary to have both a database of reference earthquake events and of reference in-water explosive events. Although reference earthquake events are readily available, explosive reference events are not. Consequently, building an in-water explosion reference database requires the compilation of events from many sources spanning a long period of time. We have developed a database of small implosive and explosive reference events from the 2003 Indian Ocean Cruise data. These events were recorded at some or all of the IMS Indian Ocean hydroacoustic stations: Diego Garcia, Cape Leeuwin, and Crozet Island. We have also reviewed many historical large in-water explosions and identified five that have adequate source information and can be positively associated to the hydrophone recordings. The five events are: Cannikin, Longshot, CHASE-3, CHASE-5, and IITRI-1. Of these, the first two are nuclear tests on land but near water. The latter three are in-water conventional explosive events with yields from ten to hundreds of tons TNT equivalent. The objective of this research is to enhance discrimination capabilities for events located in the world's oceans. Two research and development efforts are needed to achieve this: (1) improvement in discrimination algorithms and their joint statistical application to events, and (2) development of an automated and accurate blockage prediction capability that will identify all stations and phases (direct and reflected) from a given event that will have adequate signal to be used in a discrimination analysis. The strategy for improving blockage prediction in the world's oceans is to improve model-based prediction of blockage and to develop a ground-truth database of reference events to assess blockage. Currently, research is focused on the development of a blockage assessment software tool.
The tool is envisioned to develop into a sophisticated and unifying package that optimally and automatically assesses both model- and data-based blockage predictions in all ocean basins, for all NDC stations, accounting for reflected phases (Pulli et al., 2000). Currently, we have focused our efforts on the Diego Garcia, Cape Leeuwin and Crozet Island hydroacoustic stations in the Indian Ocean.

  5. Updated Reference Model for Heat Generation in the Lithosphere

    NASA Astrophysics Data System (ADS)

    Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.

    2017-12-01

Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Because of these advances in our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1° lon x 1° lat x 9 layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat-producing element (HPE: U, Th, and K) abundances are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), the composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between voxels are discussed. These models are used further to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed.
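Once HPE abundances are ascribed to each voxel, per-voxel heat production follows from textbook heat-production constants. The sketch below is a minimal illustration using published per-element constants (Turcotte & Schubert) and roughly upper-continental-crust abundances in the spirit of Rudnick & Gao; the numbers are illustrative, not the reference model's values.

```python
# Radiogenic heat production of a voxel from its HPE abundances.
# Heat-production constants per kg of element (Turcotte & Schubert):
H_U, H_TH, H_K = 9.81e-5, 2.64e-5, 3.48e-9  # W/kg of U, Th, natural K

def voxel_heat_production(mass_kg, c_u, c_th, c_k):
    """Heat output (W) of a rock volume of mass `mass_kg` with element
    mass fractions c_u, c_th (e.g. ppm * 1e-6) and c_k (weight fraction)."""
    return mass_kg * (c_u * H_U + c_th * H_TH + c_k * H_K)

# Example: 1 kg of upper continental crust with illustrative abundances
# of about 2.7 ppm U, 10.5 ppm Th and 2.3 wt% K
h = voxel_heat_production(1.0, 2.7e-6, 10.5e-6, 0.023)
```

Summing this quantity over all crust and lithospheric-mantle voxels, with abundance uncertainties propagated per voxel, yields the bulk lithospheric heat production the abstract compares against heat-flux and gravity constraints.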

  6. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations, except when a local calibration is used to improve their performance. This can be a crucial drawback for those equations in cases of local data scarcity for the calibration procedure. So, application of heuristic methods can be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, applying a wavelet transform in combination with heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology was proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
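The wavelet pre-processing step can be sketched with a one-level Haar transform, a stand-in chosen here because the abstract does not specify the mother wavelet; each resulting sub-series would then be fed to its own random forest and the predictions recombined.

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: split a series into approximation
    (low-frequency) and detail (high-frequency) coefficients."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                      # pad odd-length series
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of the one-level transform above (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x
```

Decomposing the noisy wind speed series this way isolates its high-variation component in `detail`, which is the motivation the abstract gives for coupling a wavelet transform with the random forest.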

  7. A modified Galam’s model for word-of-mouth information exchange

    NASA Astrophysics Data System (ADS)

    Ellero, Andrea; Fasano, Giovanni; Sorato, Annamaria

    2009-09-01

In this paper we analyze the stochastic model proposed by Galam in [S. Galam, Modelling rumors: The no plane Pentagon French hoax case, Physica A 320 (2003), 571-580] for information spreading in a 'word-of-mouth' process among agents, based on a majority rule. Using the communication rules among agents defined in the above reference, we first perform simulations of the 'word-of-mouth' process and compare the results with the theoretical values predicted by Galam's model. Some dissimilarities arise, in particular when a small number of agents is considered. We identify the causes of these dissimilarities and suggest enhancements by introducing a new parameter-dependent model. We propose a modified Galam scheme which is asymptotically coincident with the original model in the above reference. Furthermore, for relatively small values of the parameter, we provide numerical evidence that the modified model often outperforms the original one.
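A minimal majority-rule simulation of the kind analyzed can be sketched as follows. For simplicity this assumes fixed groups of size 3 and a tie-breaking bias parameter; Galam's original scheme draws variable group sizes, so this is a deliberate simplification.

```python
import numpy as np

def galam_step(opinions, group_size=3, tie_bias=1, rng=None):
    """One update: shuffle agents into groups; each group adopts its local
    majority opinion (+1/-1); ties (even group sizes) go to `tie_bias`."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.permutation(len(opinions))
    new = opinions.copy()
    for g in idx.reshape(-1, group_size):
        s = opinions[g].sum()
        new[g] = np.sign(s) if s != 0 else tie_bias
    return new

def simulate(n=99, p_plus=0.8, steps=50, seed=0):
    """Run the word-of-mouth process and return the final fraction of +1."""
    rng = np.random.default_rng(seed)
    op = np.where(rng.random(n) < p_plus, 1, -1)
    for _ in range(steps):
        op = galam_step(op, rng=rng)
    return (op == 1).mean()
```

Running `simulate` for small `n` and comparing the absorption probabilities with the model's theoretical fixed points is exactly the kind of experiment where the abstract reports dissimilarities.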

  8. Model reference tracking control of an aircraft: a robust adaptive approach

    NASA Astrophysics Data System (ADS)

    Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan

    2017-05-01

This work presents the design and the corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.

  9. Is the Sun Setting on Lecture-based Education?

    PubMed Central

    Lowe, Whitney

    2011-01-01

    Lecture-based instructional models have been the mainstay of education for centuries. They excel primarily at delivering information from the one to the many. Educators refer to this model as “the sage on the stage”. Clearly there are educators who relish this role and are strongly opposed to moving away from it. Yet, educational research and new innovative technologies are suggesting that lecture-based classes may no longer be the most effective teaching method for many situations, especially clinical practice. PMID:22211152

  10. Integrated driver modelling considering state transition feature for individual adaptation of driver assistance systems

    NASA Astrophysics Data System (ADS)

    Raksincharoensak, Pongsathorn; Khaisongkram, Wathanyoo; Nagai, Masao; Shimosaka, Masamichi; Mori, Taketoshi; Sato, Tomomasa

    2010-12-01

This paper describes the modelling of naturalistic driving behaviour in real-world traffic scenarios, based on driving data collected via an experimental automobile equipped with a continuous sensing drive recorder. This paper focuses on longitudinal driving situations, which are classified into five categories - car following, braking, free following, decelerating and stopping - referred to as driving states. Here, the model is assumed to be represented by a state flow diagram. Statistical machine learning of the driver-vehicle-environment system model based on the driving database is conducted using a discriminative modelling approach called the boosting sequential labelling method.

  11. Proposed Clinical Decision Rules to Diagnose Acute Rhinosinusitis Among Adults in Primary Care.

    PubMed

    Ebell, Mark H; Hansen, Jens Georg

    2017-07-01

To reduce inappropriate antibiotic prescribing, we sought to develop a clinical decision rule for the diagnosis of acute rhinosinusitis and acute bacterial rhinosinusitis. Multivariate analysis and classification and regression tree (CART) analysis were used to develop clinical decision rules for the diagnosis of acute rhinosinusitis, defined using 3 different reference standards (purulent antral puncture fluid or an abnormal finding on a computed tomographic (CT) scan; for acute bacterial rhinosinusitis, we used a positive bacterial culture of antral fluid). Signs, symptoms, C-reactive protein (CRP), and reference standard tests were prospectively recorded in 175 Danish patients aged 18 to 65 years seeking care for suspected acute rhinosinusitis. For each reference standard, we developed 2 clinical decision rules: a point score based on a logistic regression model and an algorithm based on a CART model. We identified low-, moderate-, and high-risk groups for acute rhinosinusitis or acute bacterial rhinosinusitis for each clinical decision rule. The point scores each had 5 or 6 predictors and an area under the receiver operating characteristic curve (AUROCC) between 0.721 and 0.767. For positive bacterial culture as the reference standard, the low-, moderate-, and high-risk groups had a 16%, 49%, and 73% likelihood of acute bacterial rhinosinusitis, respectively. CART models had an AUROCC ranging from 0.783 to 0.827. For positive bacterial culture as the reference standard, the low-, moderate-, and high-risk groups had a likelihood of acute bacterial rhinosinusitis of 6%, 31%, and 59%, respectively. We have developed a series of clinical decision rules integrating signs, symptoms, and CRP to diagnose acute rhinosinusitis and acute bacterial rhinosinusitis with good accuracy. They now require prospective validation and an assessment of their effect on clinical and process outcomes. © 2017 Annals of Family Medicine, Inc.
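Turning a logistic regression model into a bedside point score is commonly done by dividing each coefficient by the smallest one and rounding to integers; a minimal sketch of that construction follows. The predictor names and coefficients below are hypothetical placeholders, not the fitted values from this study.

```python
def make_point_score(coefficients):
    """Convert logistic-regression coefficients into integer points by
    scaling to the smallest absolute coefficient and rounding."""
    smallest = min(abs(b) for b in coefficients.values())
    return {name: round(b / smallest) for name, b in coefficients.items()}

# Hypothetical predictor coefficients (NOT the study's fitted values)
betas = {"purulent_discharge": 0.9, "CRP>10": 1.4, "pain_on_bending": 0.5,
         "symptoms>10d": 0.6, "elevated_temp": 0.5}
points = make_point_score(betas)

# A hypothetical patient with elevated CRP and purulent discharge
score = sum(points[k] for k in ["CRP>10", "purulent_discharge"])
```

Risk groups (low/moderate/high) are then defined by cut-points on the total score, chosen from the score's distribution against the reference standard.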

  12. PCB-induced changes of a benthic community and expected ecosystem recovery following in situ sorbent amendment

    USGS Publications Warehouse

    Janssen, Elisabeth M.-L.; Thompson, Janet K.; Luoma, Samuel N.; Luthy, Richard G.

    2011-01-01

    The benthic community was analyzed to evaluate pollution-induced changes for the polychlorinated biphenyl (PCB)-contaminated site at Hunters Point (HP) relative to 30 reference sites in San Francisco Bay, California, USA. An analysis based on functional traits of feeding, reproduction, and position in the sediment shows that HP is depauperate in deposit feeders, subsurface carnivores, and species with no protective barrier. Sediment chemistry analysis shows that PCBs are the major risk drivers at HP (1,570 ppb) and that the reference sites contain very low levels of PCB contamination (9 ppb). Different feeding traits support the existence of direct pathways of exposure, which can be mechanistically linked to PCB bioaccumulation by biodynamic modeling. The model shows that the deposit feeder Neanthes arenaceodentata accumulates approximately 20 times more PCBs in its lipids than the facultative deposit feeder Macoma balthica and up to 130 times more than the filter feeder Mytilus edulis. The comparison of different exposure scenarios suggests that PCB tissue concentrations at HP are two orders of magnitude higher than at the reference sites. At full scale, in situ sorbent amendment with activated carbon may reduce PCB bioaccumulation at HP by up to 85 to 90% under favorable field and treatment conditions. The modeling framework further demonstrates that such expected remedial success corresponds to exposure conditions suggested as the cleanup goal for HP. However, concentrations remain slightly higher than at the reference sites. The present study demonstrates how the remedial success of a sorbent amendment, which lowers the PCB availability, can be compared to reference conditions and traditional cleanup goals, which are commonly based on bulk sediment concentrations.

  13. Computational assessment of model-based wave separation using a database of virtual subjects.

    PubMed

    Hametner, Bernhard; Schneider, Magdalena; Parragh, Stephanie; Wassertheurer, Siegfried

    2017-11-07

The quantification of arterial wave reflection is an important area of interest in arterial pulse wave analysis. It can be achieved by wave separation analysis (WSA) if both the aortic pressure waveform and the aortic flow waveform are known. For better applicability, several mathematical models have been established to estimate aortic flow solely based on pressure waveforms. The aim of this study is to investigate and verify the model-based wave separation of the ARCSolver method on virtual pulse wave measurements. The study is based on an open access virtual database generated via simulations. Seven cardiac and arterial parameters were varied within physiological healthy ranges, leading to a total of 3325 virtual healthy subjects. For assessing the model-based ARCSolver method computationally, this method was used to perform WSA based on the aortic root pressure waveforms of the virtual patients. As a reference, the values of WSA using both the pressure and flow waveforms provided by the virtual database were taken. The investigated parameters showed a good overall agreement between the model-based method and the reference. Mean differences and standard deviations were -0.05 ± 0.02 AU for characteristic impedance, -3.93 ± 1.79 mmHg for forward pressure amplitude, 1.37 ± 1.56 mmHg for backward pressure amplitude and 12.42 ± 4.88% for reflection magnitude. The results indicate that the mathematical blood flow model of the ARCSolver method is a feasible surrogate for a measured flow waveform and provides a reasonable way to assess arterial wave reflection non-invasively in healthy subjects. Copyright © 2017 Elsevier Ltd. All rights reserved.
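The linear wave separation underlying WSA has a compact closed form: with characteristic impedance Zc, the forward and backward pressure components are Pf = (P + Zc*Q)/2 and Pb = (P - Zc*Q)/2. The snippet below is a generic illustration of that identity, not the ARCSolver implementation, and uses a common peak-to-peak amplitude ratio as the reflection magnitude.

```python
import numpy as np

def wave_separation(p, q, zc):
    """Classical linear wave separation: decompose measured aortic
    pressure p(t) into forward and backward travelling components using
    flow q(t) and characteristic impedance zc."""
    p_forward = (p + zc * q) / 2.0
    p_backward = (p - zc * q) / 2.0
    # reflection magnitude as the ratio of peak-to-peak amplitudes
    rm = (p_backward.max() - p_backward.min()) / \
         (p_forward.max() - p_forward.min())
    return p_forward, p_backward, rm
```

A model-based method like the one assessed in the study replaces the measured `q` with a synthesized flow waveform and then applies the same decomposition.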

  14. Comparison of methods for the prediction of human clearance from hepatocyte intrinsic clearance for a set of reference compounds and an external evaluation set.

    PubMed

    Yamagata, Tetsuo; Zanelli, Ugo; Gallemann, Dieter; Perrin, Dominique; Dolgos, Hugues; Petersson, Carl

    2017-09-01

1. We compared the direct scaling, regression model equation and the so-called "Poulin et al." methods to scale clearance (CL) from in vitro intrinsic clearance (CLint) measured in human hepatocytes using two sets of compounds. One reference set comprised 20 compounds with known elimination pathways, and one external evaluation set was based on 17 compounds in development at Merck (MS). 2. A 90% prospective confidence interval was calculated using the reference set. This interval was found relevant for the regression equation method. The three outliers identified were justified on the basis of their elimination mechanism. 3. The direct scaling method showed a systematic underestimation of clearance in both the reference and evaluation sets. The "Poulin et al." and the regression equation methods showed no obvious bias in either the reference or evaluation sets. 4. The regression model equation was slightly superior to the "Poulin et al." method in the reference set, showing a better absolute average fold error (AAFE) of 1.3 compared to 1.6. A larger difference was observed in the evaluation set, where the regression method and "Poulin et al." resulted in an AAFE of 1.7 and 2.6, respectively (removing the three compounds with known issues mentioned above). A similar pattern was observed for the correlation coefficient. Based on these data we suggest the regression equation method combined with a prospective confidence interval as the first choice for the extrapolation of human in vivo hepatic metabolic clearance from in vitro systems.
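The AAFE statistic used to compare the methods has a compact conventional definition, 10 raised to the mean absolute log10 fold error, which treats over- and under-prediction symmetrically; a minimal sketch:

```python
import numpy as np

def aafe(predicted, observed):
    """Absolute average fold error: 10 ** mean(|log10(pred/obs)|).
    AAFE = 1 means perfect prediction; 2 means a 2-fold average error."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))
```

Because the error is taken in log space, a compound over-predicted 2-fold and one under-predicted 2-fold both contribute the same fold error rather than cancelling.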

  15. Development of a paediatric population-based model of the pharmacokinetics of rivaroxaban.

    PubMed

    Willmann, Stefan; Becker, Corina; Burghaus, Rolf; Coboeken, Katrin; Edginton, Andrea; Lippert, Jörg; Siegmund, Hans-Ulrich; Thelen, Kirstin; Mück, Wolfgang

    2014-01-01

Venous thromboembolism has been increasingly recognised as a clinical problem in the paediatric population. Guideline recommendations for antithrombotic therapy in paediatric patients are based mainly on extrapolation from adult clinical trial data, owing to the limited number of clinical trials in paediatric populations. The oral, direct Factor Xa inhibitor rivaroxaban has been approved in adult patients for several thromboembolic disorders, and its well-defined pharmacokinetic and pharmacodynamic characteristics and efficacy and safety profiles in adults warrant further investigation of this agent in the paediatric population. The objective of this study was to develop and qualify a physiologically based pharmacokinetic (PBPK) model for rivaroxaban doses of 10 and 20 mg in adults and to scale this model to the paediatric population (0-18 years) to inform the dosing regimen for a clinical study of rivaroxaban in paediatric patients. Experimental data sets from phase I studies supported the development and qualification of an adult PBPK model. This adult PBPK model was then scaled to the paediatric population by including anthropometric and physiological information, age-dependent clearance and age-dependent protein binding. The pharmacokinetic properties of rivaroxaban in virtual populations of children were simulated for two body weight-related dosing regimens equivalent to 10 and 20 mg once daily in adults. The quality of the model was judged by means of a visual predictive check. Subsequently, paediatric simulations of the area under the plasma concentration-time curve (AUC), maximum (peak) plasma drug concentration (Cmax) and concentration in plasma after 24 h (C24h) were compared with the adult reference simulations. Simulations for AUC, Cmax and C24h throughout the investigated age range largely overlapped with values obtained for the corresponding dose in the adult reference simulation for both body weight-related dosing regimens. 
However, pharmacokinetic values in infants and preschool children (body weight <40 kg) were lower than the 90% confidence interval threshold of the adult reference model and, therefore, indicated that doses in these groups may need to be increased to achieve the same plasma levels as in adults. For children with body weight between 40 and 70 kg, simulated plasma pharmacokinetic parameters (Cmax, C24h and AUC) overlapped with the values obtained in the corresponding adult reference simulation, indicating that body weight-related exposure was similar between these children and adults. In adolescents of >70 kg body weight, the simulated 90% prediction interval values of AUC and C24h were much higher than the 90% confidence interval of the adult reference population, owing to the weight-based simulation approach, but for these patients rivaroxaban would be administered at adult fixed doses of 10 and 20 mg. The paediatric PBPK model developed here allowed an exploratory analysis of the pharmacokinetics of rivaroxaban in children to inform the dosing regimen for a clinical study in paediatric patients.
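The age-dependent clearance scaling mentioned above can be sketched with the standard allometric-plus-maturation construction used in paediatric pharmacokinetics; the exponent 0.75 is conventional, and `tm50`/`hill` below are illustrative placeholders, not parameters of the rivaroxaban PBPK model.

```python
def paediatric_clearance(cl_adult, weight_kg, pma_weeks,
                         tm50=55.0, hill=3.0):
    """Scale an adult clearance to a child using allometric weight
    scaling (exponent 0.75, 70 kg reference) and a sigmoidal maturation
    function of post-menstrual age (Hill model). tm50 and hill are
    illustrative values, not fitted rivaroxaban parameters."""
    size = (weight_kg / 70.0) ** 0.75
    maturation = pma_weeks ** hill / (tm50 ** hill + pma_weeks ** hill)
    return cl_adult * size * maturation

# e.g. an adult clearance of 10 L/h scaled to a 20-kg child (~5-6 years)
cl = paediatric_clearance(10.0, 20.0, 300.0)
```

For a mature child the maturation factor approaches 1 and only the allometric size term remains, which is why weight-based dosing can still under- or over-shoot adult exposure at the extremes of the weight range, as the simulations above found.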

  16. Effects of Uncertainties in Electric Field Boundary Conditions for Ring Current Simulations

    NASA Astrophysics Data System (ADS)

    Chen, Margaret W.; O'Brien, T. Paul; Lemon, Colby L.; Guild, Timothy B.

    2018-01-01

Physics-based simulation results can vary widely depending on the applied boundary conditions. As a first step toward assessing the effect of boundary conditions on ring current simulations, we analyze the effect of uncertainty in the cross-polar cap potential (CPCP) electric field boundary conditions applied to the Rice Convection Model-Equilibrium (RCM-E). The empirical Weimer model of CPCP is chosen as the reference model and Defense Meteorological Satellite Program CPCP measurements as the reference data. Using temporal correlations from a statistical analysis of the "errors" between the reference model and data, we construct a Monte Carlo CPCP discrete time series model that can be generalized to other model boundary conditions. RCM-E simulations using electric field boundary conditions from the reference model and from 20 randomly generated Monte Carlo discrete time series of CPCP are performed for two large storms. During the 10 August 2000 storm main phase, the proton density at 10 RE at midnight was observed to be low (< 1.4 cm-3) and the observed disturbance Dst index is bounded by the simulated Dst values. In contrast, the simulated Dst values during the recovery phases of the 10 August 2000 and 31 August 2005 storms tend to systematically underestimate the observed late Dst recovery. This suggests a need to improve the accuracy of particle loss calculations in the RCM-E model. Application of this technique can help modelers make efficient choices between investing more effort in improving the specification of boundary conditions and improving descriptions of physical processes.
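The Monte Carlo construction described, randomized boundary-condition time series whose "errors" carry the temporal correlation found in the model-data comparison, can be sketched with an AR(1) error model; `sigma` and `phi` here are illustrative choices, not the statistics fitted in the study.

```python
import numpy as np

def monte_carlo_cpcp(reference, sigma, phi, n_series=20, seed=0):
    """Generate randomized CPCP time series around a reference model by
    adding temporally correlated AR(1) 'errors':
        e[t] = phi * e[t-1] + sqrt(1 - phi**2) * sigma * w[t],  w ~ N(0,1)
    so each error series is stationary with standard deviation `sigma`
    and lag-1 autocorrelation `phi`."""
    rng = np.random.default_rng(seed)
    n = len(reference)
    out = np.empty((n_series, n))
    for i in range(n_series):
        e = np.empty(n)
        e[0] = sigma * rng.standard_normal()
        for t in range(1, n):
            e[t] = phi * e[t - 1] + \
                   np.sqrt(1 - phi ** 2) * sigma * rng.standard_normal()
        out[i] = reference + e
    return out

# 20 perturbed series around a flat 60 kV reference potential
series = monte_carlo_cpcp(np.full(500, 60.0), sigma=10.0, phi=0.9)
```

Each of the 20 perturbed series would then drive its own RCM-E run, and the spread of the simulated Dst responses quantifies the sensitivity to boundary-condition uncertainty.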

  17. The fourth radiation transfer model intercomparison (RAMI-IV): Proficiency testing of canopy reflectance models with ISO-13528

    NASA Astrophysics Data System (ADS)

    Widlowski, J.-L.; Pinty, B.; Lopatka, M.; Atzberger, C.; Buzica, D.; Chelle, M.; Disney, M.; Gastellu-Etchegorry, J.-P.; Gerboles, M.; Gobron, N.; Grau, E.; Huang, H.; Kallel, A.; Kobayashi, H.; Lewis, P. E.; Qin, W.; Schlerf, M.; Stuckens, J.; Xie, D.

    2013-07-01

    The radiation transfer model intercomparison (RAMI) activity aims at assessing the reliability of physics-based radiative transfer (RT) models under controlled experimental conditions. RAMI focuses on computer simulation models that mimic the interactions of radiation with plant canopies. These models are increasingly used in the development of satellite retrieval algorithms for terrestrial essential climate variables (ECVs). Rather than applying ad hoc performance metrics, RAMI-IV makes use of existing ISO standards to enhance the rigor of its protocols evaluating the quality of RT models. ISO-13528 was developed "to determine the performance of individual laboratories for specific tests or measurements." More specifically, it aims to guarantee that measurement results fall within specified tolerance criteria from a known reference. Of particular interest to RAMI is that ISO-13528 provides guidelines for comparisons where the true value of the target quantity is unknown. In those cases, "truth" must be replaced by a reliable "conventional reference value" to enable absolute performance tests. This contribution will show, for the first time, how the ISO-13528 standard developed by the chemical and physical measurement communities can be applied to proficiency testing of computer simulation models. Step by step, the pre-screening of data, the identification of reference solutions, and the choice of proficiency statistics will be discussed and illustrated with simulation results from the RAMI-IV "abstract canopy" scenarios. Detailed performance statistics of the participating RT models will be provided and the role of the accuracy of the reference solutions as well as the choice of the tolerance criteria will be highlighted.
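A minimal sketch of the kind of proficiency statistic ISO-13528 prescribes is the z-score against a conventional reference value, with the customary |z| <= 2 / < 3 / >= 3 interpretation. The model names, reflectance values, and tolerance sigma below are hypothetical placeholders, not RAMI-IV results.

```python
def z_scores(results, reference_value, sigma_pt):
    """ISO-13528-style proficiency statistic z = (x - X) / sigma_pt, where X
    is the conventional reference value and sigma_pt is the standard
    deviation for proficiency assessment (the tolerance criterion)."""
    return {name: (x - reference_value) / sigma_pt for name, x in results.items()}

def verdict(z):
    """Customary ISO-13528 reading of a z-score."""
    if abs(z) <= 2.0:
        return "satisfactory"
    return "questionable" if abs(z) < 3.0 else "unsatisfactory"

# Hypothetical canopy-reflectance submissions against a conventional
# reference value of 0.050 with sigma_pt = 0.003 (invented numbers).
results = {"modelA": 0.052, "modelB": 0.047, "modelC": 0.061}
zs = z_scores(results, reference_value=0.050, sigma_pt=0.003)
```

The whole weight of the standard then falls on choosing X (the "conventional reference value") and sigma_pt, which is exactly the point the abstract makes about reference-solution accuracy and tolerance criteria.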

  18. Relative Deprivation and the Gender Wage Gap.

    ERIC Educational Resources Information Center

    Jackson, Linda A.

    1989-01-01

    Discusses how gender differences in the value of pay, based on relative deprivation theory, explain women's paradoxical contentment with lower wages. Presents a model of pay satisfaction to integrate value-based and comparative-referent explanations of the relationship between gender and pay satisfaction. Discusses economic approaches to the…

  19. Gear Fatigue Crack Diagnosis by Vibration Analysis Using Embedded Modeling

    DTIC Science & Technology

    2001-04-05

    gave references on Wigner-Ville Distribution (WVD) and some statistics-based methods including FM4, NA4 and NB4. There are limitations for vibration...Embedded Modeling DISTRIBUTION: Approved for public release, distribution unlimited. This paper is part of the following report: TITLE: New Frontiers in

  20. Measuring News Media Literacy

    ERIC Educational Resources Information Center

    Maksl, Adam; Ashley, Seth; Craft, Stephanie

    2015-01-01

    News media literacy refers to the knowledge and motivations needed to identify and engage with journalism. This study measured levels of news media literacy among 500 teenagers using a new scale measure based on Potter's model of media literacy and adapted to news media specifically. The adapted model posits that news media literate individuals…

  1. Global Reference Atmosphere Model (GRAM)

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.; Blocker, Rhonda; Justus, C. G.

    1993-01-01

    4D model provides atmospheric parameter values either automatically at positions along linear path or along any set of connected positions specified by user. Based on actual data, GRAM provides thermal wind shear for monthly mean winds, percent deviation from standard atmosphere, mean vertical wind, and perturbation data for each position.

  2. Modeling Educational Content: The Cognitive Approach of the PALO Language

    ERIC Educational Resources Information Center

    Rodriguez-Artacho, Miguel; Verdejo Maillo, M. Felisa

    2004-01-01

    This paper presents a reference framework to describe educational material. It introduces the PALO Language as a cognitive based approach to Educational Modeling Languages (EML). In accordance with recent trends for reusability and interoperability in Learning Technologies, EML constitutes an evolution of the current content-centered…

  3. Student Assistance Programs: New Approaches for Reducing Adolescent Substance Abuse.

    ERIC Educational Resources Information Center

    Moore, David D.; Forster, Jerald R.

    1993-01-01

    Describes school-based Student Assistance Programs (SAPs), which are designed to reduce adolescents' substance abuse. Notes that SAPs, modeled after Employee Assistance Programs in workplace, are identifying, assessing, referring, and managing cases of substance-abusing students. Sees adoption of SAP model as accelerating in response to growing…

  4. No-reference image quality assessment based on natural scene statistics and gradient magnitude similarity

    NASA Astrophysics Data System (ADS)

    Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang

    2014-11-01

    The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in line with human opinion, in which feature extraction is an important issue. However, the features used in the state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, but rarely both; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics of single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping from features to quality scores is learned using support vector regression. The experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
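The two feature families named above can be illustrated compactly: a moment-matching estimate of the generalized Gaussian shape parameter (the Sharifi-Leon-Garcia ratio method, a common choice, though the abstract does not specify which estimator the authors use) and a standard gradient-magnitude similarity term. The constant c and the search-grid bounds are illustrative assumptions.

```python
import math
import random

def ggd_shape(samples):
    """Moment-matching estimate (Sharifi & Leon-Garcia style) of the shape
    parameter beta of a zero-mean generalized Gaussian: solve
    Gamma(1/b)*Gamma(3/b)/Gamma(2/b)**2 = E[x^2] / E[|x|]^2 by grid search."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    target = m2 / (m1 * m1)
    best_b, best_err = 2.0, float("inf")
    b = 0.2
    while b <= 10.0:
        r = math.gamma(1.0 / b) * math.gamma(3.0 / b) / math.gamma(2.0 / b) ** 2
        if abs(r - target) < best_err:
            best_b, best_err = b, abs(r - target)
        b += 0.001
    return best_b

def gradient_magnitude_similarity(g1, g2, c=0.0026):
    """Similarity of neighbouring gradient magnitudes:
    s = (2*g1*g2 + c) / (g1**2 + g2**2 + c)."""
    return (2.0 * g1 * g2 + c) / (g1 * g1 + g2 * g2 + c)

# Gaussian-distributed coefficients should give beta close to 2.
rng = random.Random(1)
beta = ggd_shape([rng.gauss(0.0, 1.0) for _ in range(20000)])
```

In an NR-IQA pipeline, beta-like parameters and pooled similarity statistics would form the feature vector fed to the support vector regression.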

  5. Science and Technology Investment Strategy for Squadron Level Training

    DTIC Science & Technology

    1993-05-01

    be derived from empirically sound and theory-based instructional models. Comment. The automation of instructional design could favorably impact the...require a significant amount of time to develop and where the underlying theory and/or applications hardware and software is in flux. Long-term efforts...training or training courses. It does not refer to the initial evaluation of individuals entering Upgrade Training (UGT). It does refer to the evaluation of

  6. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. How well can we quantify dust deposition to the ocean?

    PubMed

    Anderson, R F; Cheng, H; Edwards, R L; Fleisher, M Q; Hayes, C T; Huang, K-F; Kadko, D; Lam, P J; Landing, W M; Lao, Y; Lu, Y; Measures, C I; Moran, S B; Morton, P L; Ohnemus, D C; Robinson, L F; Shelley, R U

    2016-11-28

    Deposition of continental mineral aerosols (dust) in the Eastern Tropical North Atlantic Ocean, between the coast of Africa and the Mid-Atlantic Ridge, was estimated using several strategies based on the measurement of aerosols, trace metals dissolved in seawater, particulate material filtered from the water column, particles collected by sediment traps and sediments. Most of the data used in this synthesis involve samples collected during US GEOTRACES expeditions in 2010 and 2011, although some results from the literature are also used. Dust deposition generated by a global model serves as a reference against which the results from each observational strategy are compared. Observation-based dust fluxes disagree with one another by as much as two orders of magnitude, although most of the methods produce results that are consistent with the reference model to within a factor of 5. The large range of estimates indicates that further work is needed to reduce uncertainties associated with each method before it can be applied routinely to map dust deposition to the ocean. Calculated dust deposition using observational strategies thought to have the smallest uncertainties is lower than the reference model by a factor of 2-5, suggesting that the model may overestimate dust deposition in our study area. This article is part of the themed issue 'Biological and climatic impacts of ocean trace element chemistry'. © 2016 The Author(s).

  8. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; and these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT-based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled; however, a steady-state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.
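The model-reference idea can be sketched for a scalar plant with a Lyapunov-rule direct MRAC law; this is a textbook illustration, not the CGT-based algorithms of the abstract. The plant, reference model, and gain values are invented for the example, and a mid-run change in the plant pole stands in for a sudden load change.

```python
def simulate_mrac(gamma=5.0, dt=0.001, t_end=20.0):
    """Direct MRAC for a scalar plant x' = a*x + b*u tracking a reference
    model xm' = am*xm + bm*r, with control u = kx*x + kr*r and Lyapunov-rule
    adaptation kx' = -gamma*e*x, kr' = -gamma*e*r (valid for b > 0), where
    e = x - xm.  The plant pole jumps at t = 10 s to mimic a load change."""
    a, b = 1.0, 1.0           # "unknown" plant (unstable open loop)
    am, bm = -2.0, 2.0        # stable reference model
    x = xm = kx = kr = 0.0
    r = 1.0                   # constant command
    t = 0.0
    while t < t_end:
        if t >= 10.0:
            a = 2.0           # sudden parameter change standing in for a load change
        e = x - xm
        u = kx * x + kr * r
        # forward-Euler integration of plant, reference model, and adaptation laws
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
        kx += dt * (-gamma * e * x)
        kr += dt * (-gamma * e * r)
        t += dt
    return x, xm, kx, kr

x, xm, kx, kr = simulate_mrac()
```

Despite the parameter jump at t = 10 s, the tracking error e = x - xm decays back toward zero, which is the behavior the abstract reports for load changes.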

  9. How well can we quantify dust deposition to the ocean?

    PubMed Central

    Cheng, H.; Edwards, R. L.; Fleisher, M. Q.; Hayes, C. T.; Huang, K.-F.; Kadko, D.; Lam, P. J.; Landing, W. M.; Lao, Y.; Lu, Y.; Measures, C. I.; Moran, S. B.; Morton, P. L.; Ohnemus, D. C.; Robinson, L. F.; Shelley, R. U.

    2016-01-01

    Deposition of continental mineral aerosols (dust) in the Eastern Tropical North Atlantic Ocean, between the coast of Africa and the Mid-Atlantic Ridge, was estimated using several strategies based on the measurement of aerosols, trace metals dissolved in seawater, particulate material filtered from the water column, particles collected by sediment traps and sediments. Most of the data used in this synthesis involve samples collected during US GEOTRACES expeditions in 2010 and 2011, although some results from the literature are also used. Dust deposition generated by a global model serves as a reference against which the results from each observational strategy are compared. Observation-based dust fluxes disagree with one another by as much as two orders of magnitude, although most of the methods produce results that are consistent with the reference model to within a factor of 5. The large range of estimates indicates that further work is needed to reduce uncertainties associated with each method before it can be applied routinely to map dust deposition to the ocean. Calculated dust deposition using observational strategies thought to have the smallest uncertainties is lower than the reference model by a factor of 2–5, suggesting that the model may overestimate dust deposition in our study area. This article is part of the themed issue ‘Biological and climatic impacts of ocean trace element chemistry’. PMID:29035251

  10. Food Web Bioaccumulation Model for Resident Killer Whales from the Northeastern Pacific Ocean as a Tool for the Derivation of PBDE-Sediment Quality Guidelines.

    PubMed

    Alava, Juan José; Ross, Peter S; Gobas, Frank A P C

    2016-01-01

    Resident killer whale populations in the NE Pacific Ocean are at risk due to the accumulation of pollutants, including polybrominated diphenyl ethers (PBDEs). To assess the impact of PBDEs in water and sediments in killer whale critical habitat, we developed a food web bioaccumulation model. The model was designed to estimate PBDE concentrations in killer whales based on PBDE concentrations in sediments and the water column throughout a lifetime of exposure. Calculated and observed PBDE concentrations exceeded the only toxicity reference value available for PBDEs in marine mammals (1500 μg/kg lipid) in southern resident killer whales but not in northern resident killer whales. Temporal trends (1993-2006) for PBDEs observed in southern resident killer whales showed a doubling time of ≈5 years. If current sediment quality guidelines available in Canada for polychlorinated biphenyls are applied to PBDEs, it can be expected that PBDE concentrations in killer whales will exceed available toxicity reference values by a large margin. Model calculations suggest that a PBDE concentration in sediments of approximately 1.0 μg/kg dw produces PBDE concentrations in resident killer whales that are below the current toxicity reference value for 95 % of the population, with this value serving as a precautionary benchmark for a management-based approach to reducing PBDE health risks to killer whales. The food web bioaccumulation model may be a useful risk management tool in support of regulatory protection for killer whales.

  11. A nonparametric statistical technique for combining global precipitation datasets: development and hydrological evaluation over the Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Abul Ehsan Bhuiyan, Md; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Quintana-Seguí, Pere; Barella-Ortiz, Anaïs

    2018-02-01

    This study investigates the use of a nonparametric, tree-based model, quantile regression forests (QRF), for combining multiple global precipitation datasets and characterizing the uncertainty of the combined product. We used the Iberian Peninsula as the study area, with a study period spanning 11 years (2000-2010). Inputs to the QRF model included three satellite precipitation products, CMORPH, PERSIANN, and 3B42 (V7); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. We calibrated the QRF model for two seasons and two terrain elevation categories and used it to generate ensembles for these conditions. Evaluation of the combined product was based on a high-resolution, ground-reference precipitation dataset (SAFRAN) available at 5 km / 1 h resolution. Furthermore, to evaluate relative improvements and the overall impact of the combined product in hydrological response, we used the generated ensemble to force a distributed hydrological model (the SURFEX land surface model and the RAPID river routing scheme) and compared its streamflow simulation results with the corresponding simulations from the individual global precipitation and reference datasets. We concluded that the proposed technique could generate realizations that successfully encapsulate the reference precipitation and provide significant improvement in streamflow simulations, with reduction in systematic and random error on the order of 20-99 % and 44-88 %, respectively, when considering the ensemble mean.
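The core of quantile regression forests is that leaves keep the raw target values, so any conditional quantile can be read off the pooled leaf samples. The sketch below is a deliberately minimal pure-Python, single-predictor version with random split thresholds (the study used a full multi-input QRF); the toy rainfall relationship is invented.

```python
import random

def build_tree(xs, ys, rng, min_leaf=10):
    """Grow one randomized regression tree on a single predictor; leaves keep
    the raw target values so conditional quantiles can be read off later."""
    if len(ys) <= min_leaf or max(xs) == min(xs):
        return ("leaf", list(ys))
    thr = rng.uniform(min(xs), max(xs))                    # random split point
    left = [(x, y) for x, y in zip(xs, ys) if x <= thr]
    right = [(x, y) for x, y in zip(xs, ys) if x > thr]
    if not left or not right:
        return ("leaf", list(ys))
    return ("node", thr,
            build_tree([x for x, _ in left], [y for _, y in left], rng, min_leaf),
            build_tree([x for x, _ in right], [y for _, y in right], rng, min_leaf))

def leaf_targets(tree, x):
    """Descend to the leaf containing x and return its stored target values."""
    while tree[0] == "node":
        tree = tree[2] if x <= tree[1] else tree[3]
    return tree[1]

def qrf_predict(forest, x, q):
    """Pool leaf targets over all trees and return the empirical q-quantile."""
    pooled = sorted(y for tree in forest for y in leaf_targets(tree, x))
    return pooled[min(int(q * len(pooled)), len(pooled) - 1)]

# Toy setup: one noisy "precipitation product" x predicting truth y = 2x +/- 0.5
rng = random.Random(42)
xs = [rng.uniform(0.0, 1.0) for _ in range(2000)]
ys = [2.0 * x + rng.uniform(-0.5, 0.5) for x in xs]
forest = [build_tree(xs, ys, random.Random(s)) for s in range(25)]
q10, q50, q90 = (qrf_predict(forest, 0.5, q) for q in (0.10, 0.50, 0.90))
```

The (q10, q90) band is exactly the kind of ensemble spread the study uses to encapsulate the reference precipitation before forcing the hydrological model.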

  12. The Effect of a Multi-Strategy Program on Developing Social Behaviors Based on Pender’s Health Promotion Model to Prevent Loneliness of Old Women Referred to Gonabad Urban Health Centers

    PubMed Central

    Alaviani, Mehri; Khosravan, Shahla; Alami, Ali; Moshki, Mahdi

    2015-01-01

    Background Loneliness is one of the most significant problems during aging. This research was done to determine the effect of a multi-strategy program based on Pender's Health Promotion Model in preventing loneliness of elderly women by improving social relationships. Methods In this quasi-experimental study, conducted from January to November 2013, 150 old women suffering medium loneliness referred to Gonabad urban health centers were enrolled. Data were gathered using Russell's UCLA loneliness questionnaire and questionnaires on loneliness based on Pender's Health Promotion Model. The results were analyzed by descriptive statistics and chi-square, paired-t, and independent-t tests through SPSS, version 20. Results Loneliness decreased significantly in the intervention group compared to the control group (P<0.00). In addition, mean scores for variables of the Health Promotion Model (perceived benefits and barriers, self-efficacy, and interpersonal influences related to loneliness) in both groups were significantly different before and after the study (P<0.05). Conclusion Constructs of Pender's Health Promotion Model can be used as a framework for planning interventions in order to anticipate, improve and modify behaviors related to loneliness in old women. PMID:26005693

  13. Calibrating Historical IR Sensors Using GEO, and AVHRR Infrared Tropical Mean Calibration Models

    NASA Technical Reports Server (NTRS)

    Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra

    2014-01-01

    Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyper-spectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that end, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built based on spatially/temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series. Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K, and thereby establish a climate-quality IR data record.

  14. DNA-COMPACT: DNA COMpression Based on a Pattern-Aware Contextual Modeling Technique

    PubMed Central

    Li, Pinghao; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve compression performance. The proposed framework can handle genome compression with and without reference sequences, and demonstrated performance advantages over the best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with a reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable with existing algorithms. DNA-COMPACT is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes. PMID:24282536
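The contextual-modeling ingredient can be illustrated with a generic adaptive order-k model whose ideal arithmetic-coding cost is -sum log2 p(symbol | context). This is a sketch with simple Laplace smoothing, not the pattern-aware models of DNA-COMPACT itself.

```python
import math
from collections import defaultdict

def context_model_bits(seq, order=2):
    """Ideal arithmetic-coding cost (bits) of an adaptive order-k contextual
    model with Laplace (+1) smoothing over the DNA alphabet:
    cost = -sum(log2 p(symbol | preceding k symbols))."""
    alphabet = "ACGT"
    counts = defaultdict(lambda: {s: 1 for s in alphabet})  # Laplace prior
    bits = 0.0
    for i, sym in enumerate(seq):
        ctx = seq[max(0, i - order):i]
        dist = counts[ctx]
        total = sum(dist.values())
        bits += -math.log2(dist[sym] / total)  # ideal code length for this symbol
        dist[sym] += 1                         # adaptive update after "coding"
    return bits

repetitive = "ACGT" * 500                # perfectly predictable after warm-up
mixed = "ACGTTGCAAGCTATCG" * 125         # period 16, with some ambiguous contexts
bpb_rep = context_model_bits(repetitive) / len(repetitive)
bpb_mixed = context_model_bits(mixed) / len(mixed)
```

Highly repetitive sequences cost far below the 2 bits/base of a uniform code; combining several such complementary models (as the paper does) drives the rate down further.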

  15. Koppen bioclimatic evaluation of CMIP historical climate simulations

    DOE PAGES

    Phillips, Thomas J.; Bonfils, Celine J. W.

    2015-06-05

    Köppen bioclimatic classification relates generic vegetation types to characteristics of the interactive annual cycles of continental temperature (T) and precipitation (P). In addition to predicting possible bioclimatic consequences of past or prospective climate change, a Köppen scheme can be used to pinpoint biases in model simulations of historical T and P. In this study a Köppen evaluation of Coupled Model Intercomparison Project (CMIP) simulations of historical climate is conducted for the period 1980–1999. Evaluation of an example CMIP5 model illustrates how errors in simulating Köppen vegetation types (relative to those derived from observational reference data) can be deconstructed and related to model-specific temperature and precipitation biases. Measures of CMIP model skill in simulating the reference Köppen vegetation types are also developed, allowing the bioclimatic performance of a CMIP5 simulation of T and P to be compared quantitatively with its CMIP3 antecedent. Although certain bioclimatic discrepancies persist across model generations, the CMIP5 models collectively display an improved rendering of historical T and P relative to their CMIP3 counterparts. Additionally, the Köppen-based performance metrics are found to be quite insensitive to alternative choices of observational reference data or to differences in model horizontal resolution.
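For orientation, a heavily simplified version of the Köppen major-class decision (A/B/C/D/E) from monthly T and P might look as follows. The real scheme adds seasonal precipitation rules and subclasses; the thresholds here are the common textbook ones, with the arid test collapsed to an annual-mean form.

```python
def koppen_major_class(monthly_t, monthly_p):
    """Heavily simplified Köppen *major* class from 12 monthly mean
    temperatures (deg C) and monthly precipitation totals (mm).  The arid
    test uses the common 20*T_ann + 280 (mm) threshold, ignoring the
    seasonal-precipitation corrections of the full scheme."""
    t_max, t_min = max(monthly_t), min(monthly_t)
    t_ann = sum(monthly_t) / 12.0
    p_ann = sum(monthly_p)
    if t_max < 10.0:
        return "E"                        # polar: no month reaches 10 C
    if p_ann < 20.0 * t_ann + 280.0:
        return "B"                        # arid
    if t_min >= 18.0:
        return "A"                        # tropical: coldest month >= 18 C
    return "C" if t_min > -3.0 else "D"   # temperate vs continental

example = koppen_major_class([26.0] * 12, [200.0] * 12)  # tropical profile
```

Applying such a classifier to both simulated and observed T/P fields and differencing the resulting class maps is the essence of the evaluation described above.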

  16. The effects of topography on magma chamber deformation models: Application to Mt. Etna and radar interferometry

    NASA Astrophysics Data System (ADS)

    Williams, Charles A.; Wadge, Geoff

    We have used a three-dimensional elastic finite element model to examine the effects of topography on the surface deformation predicted by models of magma chamber deflation. We used the topography of Mt. Etna to control the geometry of our model, and compared the finite element results to those predicted by an analytical solution for a pressurized sphere in an elastic half-space. Topography has a significant effect on the predicted surface deformation for both displacement profiles and synthetic interferograms. Not only are the predicted displacement magnitudes significantly different, but also the map-view patterns of displacement. It is possible to match the predicted displacement magnitudes fairly well by adjusting the elevation of a reference surface; however, the horizontal pattern of deformation is still significantly different. Thus, inversions based on constant-elevation reference surfaces may not properly estimate the horizontal position of a magma chamber. We have investigated an approach where the elevation of the reference surface varies for each computation point, corresponding to topography. For vertical displacements and tilts this method provides a good fit to the finite element results, and thus may form the basis for an inversion scheme. For radial displacements, a constant reference elevation provides a better fit to the numerical results.
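The analytical half-space solution referred to above (a small pressurized sphere, the classic Mogi source) is compact enough to state directly: u_r = (1-nu) dP a^3 r / (mu R^3) and u_z = (1-nu) dP a^3 d / (mu R^3), with R^2 = r^2 + d^2. The chamber depth, radius, pressure drop, and shear modulus below are illustrative, not the Etna values used in the paper.

```python
import math

def mogi_surface_displacement(r, depth, radius, dP, mu=3e10, nu=0.25):
    """Surface displacement of a small pressurized sphere ("Mogi" source) in
    a uniform elastic half-space -- the flat-surface analytical model the
    finite element topographic results are compared against.

    r      : horizontal distance from the source axis (m)
    depth  : source depth below the flat reference surface (m)
    radius : chamber radius (m), assumed much smaller than depth
    dP     : pressure change (Pa); negative for deflation
    Returns (u_radial, u_vertical) in metres.
    """
    C = (1.0 - nu) * dP * radius ** 3 / mu
    R3 = (r * r + depth * depth) ** 1.5
    return C * r / R3, C * depth / R3

# Deflating chamber 4 km below a flat reference surface (illustrative numbers)
ur, uz = mogi_surface_displacement(r=2000.0, depth=4000.0, radius=500.0, dP=-10e6)
```

Because "depth" here is measured from a flat reference surface, the choice of reference elevation directly rescales the predicted displacements, which is exactly the sensitivity the abstract explores for a topographic edifice.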

  17. Study on the calibration and optimization of double theodolites baseline

    NASA Astrophysics Data System (ADS)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    The baseline of a double-theodolite measurement system serves as the benchmark for the scale of the system and affects its accuracy; this paper therefore puts forward a method for calibrating and optimizing the double-theodolite baseline. The double theodolites measure a reference ruler of known length, and the baseline is then derived by inverting the measurement formula. Based on the error propagation law, the analyses show that the baseline error function is an important index of system accuracy, and that the position, posture and other factors of the reference ruler have an impact on the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error; the effect of posture is not uniform; and when the reference ruler is placed at x=500 mm and y=1000 mm in the measurement space, the baseline error is smallest. The experimental results are consistent with the theoretical analyses in the measurement space. The study of reference ruler placement presented here thus provides a reference for improving the accuracy of double-theodolite measurement systems.

  18. Fuzzy model-based servo and model following control for nonlinear systems.

    PubMed

    Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O

    2009-12-01

    This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.

  19. Evaluation of a System-Specific Function To Describe the Pharmacokinetics of Benzylpenicillin in Term Neonates Undergoing Moderate Hypothermia.

    PubMed

    Bijleveld, Yuma A; de Haan, Timo R; van der Lee, Johanna H; Groenendaal, Floris; Dijk, Peter H; van Heijst, Arno; de Jonge, Rogier C J; Dijkman, Koen P; van Straaten, Henrica L M; Rijken, Monique; Zonnenberg, Inge A; Cools, Filip; Zecic, Alexandra; Nuytemans, Debbie H G M; van Kaam, Anton H; Mathôt, Ron A A

    2018-04-01

    The pharmacokinetic (PK) properties of intravenous (i.v.) benzylpenicillin in term neonates undergoing moderate hypothermia after perinatal asphyxia were evaluated, as they were previously unknown. A system-specific modeling approach was applied, in which our recently developed covariate model describing developmental and temperature-induced changes in amoxicillin clearance (CL) in the same patient study population was incorporated into a population PK model of benzylpenicillin with a priori birthweight (BW)-based allometric scaling. Pediatric population covariate models describing the developmental changes in drug elimination may constitute system-specific information and may therefore be incorporated into PK models of drugs cleared through the same pathway. The performance of this system-specific model was compared to that of a reference model. Furthermore, Monte Carlo simulations were performed to evaluate the optimal dose. The system-specific model performed as well as the reference model. Significant correlations were found between CL and postnatal age (PNA), gestational age (GA), body temperature (TEMP), urine output (UO; system-specific model), and multiorgan failure (reference model). For a typical patient with a GA of 40 weeks, BW of 3,000 g, PNA of 2 days (TEMP, 33.5°C), and normal UO (2 ml/kg/h), benzylpenicillin CL was 0.48 liter/h (interindividual variability [IIV] of 49%) and the volume of distribution of the central compartment was 0.62 liter/kg (IIV of 53%) in the system-specific model. Based on simulations, we advise a benzylpenicillin i.v. dose regimen of 75,000 IU/kg/day every 8 h (q8h), 150,000 IU/kg/day q8h, and 200,000 IU/kg/day q6h for patients with GAs of 36 to 37 weeks, 38 to 41 weeks, and ≥42 weeks, respectively. The system-specific model may be used for other drugs cleared through the same pathway, accelerating model development. Copyright © 2018 American Society for Microbiology.
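The a priori birthweight-based allometric scaling mentioned above, combined with a one-compartment steady-state trough calculation, can be sketched as follows. The dose arithmetic is purely illustrative (using the abstract's typical-patient values) and implies no dosing advice beyond the abstract's own numbers.

```python
import math

def allometric_cl(cl_typ, bw_g, bw_ref_g=3000.0, exponent=0.75):
    """A priori birthweight-based allometric scaling of clearance:
    CL_i = CL_typ * (BW_i / BW_ref) ** 0.75 (fixed allometric exponent)."""
    return cl_typ * (bw_g / bw_ref_g) ** exponent

def steady_state_trough(dose, v, cl, tau):
    """Trough concentration for repeated i.v. bolus doses at steady state in a
    one-compartment model: C = (dose/V) * exp(-k*tau) / (1 - exp(-k*tau)),
    with elimination rate constant k = CL / V."""
    k = cl / v
    return (dose / v) * math.exp(-k * tau) / (1.0 - math.exp(-k * tau))

# Typical patient from the abstract: CL 0.48 L/h at BW 3,000 g, V 0.62 L/kg;
# 150,000 IU/kg/day given q8h -> 50,000 IU/kg per dose for a 3 kg neonate.
cl = allometric_cl(0.48, 3000.0)
trough = steady_state_trough(dose=50_000 * 3.0, v=0.62 * 3.0, cl=cl, tau=8.0)  # IU/L
```

Repeating such calculations over simulated covariate distributions (GA, PNA, TEMP, UO) is the essence of the Monte Carlo dose evaluation the abstract describes.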

  20. Evaluation of local electric fields generated by transcranial direct current stimulation with an extracephalic reference electrode based on realistic 3D body modeling

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Park, Ji-Hye; Shim, Miseon; Chang, Won Hyuk; Kim, Yun-Hee

    2012-04-01

    In this study, local electric field distributions generated by transcranial direct current stimulation (tDCS) with an extracephalic reference electrode were evaluated to address extracephalic tDCS safety issues. To this aim, we generated a numerical model of an adult male human upper body and applied the 3D finite element method to electric current conduction analysis. In our simulations, the active electrode was placed over the left primary motor cortex (M1) and the reference electrode was placed at six different locations: over the right temporal lobe, on the right supraorbital region, on the right deltoid, on the left deltoid, under the chin, and on the right buccinator muscle. The maximum current density and electric field intensity values in the brainstem generated by the extracephalic reference electrodes were comparable to, or even less than, those generated by the cephalic reference electrodes. These results suggest that extracephalic reference electrodes do not lead to unwanted modulation of the brainstem cardio-respiratory and autonomic centers, as indicated by recent experimental studies. The volume energy density was concentrated at the neck area by the use of deltoid reference electrodes, but was still smaller than that around the active electrode locations. In addition, the distributions of elicited cortical electric fields demonstrated that the use of extracephalic reference electrodes might allow for the robust prediction of cortical modulations with little dependence on the reference electrode locations.

  1. Katja — the 24th week of virtual pregnancy for dosimetric calculations

    NASA Astrophysics Data System (ADS)

    Becker, Janine; Zankl, Maria; Fill, Ute; Hoeschen, Christoph

    2008-01-01

    Virtual human models, also known as voxel models, are currently the state of the art in radiation protection for computing organ doses without difficult or ethically infeasible experiments. They are based on medical image data of human patients and offer a realistic, three-dimensional representation of human anatomy. We present our newest voxel model, Katja, a virtual woman in the 24th week of pregnancy. Katja integrates two previous voxel models: one obtained from the abdominal MRI scan of a pregnant patient and an already segmented model of a non-pregnant woman. The latter is the ICRP-AF, which fits the reference values for standard height, weight and organ masses given by the International Commission on Radiological Protection (ICRP). The dataset was altered in order to fit the segmented foetus taken from the abdominal MRI scan. The resulting pregnant woman model, Katja, complies with the ICRP reference values for the adult female.

  2. The global reference atmospheric model, mod 2 (with two scale perturbation model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Hargraves, W. R.

    1976-01-01

    The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two scale random perturbation model using perturbation magnitudes which are adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition is described. The two scale perturbation model produces appropriately correlated (horizontally and vertically) small scale and large scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation that, unlike the ordinary geostrophic relation, does not "blow up" at low latitudes; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.

  3. Ligand placement based on prior structures: the guided ligand-replacement method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klei, Herbert E.; Bristol-Myers Squibb, Princeton, NJ 08543-4000; Moriarty, Nigel W., E-mail: nwmoriarty@lbl.gov

    2014-01-01

    A new module, Guided Ligand Replacement (GLR), has been developed in Phenix to increase the ease and success rate of ligand placement when prior protein-ligand complexes are available. The process of iterative structure-based drug design involves the X-ray crystal structure determination of upwards of 100 ligands with the same general scaffold (i.e. chemotype) complexed with very similar, if not identical, protein targets. In conjunction with insights from computational models and assays, this collection of crystal structures is analyzed to improve potency, to achieve better selectivity and to reduce liabilities such as absorption, distribution, metabolism, excretion and toxicology. Current methods for modeling ligands into electron-density maps typically do not utilize information on how similar ligands bound in related structures. Even if the electron density is of sufficient quality and resolution to allow de novo placement, the process can take considerable time as the size, complexity and torsional degrees of freedom of the ligands increase. At the heart of GLR is an algorithm based on graph theory that associates atoms in the target ligand with analogous atoms in the reference ligand. Based on this correspondence, a set of coordinates is generated for the target ligand. GLR is especially useful in two situations: (i) modeling a series of large, flexible, complicated or macrocyclic ligands in successive structures and (ii) modeling ligands as part of a refinement pipeline that can automatically select a reference structure. Even in those cases for which no reference structure is available, if there are multiple copies of the bound ligand per asymmetric unit, GLR offers an efficient way to complete the model after the first ligand has been placed. In all of these applications, GLR leverages prior knowledge from earlier structures to facilitate ligand placement in the current structure.

  4. Video quality assessment based on correlation between spatiotemporal motion energies

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Mou, Xuanqin

    2016-09-01

    Video quality assessment (VQA) has become a hot research topic because of the rapidly increasing demand for video communications. From the earliest PSNR metric to advanced perceptually aware models, researchers have made great progress in this field by introducing properties of the human visual system (HVS) into VQA model design. Among the various algorithms that model how the HVS perceives motion, the spatiotemporal energy model has been validated as highly consistent with psychophysical experiments. In this paper, we incorporate the spatiotemporal energy model into VQA model design through the following steps. 1) Following the original spatiotemporal energy model proposed by Adelson et al., we apply linear filters, oriented in space-time and tuned in spatial frequency, to the reference and test videos respectively. The outputs of quadrature pairs of these filters are then squared and summed to give two measures of motion energy, named the rightward and leftward energy responses. 2) Based on this model, we compute the summation of the rightward and leftward energy responses as spatiotemporal features representing perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is computed from statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparison with existing FR-VQA models. Experimental results show that STME achieves excellent prediction accuracy and ranks among state-of-the-art VQA models.
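    Step 1 above (quadrature pairs of oriented linear filters whose squared, summed outputs give a motion energy) can be illustrated in one dimension. This is a generic sketch of the Adelson-Bergen energy mechanism, not the STME implementation, and the filter parameters are arbitrary.

```python
import numpy as np

def quadrature_energy(signal, freq=0.1, sigma=5.0):
    """Phase-invariant energy from a quadrature (even/odd) filter pair.

    The even (cosine-phase) and odd (sine-phase) Gabor filters form a
    quadrature pair; squaring and summing their responses gives an
    energy measure independent of stimulus phase, the 1-D analogue of
    the spatiotemporal energy mechanism.
    """
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    envelope = np.exp(-t**2 / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * freq * t)
    odd = envelope * np.sin(2 * np.pi * freq * t)
    r_even = np.convolve(signal, even, mode="same")
    r_odd = np.convolve(signal, odd, mode="same")
    return r_even**2 + r_odd**2

# A sinusoid at the filter's preferred frequency yields strong energy.
x = np.sin(2 * np.pi * 0.1 * np.arange(200))
energy = quadrature_energy(x)
```

    In the full 2-D+time scheme, separate space-time-oriented filter pairs yield the rightward and leftward energies that the paper sums.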

  5. Cubical Mass-Spring Model design based on a tensile deformation test and nonlinear material model.

    PubMed

    San-Vicente, Gaizka; Aguinaga, Iker; Tomás Celigüeta, Juan

    2012-02-01

    Mass-Spring Models (MSMs) are used to simulate the mechanical behavior of deformable bodies such as soft tissues in medical applications. Although they are fast to compute, they lack accuracy, and their design remains a great challenge. The major difficulties in building realistic MSMs lie in spring stiffness estimation and topology identification. In this work, the mechanical behavior of MSMs under tensile loads is analyzed before the spring stiffness estimation is studied. In particular, a qualitative and quantitative analysis of the behavior of cubical MSMs shows that they have a nonlinear response similar to that of hyperelastic material models. Based on this behavior, a new method for spring stiffness estimation valid for linear and nonlinear material models is proposed. This method adjusts the stress-strain and compressibility curves to a given reference behavior. The accuracy of MSMs designed with this method is tested against reference soft-tissue simulations based on the nonlinear finite element method (FEM). The obtained results show that MSMs can be designed to realistically model the behavior of hyperelastic materials such as soft tissues and can become an interesting alternative to other approaches such as nonlinear FEM.
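    For readers unfamiliar with MSMs, the basic force computation such a model evaluates at every time step can be sketched as follows. This minimal example uses purely linear (Hookean) springs; the paper's contribution is estimating the stiffnesses so that the assembled cubical model matches a nonlinear reference behavior.

```python
import numpy as np

def spring_forces(positions, springs, stiffness, rest_lengths):
    """Net elastic force on each node of a mass-spring model.

    positions: (n, 3) node coordinates; springs: list of (i, j) index
    pairs; stiffness and rest_lengths: per-spring parameters.
    """
    forces = np.zeros_like(positions)
    for (i, j), k, l0 in zip(springs, stiffness, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = k * (length - l0) * d / length  # pulls i toward j if stretched
        forces[i] += f
        forces[j] -= f
    return forces

# Two nodes joined by one spring, stretched 10% past its rest length:
pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
f = spring_forces(pos, [(0, 1)], [100.0], [1.0])
```

    A cubical MSM simply assembles many such springs along the edges, face diagonals and body diagonals of a voxel lattice.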

  6. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of predicting next-day interest rate variation. In particular, the multiresolution techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction, with a particle swarm optimization technique adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. To show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques with a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast daily interest rate variations, as they provide good forecasting performance.
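    The three evaluation criteria named above are straightforward to compute. The sketch below uses one common definition of mean absolute deviation (deviation of the errors from their mean), since the abstract does not spell out which variant was used.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Return MAE, MAD and RMSE for a forecast series."""
    e = np.asarray(actual, float) - np.asarray(predicted, float)
    mae = np.mean(np.abs(e))
    mad = np.mean(np.abs(e - np.mean(e)))  # deviation-from-mean variant
    rmse = np.sqrt(np.mean(e**2))
    return mae, mad, rmse

# Naive (random-walk) benchmark: tomorrow's rate equals today's.
rates = [2.0, 2.1, 2.05, 2.2, 2.15]
mae, mad, rmse = forecast_errors(rates[1:], rates[:-1])
```

    RMSE is always at least as large as MAE, so a model that narrows the gap between the two has fewer large outlying errors.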

  7. Predicting the response of seven Asian glaciers to future climate scenarios using a simple linear glacier model

    NASA Astrophysics Data System (ADS)

    Ren, Diandong; Karoly, David J.

    2008-03-01

    Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of the reference states of the forcing and of the glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ~4 km). Although some small glaciers are projected to lose nearly one-third of their year-2000 length, the existence of the glaciers studied here is not threatened by 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
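    A first-order linear glacier model of the kind described can be sketched in a few lines. The response time, sensitivity and forcing below are illustrative placeholders, not the calibrated values from the study.

```python
def glacier_length(temp_anomalies, l0=0.0, tau=30.0, c=2.0, dt=1.0):
    """First-order linear glacier length model (Oerlemans-style sketch).

    dL'/dt = -(L' + c*T')/tau, where L' is the length perturbation (km)
    from the reference state, T' the temperature anomaly (K), tau the
    response time (yr) and c the climate sensitivity (km/K).
    """
    lp, history = l0, [l0]
    for t in temp_anomalies:
        lp += dt * (-(lp + c * t) / tau)  # explicit Euler step
        history.append(lp)
    return history

# Sixty years of a steady +1 K anomaly: length relaxes toward -c km.
hist = glacier_length([1.0] * 60)
```

    The inverse step in the paper amounts to choosing tau, c and the 1900 reference state so that modeled lengths match the observed records.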

  8. Second Generation Weather Impacts Decision Aid User’s Manual

    DTIC Science & Technology

    2013-09-01

    from the pulldown Base Reference Time menu. Most models start at times based on Coordinated Universal Time (UTC) or Zulu time (Z) with the selections...

  9. Decision Making: New Paradigm for Education.

    ERIC Educational Resources Information Center

    Wales, Charles E.; And Others

    1986-01-01

    Defines education's new paradigm as schooling based on decision making, the critical thinking skills serving it, and the knowledge base supporting it. Outlines a model decision-making process using a hypothetical breakfast problem; a late riser chooses goals, generates ideas, develops an action plan, and implements and evaluates it. (4 references)…

  10. High pressure common rail injection system modeling and control.

    PubMed

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

    In this paper, the modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the HPCRIS, which contains three sub-models: a high pressure pump sub-model, a common rail sub-model and an injector sub-model, is a relatively complicated nonlinear system. The mathematical model is validated in Matlab and in a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, an Extended State Observer-based intelligent Proportional-Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time-delay-estimation-based iPI controller. Finally, to demonstrate its performance, the proposed ESO-based iPI controller is compared with a conventional PID controller and with ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
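    As background on the controller family, a minimal intelligent-PI loop (without the extended state observer that the paper adds) might look like the sketch below. The ultra-local model, the gains and the toy plant are all illustrative assumptions, not the paper's HPCRIS design.

```python
def simulate_ipi(setpoint=100.0, steps=1000, dt=0.01):
    """Minimal intelligent-PI (iPI) loop on a toy first-order plant.

    Ultra-local model: dy/dt = F + alpha*u, where F lumps all unknown
    dynamics and is re-estimated each step from the measured derivative
    and the last input (the paper replaces this crude estimate with an
    extended state observer).
    """
    alpha, kp, ki = 1.0, 8.0, 2.0
    y, u, integ, y_prev = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        y += dt * (-2.0 * y + u)   # toy plant, unknown to the controller
        dy = (y - y_prev) / dt     # measured derivative
        y_prev = y
        f_hat = dy - alpha * u     # estimate of the lumped term F
        e = setpoint - y
        integ += ki * e * dt
        u = (-f_hat + kp * e + integ) / alpha
    return y

y_final = simulate_ipi()  # settles near the 100.0 setpoint
```

    Because F is re-estimated continuously, the loop needs no explicit plant model, which is the sense in which such controllers are "model-free".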

  11. Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction

    NASA Astrophysics Data System (ADS)

    Sun, He; Kasdin, N. Jeremy

    2018-01-01

    Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model-based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
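    The measurement-update step at the core of any such Kalman-filter estimator can be sketched generically. The state, observation matrix and noise values below are placeholders, not the poster's wavefront-control model.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update of a linear Kalman filter.

    x, P: prior state estimate and covariance; z: new observation;
    H: observation matrix; R: measurement noise covariance. In the
    poster's setting the state would stack stellar-PSF and planet
    intensities; here everything is generic.
    """
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)         # correct prior with innovation
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Scalar example: one noisy direct observation of one intensity value.
x, P = np.array([0.0]), np.array([[1.0]])
z, H, R = np.array([0.8]), np.array([[1.0]]), np.array([[0.5]])
x1, P1 = kalman_update(x, P, z, H, R)
```

    The Gaussian-process variant changes P to encode spatial correlations between neighboring pixels, so one pixel's measurement also tightens its neighbors' estimates.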

  12. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. CLSI-based transference and verification of CALIPER pediatric reference intervals for 29 Ortho VITROS 5600 chemistry assays.

    PubMed

    Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow

    2018-03-01

    Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays for the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes transferred successfully from Abbott to Ortho assays. Calcium and CO2 did not meet the statistical criteria for transference (r² < 0.70). Of the 32 transferred reference intervals, 29 were successfully verified, with approximately 90% of results from reference samples falling within the transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
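    The CLSI-style transference step described above (fit the line of best fit from paired patient results, check the fit, then map the interval endpoints through it) can be sketched as follows. The data are synthetic, and ordinary least squares stands in for whichever regression the study used (CLSI guidance also permits Deming or Passing-Bablok regression).

```python
import numpy as np

def transfer_reference_interval(x_ref, y_new, ri_low, ri_high):
    """Transfer a reference interval from one method to another.

    Fit y = a*x + b from paired patient results on the two platforms,
    require r^2 >= 0.70 (the criterion cited in the abstract), then
    map the interval endpoints through the regression equation.
    """
    a, b = np.polyfit(x_ref, y_new, 1)
    r2 = np.corrcoef(x_ref, y_new)[0, 1] ** 2
    if r2 < 0.70:
        raise ValueError("method comparison too weak for transference")
    return a * ri_low + b, a * ri_high + b

# Synthetic paired results: new method reads ~5% high plus an offset.
rng = np.random.default_rng(0)
x = rng.uniform(2.0, 8.0, 200)
y = 1.05 * x + 0.1 + rng.normal(0.0, 0.05, 200)
low, high = transfer_reference_interval(x, y, 3.0, 7.0)
```

    Verification then checks that roughly 90% of results from healthy reference samples fall within the transferred limits.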

  14. Low-Speed Stability-and-Control and Ground-Effects Measurements on the Industry Reference High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Kemmerly, Guy T.; Campbell, Bryan A.; Banks, Daniel W.; Yaros, Steven F.

    1999-01-01

    As a part of a national effort to develop an economically feasible High Speed Civil Transport (HSCT), a single configuration has been accepted as the testing baseline by the organizations working in the High Speed Research (HSR) program. The configuration is based on a design developed by the Boeing Company and is referred to as the Reference H (Ref H). The data contained in this report are low-speed stability-and-control and ground-effect measurements obtained on a 0.06 scale model of the Ref H in a subsonic tunnel.

  15. Plate and Plume Flux: Constraints for paleomagnetic reference frames and interpretation of deep mantle seismic heterogeneity. (Invited)

    NASA Astrophysics Data System (ADS)

    Bunge, H.; Schuberth, B. S.; Shephard, G. E.; Müller, D.

    2010-12-01

    Plate and plume flow are dominant modes of mantle convection, as pointed out by Geoff Davies early on. Driven, respectively, from a cold upper and a hot lower thermal boundary layer, these modes are now sufficiently well imaged by seismic tomographers to exploit the thermal boundary layer concept as an effective tool for exploring two long-standing geodynamic problems. One relates to the choice of an absolute reference frame in plate tectonic reconstructions. Several absolute reference frames have been proposed over the last decade, including those based on hotspot tracks displaying age progression and assuming either fixity or motion, as well as palaeomagnetically based reference frames, a subduction reference frame and hybrid versions. Each reference frame implies a particular history of the location of subduction zones through time and thus of the evolution of mantle heterogeneity via mixing of subducted slab material in the mantle. Here we compare five alternative absolute plate motion models in terms of their consequences for deep mantle structure. Taking global paleo-plate boundaries and plate velocities back to 140 Ma, derived from the new plate tectonic reconstruction software GPlates, and assimilating them into vigorous 3-D spherical mantle circulation models, we infer geodynamic mantle heterogeneity and compare it to seismic tomography for each absolute rotation model. We also focus on the challenging problem of interpreting deep mantle seismic heterogeneity in terms of thermal and compositional variations. Using published thermodynamically self-consistent mantle mineralogy models for a pyrolite composition, we find that a strong plume flux from the CMB, with a high temperature contrast (on the order of 1000 K) across the lower thermal boundary layer, is entirely sufficient to explain elastic heterogeneity in the deep mantle for a number of quantitative measures. High excess temperatures of +1000-1500 K for plumes in the lowermost mantle are particularly important in understanding the strong seismic velocity reduction mapped by tomography in low-velocity bodies of the deep mantle, as they produce significant negative shear-wave velocity anomalies of up to -4%. We note, however, that our results do not account for the curious observation of seismic anti-correlation, which appears difficult to explain in any case. Our results provide important constraints for the integration of plate tectonics and mantle dynamics and their use in forward and inverse geodynamic mantle models.

  16. Global Reference Atmospheric Model and Trace Constituents

    NASA Technical Reports Server (NTRS)

    Justus, C.; Johnson, D.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    Global Reference Atmospheric Model (GRAM-99) is an engineering-level model of the Earth's atmosphere. It provides both mean values and perturbations for density, temperature, pressure, and winds, as well as monthly- and geographically-varying trace constituent concentrations. From 0-27 km, thermodynamics and winds are based on National Oceanic and Atmospheric Administration Global Upper Air Climatic Atlas (GUACA) climatology. Above 120 km, GRAM is based on the NASA Marshall Engineering Thermosphere (MET) model. In the intervening altitude region, GRAM is based on Middle Atmosphere Program (MAP) climatology that also forms the basis of the 1986 COSPAR International Reference Atmosphere (CIRA). MAP data in GRAM are augmented by a specially-derived longitude variation climatology. Atmospheric composition is represented in GRAM by concentrations of both major and minor species. Above 120 km, MET provides concentration values for N2, O2, Ar, O, He, and H. Below 120 km, species represented also include H2O, O3, N2O, CO, CH4, and CO2. Water vapor in GRAM is based on a combination of GUACA, Air Force Geophysics Laboratory (AFGL), and NASA Langley Research Center climatologies. Other constituents below 120 km are based on a combination of AFGL and MAP/CIRA climatologies. This report presents results of comparisons between GRAM constituent concentrations and those provided by the Naval Research Laboratory (NRL) climatology of Summers (NRL/MR/7641-93-7416, 1993). GRAM and NRL concentrations were compared for seven species (CH4, CO, CO2, H2O, N2O, O2, and O3) for the months of January, April, July, and October, over the height range 0-115 km, and latitudes -90deg to +90deg at 10deg increments. Average GRAM-NRL correlations range from 0.878 (for CO) to 0.975 (for O3), with an average over all seven species of 0.936 (standard deviation 0.049).

  17. Communication architecture for AAL. Supporting patient care by health care providers in AAL-enhanced living quarters.

    PubMed

    Nitzsche, T; Thiele, S; Häber, A; Winter, A

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records". Concepts of Ambient Assisted Living (AAL) support long-term health monitoring and further medical and other services for multi-morbid patients with chronic diseases. In Germany many AAL and telemedical applications exist, but synergy effects from common agreements on essential application components and standards have not been achieved. It is necessary to define a communication architecture based on common definitions of communication scenarios, application components and communication standards. The development of a communication architecture requires several steps. To obtain a reference model for the problem area, different AAL and telemedicine projects were compared and relevant data elements were generalized. The derived reference model defines standardized communication links. As a result, the authors present an approach towards a reference architecture for AAL communication. The focus of the architecture lies on the communication layer. The necessary application components are identified, and a communication based on standards and their extensions is highlighted. This makes possible the exchange of patient-individual events (supported by an event classification model) and of raw and aggregated data from the personal home area, via a telemedicine center, to health care providers.

  18. Pseudo-conformer models for linear molecules: Joint treatment of spectroscopic, electron diffraction and ab initio data for the C3O2 molecule

    NASA Astrophysics Data System (ADS)

    Tarasov, Yury I.; Kochikov, Igor V.

    2018-06-01

    Dynamic analysis of molecules with large-amplitude motions (LAM) based on the pseudo-conformer approach has been successfully applied to various molecules. Floppy linear molecules represent a special class of molecular structures that possess a pair of conjugate LAM coordinates but allow one-dimensional treatment. In this paper, a treatment previously developed for semirigid molecules is applied to the carbon suboxide molecule. This molecule, characterized by extremely large-amplitude CCC bending, has been thoroughly investigated by spectroscopic and ab initio methods. However, the earlier electron diffraction investigations were performed within a static approach, obtaining thermally averaged parameters. In this paper we apply a procedure aimed at obtaining a short list of self-consistent reference geometry parameters of a molecule, while all thermally averaged parameters are calculated from the reference geometry, relaxation dependencies, and quadratic and cubic force constants. We show that such a model satisfactorily describes the available electron diffraction evidence with various QC bending potential energy functions when the r.m.s. CCC angle is in the interval 151 ± 2°. This leads to a self-consistent molecular model satisfying the spectroscopic and GED data. The parameters for the linear reference geometry are re(CO) = 1.161(2) Å and re(CC) = 1.273(2) Å.

  19. HDRK-Woman: whole-body voxel model based on high-resolution color slice images of Korean adult female cadaver

    NASA Astrophysics Data System (ADS)

    Yeom, Yeon Soo; Jeong, Jong Hwi; Kim, Chan Hyeong; Han, Min Cheol; Ham, Bo Kyoung; Cho, Kun Woo; Hwang, Sung Bae

    2014-07-01

    In a previous study, we constructed a male reference Korean phantom, HDRK-Man (High-Definition Reference Korean-Man), to represent Korean adult males for radiation protection purposes. In the present study, a female phantom, HDRK-Woman (High-Definition Reference Korean-Woman), was constructed to represent Korean adult females. High-resolution color photographic images obtained by serial sectioning of a 26-year-old Korean adult female cadaver were utilized. The body height and weight, the skeletal mass, and the dimensions of the individual organs and tissues were adjusted to the reference Korean data. The phantom was then compared with the International Commission on Radiological Protection (ICRP) female reference phantom in terms of calculated organ doses and organ-depth distributions. Additionally, the effective doses were calculated using both the HDRK-Man and HDRK-Woman phantoms, and the values were compared with those of the ICRP reference phantoms.

  20. HDRK-Woman: whole-body voxel model based on high-resolution color slice images of Korean adult female cadaver.

    PubMed

    Yeom, Yeon Soo; Jeong, Jong Hwi; Kim, Chan Hyeong; Han, Min Cheol; Ham, Bo Kyoung; Cho, Kun Woo; Hwang, Sung Bae

    2014-07-21

    In a previous study, we constructed a male reference Korean phantom, HDRK-Man (High-Definition Reference Korean-Man), to represent Korean adult males for radiation protection purposes. In the present study, a female phantom, HDRK-Woman (High-Definition Reference Korean-Woman), was constructed to represent Korean adult females. High-resolution color photographic images obtained by serial sectioning of a 26-year-old Korean adult female cadaver were utilized. The body height and weight, the skeletal mass, and the dimensions of the individual organs and tissues were adjusted to the reference Korean data. The phantom was then compared with the International Commission on Radiological Protection (ICRP) female reference phantom in terms of calculated organ doses and organ-depth distributions. Additionally, the effective doses were calculated using both the HDRK-Man and HDRK-Woman phantoms, and the values were compared with those of the ICRP reference phantoms.

  1. An analysis of methods for gravity determination and their utilization for the calculation of geopotential numbers in the Slovak national levelling network

    NASA Astrophysics Data System (ADS)

    Majkráková, Miroslava; Papčo, Juraj; Zahorec, Pavol; Droščák, Branislav; Mikuška, Ján; Marušiak, Ivan

    2016-09-01

    The vertical reference system of the Slovak Republic is realized by the National Levelling Network (NLN). Normal heights according to Molodensky were introduced as the reference heights in the NLN in 1957. Since then, the gravity correction needed to determine the reference heights in the NLN has been obtained by interpolation from either simple or complete Bouguer anomalies; we refer to this method as the "original" one. Currently, the method based on geopotential numbers is the preferred way to unify the European levelling networks. The core of this article is an analysis of different approaches to gravity determination and their application to the calculation of geopotential numbers at the points of the NLN. The first method is based on the calculation of gravity at levelling points from interpolated values of the complete Bouguer anomaly using the CBA2G_SK software. The second method is based on the global geopotential model EGM2008 improved by the Residual Terrain Model (RTM) approach. The calculated gravity is used to determine the normal heights according to Molodensky along parts of the levelling lines around the EVRF2007 datum point EH-V. Pitelová (UELN-1905325) and along the 2nd-order levelling line of the NLN to Kráľova hoľa Mountain (the highest point measured by levelling). The results of our analysis illustrate that the method based on interpolated gravity values is the better option for gravity determination when measured gravity is unavailable. It was shown that this method is suitable for the determination of geopotential numbers and reference heights in the Slovak national levelling network at points where gravity is not observed directly. We also demonstrated the necessity of using a precise RTM to refine the results derived solely from EGM2008.
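    The geopotential-number approach mentioned above reduces, for a levelled line, to summing gravity-weighted height differences. The numbers below are illustrative, not values from the Slovak network.

```python
def geopotential_number(gravities, height_diffs):
    """Geopotential number of a levelled line: C = sum(g_i * dh_i).

    gravities: mean observed or interpolated gravity (m/s^2) along each
    levelled section; height_diffs: measured height differences (m).
    C comes out in m^2/s^2; dividing it by a mean normal gravity gives
    the normal (Molodensky) height.
    """
    return sum(g * dh for g, dh in zip(gravities, height_diffs))

# Three sections climbing 120 m in total, gravity decreasing slightly:
C = geopotential_number([9.8105, 9.8103, 9.8101], [50.0, 40.0, 30.0])
normal_height = C / 9.8103  # illustrative mean normal gravity value
```

    This is why gravity must be known (measured or interpolated) at the levelling points: the height differences alone do not determine C.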

  2. Accuracy and coverage of the modernized Polish Maritime differential GPS system

    NASA Astrophysics Data System (ADS)

    Specht, Cezary

    2011-01-01

    The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange correction factors and ancillary information broadcast over selected marine reference stations. The DGPS service position and integrity information satisfy the requirements of coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized (in 2009) to meet the requirements set out in the IMO resolution for a future GNSS, while preserving backward signal compatibility of user equipment. Having finalized installation of the new L1, L2 reference equipment, performance tests were carried out. The paper presents the results of coverage modeling and an accuracy measuring campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results allowed us to verify the coverage area of the differential signal from the reference station and to calculate the repeatable and absolute accuracy of the system after the technical modernization. The obtained field-strength coverage area and position statistics (215,000 fixes) were compared to past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no campaigns have been performed on differential Galileo. However, since its signals, signal processing, and receiver techniques are comparable to those known from DGPS, since all satellite differential GNSS systems use the same transmission standard (RTCM), and since maritime DGPS radiobeacons are standardized in all radio-communication aspects (frequency, binary rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. Coverage of the reference station was calculated with dedicated software that computes the signal strength level from transmitter parameters or from a field signal-strength measurement campaign carried out at representative points. The software operates on a Baltic Sea vector map with ground electrical parameters and models the atmospheric noise level in the transmission band.

  3. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of...which adds specificity to the model and can make nonlinear data more manageable.

  4. Orbital motions of astronomical bodies and their centre of mass from different reference frames: a conceptual step between the geocentric and heliocentric models

    NASA Astrophysics Data System (ADS)

    Guerra, André G. C.; Simeão Carvalho, Paulo

    2016-09-01

    The motion of astronomical bodies and the centre of mass of the system is not always well perceived by students. One of the struggles is the conceptual change of reference frame, which is the same one that held back the acceptance of the Heliocentric model over the Geocentric one. To address the question, the notions of centre of mass, equations of motion (and their numerical solution for a system of multiple bodies), and change of frame of reference are introduced. The discussion is based on conceptual and real-world examples, using the solar system. Consequently, through the use of simple ‘do it yourself’ methods and basic equations, students can debate complex motions, and gain a wider and potentially more effective understanding of physics.
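
    The frame-change idea can be made concrete with a minimal barycentre computation; this sketches only the centre-of-mass and frame-transformation step, not the orbit integration, and the masses and velocities are arbitrary illustrative values rather than solar system data.

```python
# Sketch: centre of mass of a two-body "sun-planet" system, and the
# motion re-expressed in the barycentric frame, where total momentum
# vanishes. All numbers are illustrative round values.

def centre_of_mass(masses, vectors):
    """Mass-weighted mean of position (or velocity) vectors."""
    mtot = sum(masses)
    return [sum(m * v[i] for m, v in zip(masses, vectors)) / mtot
            for i in range(len(vectors[0]))]

masses = [1000.0, 1.0]                  # heavy "sun", light "planet"
pos = [[0.0, 0.0], [100.0, 0.0]]
vel = [[0.0, -0.001], [0.0, 1.0]]       # roughly opposite momenta

com = centre_of_mass(masses, pos)
com_v = centre_of_mass(masses, vel)     # velocity of the barycentre

# Velocities relative to the barycentre:
rel_vel = [[v[i] - com_v[i] for i in range(2)] for v in vel]
# Total momentum in the barycentric frame:
ptot = [sum(m * v[i] for m, v in zip(masses, rel_vel)) for i in range(2)]
```

    The barycentre sits only 100/1001 of the separation from the heavy body, which is why the geocentric and heliocentric descriptions of the same motion look so different.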

  5. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  6. Alternative Methods for Estimating Plane Parameters Based on a Point Cloud

    NASA Astrophysics Data System (ADS)

    Stryczek, Roman

    2017-12-01

    Non-contact measurement techniques based on triangulation optical sensors are increasingly popular in measurements carried out with industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, the presence of a number of points that deviate from the reference model, and gross errors that must be eliminated from the analysis. To obtain vector information about the reference model from the points contained in the cloud, the data acquired during a measurement must be subjected to appropriate processing operations. The present paper analyzes the suitability of the methods known as RANdom SAmple Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
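
    Of the three methods compared above, RANSAC is the simplest to sketch. The following is a minimal, self-contained RANSAC plane fit (plane in the form z = ax + by + c) against a synthetic cloud with gross outliers; the thresholds, iteration count, and data are illustrative choices, not the paper's settings.

```python
import random

def fit_plane_3pts(p1, p2, p3):
    """Solve a*x + b*y + c = z through three points via Cramer's rule."""
    rows = [p1, p2, p3]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[x, y, 1.0] for x, y, _ in rows]
    z = [p[2] for p in rows]
    d = det3(A)
    if abs(d) < 1e-12:
        return None                      # degenerate (collinear) sample
    coeffs = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = z[i]
        coeffs.append(det3(M) / d)
    return tuple(coeffs)                 # (a, b, c)

def ransac_plane(points, iters=200, tol=0.05, rng=random.Random(0)):
    """Keep the 3-point model with the most inliers within `tol` of z."""
    best, best_inliers = None, -1
    for _ in range(iters):
        model = fit_plane_3pts(*rng.sample(points, 3))
        if model is None:
            continue
        a, b, c = model
        inliers = sum(1 for x, y, z in points
                      if abs(a * x + b * y + c - z) < tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best

# Synthetic cloud: plane z = 0.5x + 0.2y + 3 with small noise, plus outliers.
rng = random.Random(1)
pts = [(x, y, 0.5 * x + 0.2 * y + 3.0 + rng.gauss(0, 0.01))
       for x in range(10) for y in range(10)]
pts += [(rng.uniform(0, 9), rng.uniform(0, 9), rng.uniform(8, 20))
        for _ in range(20)]              # gross outliers
a, b, c = ransac_plane(pts)
```

    Because the winning model is chosen by inlier count rather than by a global least-squares fit, the gross outliers have essentially no influence on the recovered plane.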

  7. Enlarged leukocyte referent libraries can explain additional variance in blood-based epigenome-wide association studies.

    PubMed

    Kim, Stephanie; Eliot, Melissa; Koestler, Devin C; Houseman, Eugene A; Wetmur, James G; Wiencke, John K; Kelsey, Karl T

    2016-09-01

    We examined whether variation in blood-based epigenome-wide association studies could be more completely explained by augmenting existing reference DNA methylation libraries. We compared existing and enhanced libraries in predicting variability in three publicly available 450K methylation datasets that collected whole-blood samples. Models were fit separately to each CpG site and used to estimate the additional variability explained when adjustments for cell composition were made with each library. Calculation of the mean difference in the CpG-specific residual sums of squares between models for an arthritis, an aging, and a metabolic syndrome dataset indicated that an enhanced library explained significantly more variation across all three datasets (p < 10^-3). Pathologically important immune cell subtypes can explain important variability in epigenome-wide association studies done in blood.

  8. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
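
    The first stage of the approach, mapping a genome to a two-dimensional bitmap, can be sketched as follows. The 2-bit nucleotide code and the row width are assumptions for illustration (the paper's actual encoding may differ), and the rectangular partition coder itself is not reproduced.

```python
# Sketch of the sequence -> 2-D binary image transform only.
# The 2-bit code below is an illustrative assumption.

CODE = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def genome_to_bitmap(seq, width):
    """Pack a nucleotide string into rows of `width` bits (2 bits/base)."""
    bits = [b for base in seq for b in CODE[base]]
    bits += [0] * (-len(bits) % width)          # zero-pad the last row
    return [bits[i:i + width] for i in range(0, len(bits), width)]

bitmap = genome_to_bitmap("ACGTACGT", width=4)
```

    A repetitive sequence yields a bitmap with large uniform rectangles, which is exactly the structure a rectangular partition coder exploits.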

  9. The Current Status and Tendency of China Millimeter Coordinate Frame Implementation and Maintenance

    NASA Astrophysics Data System (ADS)

    Cheng, P.; Cheng, Y.; Bei, J.

    2017-12-01

    China Geodetic Coordinate System 2000 (CGCS2000) was first officially declared the national standard coordinate system on July 1, 2008. This reference frame was defined in the ITRF97 frame at epoch 2000.0 and included 2600 GPS geodetic control points. The paper discusses differences between CGCS2000 and later ITRF versions, such as ITRF2014, in terms of technical implementation and maintenance. With the development of the BeiDou navigation satellite system (BDS), especially its third generation with global signal coverage in the future, and with progress in space geodetic technology, it is possible to establish a global millimeter-level reference frame based on space geodetic techniques including BDS. The millimeter reference frame implementation concerns two factors: 1) the estimation of geocenter motion variation, and 2) the modeling of nonlinear site motion. In this paper, geocenter inversion methods are discussed and the results derived from various techniques are compared. Our nonlinear site motion modeling focuses on the singular spectrum analysis method, which has clear advantages over modeling of physical Earth effects. The results presented in the paper are expected to provide a reference for future CGCS2000 maintenance.

  10. Optimal Estimation with Two Process Models and No Measurements

    DTIC Science & Technology

    2015-08-01

    ...models will be lost if either of the models includes deterministic modeling errors. ...independent process models when no measurements are present. The observer follows a derivation similar to that of the discrete time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with an

  11. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes involved, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules relating high-resolution surface variables to the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
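
    Step (3) can be illustrated with a minimal AR(1) noise generator whose variance and lag-1 autocorrelation are tuned to target values; the parameters are illustrative, and this one-dimensional sketch ignores the spatial structure handled by the actual scheme.

```python
import random

# Sketch of an autoregressive noise-generation step: AR(1) noise
# x_t = rho * x_{t-1} + e_t, with the innovation scaled so the
# stationary variance matches a target. Parameters are illustrative.

def ar1_noise(n, target_var, rho, rng=random.Random(42)):
    """Generate n AR(1) samples with Var(x) = target_var, lag-1 corr = rho."""
    innov_sd = (target_var * (1.0 - rho * rho)) ** 0.5
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, innov_sd)
        out.append(x)
    return out

series = ar1_noise(20000, target_var=1.5, rho=0.8)
```

    Matching both the variance and the autocorrelation is the point of the step: white noise alone would reproduce the subgrid variance but not its temporal persistence.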

  12. Ionosonde-based indices for improved representation of solar cycle variation in the International Reference Ionosphere model

    NASA Astrophysics Data System (ADS)

    Brown, Steven; Bilitza, Dieter; Yiǧit, Erdal

    2018-06-01

    A new monthly ionospheric index, IGNS, is presented to improve the representation of the solar cycle variation of the ionospheric F2 peak plasma frequency, foF2. IGNS is calculated using a methodology similar to the construction of the "global effective sunspot number", IG, given by Liu et al. (1983), but selects ionosonde observations by hemisphere. We incorporated the updated index into the International Reference Ionosphere (IRI) model and compared the foF2 model predictions with global ionospheric observations. We also investigated the influence of the underlying foF2 model on the IG index. IRI has two options for foF2 specification, the CCIR-66 and URSI-88 foF2 models. For the first time, we have calculated IG using URSI-88 and assessed the impact on model predictions. Through a retrospective model-data comparison, results show that the inclusion of the new monthly IGNS index in place of the current 12-month smoothed IG index reduces the foF2 model prediction errors by nearly a factor of two. These results apply to both day-time and night-time predictions. This is due to an overall improved prediction of foF2 seasonal and solar cycle variations in the different hemispheres.

  13. Reconstruction of metabolic pathways by combining probabilistic graphical model-based and knowledge-based methods

    PubMed Central

    2014-01-01

    Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped onto organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method may produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614

  14. Relationship between circadian typology and big five personality domains.

    PubMed

    Tonetti, Lorenzo; Fabbri, Marco; Natale, Vincenzo

    2009-02-01

    We explored the relationship between personality, based on the five-factor model, and circadian preference. To this end, 503 participants (280 females, 223 males) were administered the Morningness-Eveningness Questionnaire (MEQ) and the self-report version of the Big Five Observer (BFO) to determine circadian preference and personality features, respectively. Morning types scored significantly higher than evening and intermediate types on the conscientiousness factor. Evening types were found to be more neurotic than morning types. With reference to the big five personality model, our data, together with those of all the previous studies, indicate that the conscientiousness domain is the one that best discriminates among the three circadian types. Results are discussed with reference to neurobiological models of personality.

  15. Relevance of workplace social mixing during influenza pandemics: an experimental modelling study of workplace cultures.

    PubMed

    Timpka, T; Eriksson, H; Holm, E; Strömgren, M; Ekberg, J; Spreco, A; Dahlström, Ö

    2016-07-01

    Workplaces are one of the most important regular meeting places in society. The aim of this study was to use simulation experiments to examine the impact of different workplace cultures on influenza dissemination during pandemics. The impact is investigated by experiments with defined social-mixing patterns at workplaces using semi-virtual models based on authentic sociodemographic and geographical data from a North European community (population 136 000). A simulated pandemic outbreak was found to affect 33% of the total population in the community with the reference academic-creative workplace culture; virus transmission at the workplace accounted for 10·6% of the cases. A model with a prevailing industrial-administrative workplace culture generated 11% lower incidence than the reference model, while the model with a self-employed workplace culture (also corresponding to a hypothetical scenario with all workplaces closed) produced 20% fewer cases. The model representing an academic-creative workplace culture with restricted workplace interaction generated 12% lower cumulative incidence compared to the reference model. The results display important theoretical associations between workplace social-mixing cultures and community-level incidence rates during influenza pandemics. Social interaction patterns at workplaces should be taken into consideration when analysing virus transmission patterns during influenza pandemics.

  16. Exploring behavior of an unusual megaherbivore: A spatially explicit foraging model of the hippopotamus

    USGS Publications Warehouse

    Lewison, R.L.; Carter, J.

    2004-01-01

    Herbivore foraging theories have been developed for and tested on herbivores across a range of sizes. Due to logistical constraints, however, little research has focused on foraging behavior of megaherbivores. Here we present a research approach that explores megaherbivore foraging behavior, and assesses the applicability of foraging theories developed on smaller herbivores to megafauna. With simulation models as reference points for the analysis of empirical data, we investigate foraging strategies of the common hippopotamus (Hippopotamus amphibius). Using a spatially explicit individual based foraging model, we apply traditional herbivore foraging strategies to a model hippopotamus, compare model output, and then relate these results to field data from wild hippopotami. Hippopotami appear to employ foraging strategies that respond to vegetation characteristics, such as vegetation quality, as well as spatial reference information, namely distance to a water source. Model predictions, field observations, and comparisons of the two support that hippopotami generally conform to the central place foraging construct. These analyses point to the applicability of general herbivore foraging concepts to megaherbivores, but also point to important differences between hippopotami and other herbivores. Our synergistic approach of models as reference points for empirical data highlights a useful method of behavioral analysis for hard-to-study megafauna. © 2003 Elsevier B.V. All rights reserved.

  17. RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Lab-Scaled DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph

    2014-04-15

    Attached are the .cas and .dat files for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at laboratory-achievable Reynolds numbers (see attached paper). In this case study, taking advantage of the symmetry of the lab-scaled DOE RM1 geometry, only half of the geometry is modeled using the (single) Rotating Reference Frame [RRF] model. In this model the RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included, and the turbulent boundary layer along the blade span is simulated using the wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in the near and far wake of the device at the desired operating conditions. The results of these simulations were validated against in-house experimental data. Please see the attached paper.

  18. Two-body potential model based on cosine series expansion for ionic materials

    DOE PAGES

    Oda, Takuji; Weber, William J.; Tanigawa, Hisashi

    2015-09-23

    We examine a method for constructing a two-body potential model for ionic materials from a Fourier (cosine) series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force, and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement over a broad range of energies and forces with first-principles calculations should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
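
    The coefficient determination described above amounts to linear least squares in the cosine basis. The sketch below fits coefficients to synthetic reference energies generated from a known coefficient set (not first-principles data, and without the force/stress terms or weighting of the actual method); the basis definition and cutoff are illustrative assumptions.

```python
import math

def basis(r, n_terms, r_cut):
    """Cosine basis functions evaluated at separation r (illustrative form)."""
    return [math.cos(k * math.pi * r / r_cut) for k in range(n_terms)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_coefficients(samples, n_terms, r_cut):
    """Normal equations (B^T B) c = B^T e for the cosine coefficients."""
    B = [basis(r, n_terms, r_cut) for r, _ in samples]
    e = [v for _, v in samples]
    BtB = [[sum(row[i] * row[j] for row in B) for j in range(n_terms)]
           for i in range(n_terms)]
    Bte = [sum(row[i] * ei for row, ei in zip(B, e)) for i in range(n_terms)]
    return solve(BtB, Bte)

# Synthetic reference data from a known coefficient set:
true_c = [1.0, -0.5, 0.25]
data = [(r / 10.0, sum(c * b for c, b in zip(true_c, basis(r / 10.0, 3, 1.0))))
        for r in range(1, 10)]
fit = fit_coefficients(data, n_terms=3, r_cut=1.0)
```

    Because the model is linear in its coefficients, the fit has a unique closed-form solution, which is what makes the coefficients "uniquely determined" in the method above.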

  19. Assessing the impacts of sea-level rise and precipitation change on the surficial aquifer in the low-lying coastal alluvial plains and barrier islands, east-central Florida (USA)

    NASA Astrophysics Data System (ADS)

    Xiao, Han; Wang, Dingbao; Hagen, Scott C.; Medeiros, Stephen C.; Hall, Carlton R.

    2016-11-01

    A three-dimensional variable-density groundwater flow and salinity transport model is implemented using the SEAWAT code to quantify the spatial variation of water-table depth and salinity of the surficial aquifer in Merritt Island and Cape Canaveral Island in east-central Florida (USA) under steady-state 2010 hydrologic and hydrogeologic conditions. The developed model is referred to as the `reference' model and calibrated against field-measured groundwater levels and a map of land use and land cover. Then, five prediction/projection models are developed based on modification of the boundary conditions of the calibrated `reference' model to quantify climate change impacts under various scenarios of sea-level rise and precipitation change projected to 2050. Model results indicate that west Merritt Island will encounter lowland inundation and saltwater intrusion due to its low elevation and flat topography, while climate change impacts on Cape Canaveral Island and east Merritt Island are not significant. The SEAWAT models developed for this study are useful and effective tools for water resources management, land use planning, and climate-change adaptation decision-making in these and other low-lying coastal alluvial plains and barrier island systems.

  20. Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.

    PubMed

    Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S

    2015-07-21

    The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.

  1. Impact of calcium and TOC on biological acidification assessment in Norwegian rivers.

    PubMed

    Schneider, Susanne C

    2011-02-15

    Acidification continues to be a major impact in freshwaters of northern Europe, and the biotic response to chemical recovery from acidification is often not a straightforward process. The focus on biological recovery is relevant within the context of the EU Water Framework Directive, where a biological monitoring system is needed that detects differences in fauna and flora compared to undisturbed reference conditions. In order to verify true reference sites for biological analyses, expected river pH is modeled based on Ca and TOC, and 94% of variability in pH at reference sites is explained by Ca alone, while 98% is explained by a combination of Ca and TOC. Based on 59 samples from 28 reference sites, compared to 547 samples from 285 non-reference sites, the impact of calcium and total organic carbon (TOC) on benthic algae species composition, expressed as acidification index periphyton (AIP), is analyzed. Rivers with a high Ca concentration have a naturally higher AIP, and TOC affects reference AIP only at low Ca concentrations. Four biological river types are needed for assessment of river acidification in Norway based on benthic algae: very calcium-poor, humic rivers (Ca<1 mg/l and TOC>2 mg/l); very calcium-poor, clear rivers (Ca<1 mg/l and TOC<2 mg/l); calcium-poor rivers (Ca between 1 and 4 mg/l); moderately calcium rich rivers (Ca>4 mg/l). A biological assessment system for river acidification in Norway based on benthic algae is presented, following the demands of the Water Framework Directive. Copyright © 2010 Elsevier B.V. All rights reserved.
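
    The four biological river types above can be written directly as a lookup on the published Ca and TOC thresholds; the handling of values falling exactly on a boundary is an assumption, since the abstract does not specify it.

```python
# Classification by the thresholds quoted in the abstract.
# Boundary handling (values exactly at 1 or 4 mg/l Ca, or 2 mg/l TOC)
# is an illustrative assumption.

def river_type(ca, toc):
    """Return the biological river type for Ca and TOC in mg/l."""
    if ca < 1.0:
        return ("very calcium-poor, humic" if toc > 2.0
                else "very calcium-poor, clear")
    if ca <= 4.0:
        return "calcium-poor"
    return "moderately calcium-rich"
```

    In the assessment system, each type carries its own reference AIP, so classifying a site is the first step before comparing its benthic algae to reference conditions.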

  2. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require projector calibration parameters or reference planes placed at many known positions. Obviously, introducing projector calibration can reduce the accuracy of the reconstruction result, and setting reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method without projector parameters is proposed, and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, is used to calculate the 3D coordinates of the reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of our proposed approach; the measurement accuracy can reach about 0.0454 mm.

  3. Rotor Position Sensorless Control and Its Parameter Sensitivity of Permanent Magnet Motor Based on Model Reference Adaptive System

    NASA Astrophysics Data System (ADS)

    Ohara, Masaki; Noguchi, Toshihiko

    This paper describes a new method for rotor position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). The method features an MRAS in the current control loop to estimate the rotor speed and position using only current sensors. This method, like almost all conventional methods, incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced voltage constant. Hence, it is important to investigate how deviations of these parameters affect the estimated rotor position. First, this paper proposes a structure for the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from their nominal values, and derives the relationship between the estimated position and the parameter deviations in steady state. Finally, experimental results are presented to show the performance and effectiveness of the proposed method.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Kamal; Zhang, Yu; Sung, Chi-Jen

    We study the influence of blending n-butanol on the ignition delay times of n-heptane and iso-octane, the primary reference fuels for gasoline. The ignition delay times are measured using a rapid compression machine, with an emphasis on low-to-intermediate temperature conditions. The experiments are conducted at equivalence ratios of 0.4 and 1.0, for a compressed pressure of 20 bar, with temperatures at the end of compression ranging from 613 K to 979 K. The effect of n-butanol addition on the development of the two-stage ignition characteristics of the two primary reference fuels is also examined. The experimental results are compared to predictions obtained using a detailed chemical kinetic mechanism, obtained by a systematic merger of previously reported base models for the combustion of the individual fuel constituents. Finally, a sensitivity analysis on the base and merged models is performed to understand the dependence of autoignition delay times on the model parameters.

  5. Research on Model of Student Engagement in Online Learning

    ERIC Educational Resources Information Center

    Peng, Wang

    2017-01-01

    In this study, online learning refers to students engaging in organized learning through an online learning platform under the guidance of teachers. Based on the analysis of related research results and considering the existing problems, the main contents of this paper include the following aspects: (1) Analyze and study the current student engagement model.…

  6. Students' Problem-Solving in Mechanics: Preference of a Process Based Model.

    ERIC Educational Resources Information Center

    Stavy, Ruth; And Others

    Research in science and mathematics education has indicated that students often use inappropriate models for solving problems because they tend to mentally represent a problem according to surface features instead of referring to scientific concepts and features. The objective of the study reported in this paper was to determine whether 34 Israeli…

  7. Teaching Supply Chain Management Complexities: A SCOR Model Based Classroom Simulation

    ERIC Educational Resources Information Center

    Webb, G. Scott; Thomas, Stephanie P.; Liao-Troth, Sara

    2014-01-01

    The SCOR (Supply Chain Operations Reference) Model Supply Chain Classroom Simulation is an in-class experiential learning activity that helps students develop a holistic understanding of the processes and challenges of supply chain management. The simulation has broader learning objectives than other supply chain related activities such as the…

  8. Supplemental Tables to the Annual Energy Outlook

    EIA Publications

    2017-01-01

    The Annual Energy Outlook (AEO) Supplemental tables were generated for the reference case of the AEO using the National Energy Modeling System, a computer-based model which produces annual projections of energy markets. Most of the tables were not published in the AEO, but contain regional and other more detailed projections underlying the AEO projections.

  9. Growth in Mathematical Understanding: How Can We Characterise It and How Can We Represent It?

    ERIC Educational Resources Information Center

    Pirie, Susan; Kieren, Thomas

    1994-01-01

    Proposes a model for the growth of mathematical understanding based on the consideration of understanding as a whole, dynamic, leveled but nonlinear process. Illustrates the model using the concept of fractions. How to map the growth of understanding is explained in detail. (Contains 26 references.) (MKR)

  10. The Comprehensive Community Career and Vocational Guidance and Counseling Model.

    ERIC Educational Resources Information Center

    West Virginia State Dept. of Education, Charleston. Bureau of Vocational, Technical, and Adult Education.

    A model is presented which is intended to serve as a reference and provide guidelines for the establishment of community based vocational guidance and counseling programs in West Virginia. The first of six sections identifies and expands four components of the program: self-understanding, decision making, environmental awareness, and job…

  11. A reference model system of the industrial yeast Saccharomyces cerevisiae is needed for development of the next-generation biocatalyst toward advanced biofuels production

    USDA-ARS?s Scientific Manuscript database

    The diploid industrial yeast Saccharomyces cerevisiae has demonstrated distinct characteristics that differ from haploid laboratory model strains. However, as a workhorse for a broad range of fermentation-based industrial applications, it has been poorly characterized at the genome level. Observations on the...

  12. Dynamic Fuzzy Model Development for a Drum-type Boiler-turbine Plant Through GK Clustering

    NASA Astrophysics Data System (ADS)

    Habbi, Ahcène; Zelmat, Mimoun

    2008-10-01

    This paper discusses a TS fuzzy model identification method for an industrial drum-type boiler plant using the GK fuzzy clustering approach. The fuzzy model is constructed from a set of input-output data that covers a wide operating range of the physical plant. The reference data are generated using a complex first-principles mathematical model that describes the key dynamical properties of the boiler-turbine dynamics. The proposed fuzzy model is derived by means of the fuzzy clustering method, with particular attention to structure flexibility and model interpretability. This may provide the basis for a new way to design model-based control and diagnosis mechanisms for the complex nonlinear plant.

  13. Billing code algorithms to identify cases of peripheral artery disease from administrative data

    PubMed Central

    Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J

    2013-01-01

    Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
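
    An integer code score of the kind described is usually obtained by scaling and rounding logistic-regression coefficients into per-code points, summing the points over a patient's billing codes, and thresholding. A toy sketch with entirely hypothetical codes, points, and threshold (not the Mayo algorithm or its weights):

```python
# Hypothetical per-code points, e.g. round(10 * logistic-regression coefficient)
POINTS = {"440.20": 3, "440.21": 3, "443.9": 2, "250.00": 1}

def pad_score(patient_codes, points=POINTS, threshold=4):
    """Sum points over a patient's billing codes; flag PAD if score >= threshold."""
    score = sum(points.get(code, 0) for code in patient_codes)
    return score, score >= threshold
```

    A patient billed with 440.21 and 250.00 scores 3 + 1 = 4 and is flagged; codes absent from the table contribute nothing, which is what makes such scores fast to apply to administrative data.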

  14. Cumulus cloud model estimates of trace gas transports

    NASA Technical Reports Server (NTRS)

    Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.

    1989-01-01

    Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.

  15. Defining And Employing Reference Conditions For Ecological Restoration Of The Lower Missouri River, USA

    NASA Astrophysics Data System (ADS)

    Jacobson, R. B.; Elliott, C. M.; Reuter, J. M.

    2008-12-01

    Ecological reference conditions are especially challenging for large, intensively managed rivers like the Lower Missouri. Historical information provides broad understanding of how the river has changed, but translating historical information into quantitative reference conditions remains a challenge. Historical information is less available for biological and chemical conditions than for physical conditions. For physical conditions, much of the early historical condition is documented in date-specific measurements or maps, and it is difficult to determine how representative these conditions are for a river system that was characterized historically by large floods and high channel migration rates. As an alternative to a historically defined least-disturbed condition, spatial variation within the Missouri River basin provides potential for defining a best-attainable reference condition. A possibility for the best-attainable condition for channel morphology is an unchannelized segment downstream of the lowermost dam (rkm 1298 - 1203). This segment retains multiple channels and abundant sandbars although it has a highly altered flow regime and a greatly diminished sediment supply. Conversely, downstream river segments have more natural flow regimes, but have been narrowed and simplified for navigation and bank stability. We use two computational tools to compensate for the lack of ideal reference conditions. The first is a hydrologic model that synthesizes natural and altered flow regimes based on 100 years of daily inputs to the river (daily routing model, DRM, US Army Corps of Engineers, 1998); the second tool is hydrodynamic modeling of habitat availability. The flow-regime and hydrodynamic outputs are integrated to define habitat-duration curves as the basis for reference conditions (least-disturbed flow regime and least-disturbed channel morphology). Lacking robust biological response models, we use mean residence time of water and a habitat diversity index as generic ecosystem indicators.
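
    A habitat-duration curve is built like a flow-duration curve: rank the daily values and attach an exceedance probability to each. A minimal sketch using the Weibull plotting position (synthetic values, not DRM or hydrodynamic-model output):

```python
def duration_curve(daily_values):
    """(exceedance probability, value) pairs, largest value first.

    Uses the Weibull plotting position p = rank / (n + 1).
    """
    ranked = sorted(daily_values, reverse=True)
    n = len(ranked)
    return [((i + 1) / (n + 1), v) for i, v in enumerate(ranked)]
```

    Reading the curve at, say, p = 0.5 gives the habitat area (or flow) equaled or exceeded half the time, which is how such curves serve as quantitative reference conditions.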

  16. The Reference Elevation Model of Antarctica (REMA): A High Resolution, Time-Stamped Digital Elevation Model for the Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    Howat, I.; Noh, M. J.; Porter, C. C.; Smith, B. E.; Morin, P. J.

    2017-12-01

    We are creating the Reference Elevation Model of Antarctica (REMA), a continuous, high resolution (2-8 m), high precision (accuracy better than 1 m) reference surface for a wide range of glaciological and geodetic applications. REMA will be constructed from stereo-photogrammetric Digital Surface Models (DSM) extracted from pairs of submeter resolution DigitalGlobe satellite imagery and vertically registered to precise elevations from near-coincident airborne LiDAR, ground-based GPS surveys and Cryosat-2 radar altimetry. Both a seamless mosaic and individual, time-stamped DSM strips, collected primarily between 2012 and 2016, will be distributed to enable change measurement. These data will be used for mapping bed topography from ice thickness, measuring ice thickness changes, constraining ice flow and geodynamic models, mapping glacial geomorphology, terrain corrections and filtering of remote sensing observations, and many other science tasks. It will also be critical for mapping ice traverse routes, landing sites and other field logistics planning. REMA will also provide a critical elevation benchmark for future satellite altimetry missions including ICESat-2. Here we report on REMA production progress, initial accuracy assessment and data availability.
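
    Vertical registration of a DSM strip to reference altimetry can, in its simplest form, be a constant shift that removes the median elevation difference at coincident points. A hedged sketch of that idea (REMA's actual registration is considerably more sophisticated):

```python
def register_vertical(dsm_heights, ref_heights):
    """Shift a DSM by the median DSM-minus-reference offset at common points.

    The median resists outliers from clouds, shadows, or matching blunders.
    """
    diffs = sorted(d - r for d, r in zip(dsm_heights, ref_heights))
    n = len(diffs)
    mid = n // 2
    offset = diffs[mid] if n % 2 else (diffs[mid - 1] + diffs[mid]) / 2
    return offset, [h - offset for h in dsm_heights]
```

    The returned offset is also a useful diagnostic: a large shift flags a strip whose photogrammetric solution disagrees with the altimetry.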

  17. Short communication: Development of an equation for estimating methane emissions of dairy cows from milk Fourier transform mid-infrared spectra by using reference data obtained exclusively from respiration chambers.

    PubMed

    Vanlierde, A; Soyeurt, H; Gengler, N; Colinet, F G; Froidmont, E; Kreuzer, M; Grandl, F; Bell, M; Lund, P; Olijhoek, D W; Eugène, M; Martin, C; Kuhla, B; Dehareng, F

    2018-05-09

    Evaluation and mitigation of enteric methane (CH4) emissions from ruminant livestock, in particular from dairy cows, have acquired global importance for sustainable, climate-smart cattle production. Based on CH4 reference measurements obtained with the SF6 tracer technique to determine ruminal CH4 production, a current equation permits evaluation of individual daily CH4 emissions of dairy cows based on milk Fourier transform mid-infrared (FT-MIR) spectra. However, the respiration chamber (RC) technique is considered to be more accurate than SF6 to measure CH4 production from cattle. This study aimed to develop an equation that allows estimating CH4 emissions of lactating cows recorded in an RC from corresponding milk FT-MIR spectra and to challenge its robustness and relevance through validation processes and its application on a milk spectral database. This would permit confirming the conclusions drawn with the existing equation based on SF6 reference measurements regarding the potential to estimate daily CH4 emissions of dairy cows from milk FT-MIR spectra. A total of 584 RC reference CH4 measurements (mean ± standard deviation of 400 ± 72 g of CH4/d) and corresponding standardized milk mid-infrared spectra were obtained from 148 individual lactating cows between 7 and 321 d in milk in 5 European countries (Germany, Switzerland, Denmark, France, and Northern Ireland). The developed equation based on RC measurements showed calibration and cross-validation coefficients of determination of 0.65 and 0.57, respectively, which is lower than those obtained earlier by the equation based on 532 SF6 measurements (0.74 and 0.70, respectively). This means that the RC-based model is unable to explain the variability observed in the corresponding reference data as well as the SF6-based model. The standard errors of calibration and cross-validation were lower for the RC model (43 and 47 g/d vs. 66 and 70 g/d for the SF6 version, respectively), indicating that the model based on RC data was closer to actual values. The root mean squared error (RMSE) of calibration of 42 g/d represents only 10% of the overall daily CH4 production, which is 23 g/d lower than the RMSE for the SF6-based equation. During the external validation step an RMSE of 62 g/d was observed. When the RC equation was applied to a standardized spectral database of milk recordings collected in the Walloon region of Belgium between January 2012 and December 2017 (1,515,137 spectra from 132,658 lactating cows in 1,176 different herds), an average ± standard deviation of 446 ± 51 g of CH4/d was estimated, which is consistent with the range of the values measured using both RC and SF6 techniques. This study confirmed that milk FT-MIR spectra could be used as a potential proxy to estimate daily CH4 emissions from dairy cows provided that the variability to predict is covered by the model. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
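
    The figures of merit quoted above (coefficient of determination and RMSE) are straightforward to recompute when validating such a prediction equation against reference measurements; a generic sketch on synthetic reference/predicted pairs:

```python
def calib_stats(y_ref, y_pred):
    """R^2 and RMSE of predictions against reference measurements."""
    n = len(y_ref)
    mean_ref = sum(y_ref) / n
    ss_res = sum((r - p) ** 2 for r, p in zip(y_ref, y_pred))
    ss_tot = sum((r - mean_ref) ** 2 for r in y_ref)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5
```

    Comparing RMSE to the mean of the reference values (here, roughly 10% of daily CH4 production) puts the model error in physiological context.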

  18. Moving to a Modernized Height Reference System in Canada: Rationale, Status and Plans

    NASA Astrophysics Data System (ADS)

    Veronneau, M.; Huang, J.

    2007-05-01

    A modern society depends on a common coordinate reference system through which geospatial information can be interrelated and exploited reliably. For height measurements this requires the ability to measure mean sea level elevations easily, accurately, and at the lowest possible cost. The current national reference system for elevations, the Canadian Geodetic Vertical Datum of 1928 (CGVD28), offers only partial geographic coverage of the Canadian territory and is affected by inaccuracies that are becoming more apparent as users move to space-based technologies such as GPS. Furthermore, the maintenance and expansion of the national vertical network using spirit-levelling, a costly, time-consuming and labour-intensive proposition, has only been minimally funded over the past decade. It is now generally accepted that the most sustainable alternative for the realization of a national vertical datum is a gravimetric geoid model. This approach defines the datum in relation to an ellipsoid, making it compatible with space-based technologies for positioning. While simplifying access to heights above mean sea level all across the Canadian territory, this approach imposes additional demands on the quality of the geoid model. These are being met by recent and upcoming space gravimetry missions that have been and will be measuring the Earth's gravity field with unprecedented accuracy. To maintain compatibility with the CGVD28 datum materialized at benchmarks, the current first-order levelling can be readjusted by constraining geoid heights at selected stations of the Canadian Base Network. The new reference would change CGVD28 heights of benchmarks by up to 1 m across Canada. However, local height differences between benchmarks would maintain a relative precision of a few cm or better. CGVD28 will co-exist with the new height reference as long as it is required, but it will undoubtedly disappear as benchmarks are destroyed over time.
The adoption of GNSS technologies for positioning should naturally move users to the new height reference and offer the possibility of transferring heights over longer distances, within the precision of the geoid model. This transition will also reduce user dependency on a dense network of benchmarks and offer the possibility for geodetic agencies to provide the reference frame with a reduced number of 3D control points. While the rationale for moving to a modernized height system is easily understood, the acceptance of the new system by users will only occur gradually as they adopt new technologies and procedures to access the height reference. A stakeholder consultation indicates user readiness and an implementation plan is starting to unfold. This presentation will look at the current state of the geoid model and control networks that will support the modernized height system. Results of the consultation and the recommendations regarding the roles and responsibilities of the various stakeholders involved in implementing the transition will also be reported.

  19. Tailoring periodical collections to meet institutional needs.

    PubMed Central

    Delman, B S

    1984-01-01

    A system for tailoring journal collections to meet institutional needs is described. The approach is based on the view that reference work and collection development are variant and complementary forms of the same library function; both tasks have as their objective a literature response to information problems. Utilizing the tools and procedures of the reference search in response to a specific collection development problem topic, the author created a model ranked list of relevant journals. Finally, by linking the model to certain operational and environmental factors in three different health care organizations, he tailored the collection to meet the institutions' respective information needs. PMID:6375775

  20. QSPR model for bioconcentration factors of nonpolar organic compounds using molecular electronegativity distance vector descriptors.

    PubMed

    Qin, Li-Tang; Liu, Shu-Shen; Liu, Hai-Ling

    2010-02-01

    A five-variable model (model M2) was developed for the bioconcentration factors (BCFs) of nonpolar organic compounds (NPOCs) by using the molecular electronegativity distance vector (MEDV) to characterize the structures of NPOCs and variable selection and modeling based on prediction (VSMP) to select the optimum descriptors. The estimated correlation coefficient (r^2) and the leave-one-out cross-validation correlation coefficient (q^2) of model M2 were 0.9271 and 0.9171, respectively. The model was externally validated by splitting the whole data set into a representative training set of 85 chemicals and a validation set of 29 chemicals. The results show that the main structural factors influencing the BCFs of NPOCs are -cCc, cCcc, -Cl, and -Br (where "-" refers to a single bond and "c" refers to a conjugated bond). The quantitative structure-property relationship (QSPR) model can effectively predict the BCFs of NPOCs, and the predictions of the model can also extend the current BCF database of experimental values.
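
    The q^2 statistic used above is the leave-one-out cross-validated analogue of r^2: each compound is held out in turn, the model is refit, and the held-out prediction errors accumulate into PRESS. A sketch for a one-descriptor linear model (synthetic data, not the MEDV descriptors):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_q2(x, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_tot."""
    mean_y = sum(y) / len(y)
    press = 0.0
    for i in range(len(x)):
        a, b = fit_line(x[:i] + x[i + 1:], y[:i] + y[i + 1:])  # refit without i
        press += (y[i] - (a + b * x[i])) ** 2                  # held-out error
    return 1 - press / sum((yi - mean_y) ** 2 for yi in y)
```

    Because each prediction is made without the compound in question, q^2 is always at most r^2, and a large gap between them signals overfitting.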

  1. Analyzing and designing object-oriented missile simulations with concurrency

    NASA Astrophysics Data System (ADS)

    Randorf, Jeffrey Allen

    2000-11-01

    A software object model for the six degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was conducted, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, as well as other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered: a reference structure, a recursive structure, and a reference structure augmented for flight vehicle modeling. The reference OCA design option was chosen to maintain simplicity without compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada, and it was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on suitable hardware, a 33% performance degradation was observed for 4th-order Runge-Kutta integration of two simultaneous ordinary differential equations using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive but complementary architectures: HLA is an interoperability solution, with the OCA as an architectural vehicle for software reuse. Further directions for implementing a 6-DOF missile modeling environment are discussed.
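
    The 4th-order Runge-Kutta integrator whose tasking performance is discussed above is a short, inherently sequential kernel: each stage depends on the previous one, which is why coarse-grained tasking buys little on a single processor. A generic sketch in Python rather than Ada 95:

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)   # each stage needs the one before it
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Advance y from t0 to t1 in n fixed-size RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

    For a system of two ODEs, y and the k's become vectors, but the stage-to-stage dependency chain is identical, so parallelism must come from within f or across independent trajectories.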

  2. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  3. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  4. A Network Method of Measuring Affiliation-Based Peer Influence: Assessing the Influences of Teammates' Smoking on Adolescent Smoking

    ERIC Educational Resources Information Center

    Fujimoto, Kayo; Unger, Jennifer B.; Valente, Thomas W.

    2012-01-01

    Using a network analytic framework, this study introduces a new method to measure peer influence based on adolescents' affiliations or 2-mode social network data. Exposure based on affiliations is referred to as the "affiliation exposure model." This study demonstrates the methodology using data on young adolescent smoking being influenced by…

  5. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

    The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.
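
    With an indexed voxel phantom like the one described, a structure's volume is just its voxel count times the voxel volume. A minimal sketch on a tiny hypothetical 4x4x4 byte array using the phantom's 1.5 mm isotropic voxels:

```python
def structure_volume_mm3(voxels, index, voxel_mm=1.5):
    """Volume of one labeled structure in a flat byte array of index numbers."""
    return voxels.count(index) * voxel_mm ** 3

phantom = bytearray(4 * 4 * 4)       # 64 voxels, all index 0 (background)
for i in (0, 1, 2, 21, 22, 37):      # mark six voxels with hypothetical index 7
    phantom[i] = 7
```

    Six 1.5 mm voxels give 6 * 3.375 = 20.25 mm^3; the real phantom applies the same counting to its 256x256x128 array across 62 structure indices, which is how its volumes were compared to ICRP Reference Man.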

  6. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    PubMed

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of this study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure for supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful for building detailed clinical models. Our informatics approach, leveraging Semantic Web technologies, provides an effective way to build a CIMI-compliant metadata repository that facilitates detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.

  7. Evidence for Model-based Computations in the Human Amygdala during Pavlovian Conditioning

    PubMed Central

    Prévost, Charlotte; McNamee, Daniel; Jessup, Ryan K.; Bossaerts, Peter; O'Doherty, John P.

    2013-01-01

    Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they performed a simply structured Pavlovian conditioning task, measuring activity in the amygdala with a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of the model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference. PMID:23436990
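
The model-based/model-free contrast described above can be sketched with a toy example. This is illustrative only — a simple delta-rule learner versus an expectation computed over an explicit outcome model — and not the algorithms actually fitted in the study; all names and parameters are hypothetical.

```python
import numpy as np

def model_free_update(v, reward, alpha=0.1):
    # Delta-rule (Rescorla-Wagner-style) update: the cached value moves
    # toward the observed reward by a fraction alpha of the prediction error.
    return v + alpha * (reward - v)

def model_based_value(p_outcomes, outcome_values):
    # Model-based value: an expectation computed over an explicit model
    # of outcome probabilities, with no incremental caching.
    return float(np.dot(p_outcomes, outcome_values))

# Toy Pavlovian setting: a cue is followed by reward on 75% of trials.
rng = np.random.default_rng(0)
v_mf = 0.0
for _ in range(500):
    reward = float(rng.random() < 0.75)
    v_mf = model_free_update(v_mf, reward)

v_mb = model_based_value([0.75, 0.25], [1.0, 0.0])
print(round(v_mf, 2), v_mb)  # both settle near 0.75
```

Both learners converge on similar values here; the distinction the study exploits is that only the model-based computation changes immediately when the outcome model changes.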

  8. Identification of candidate reference chemicals for in vitro steroidogenesis assays.

    PubMed

    Pinto, Caroline Lucia; Markey, Kristan; Dix, David; Browne, Patience

    2018-03-01

    The Endocrine Disruptor Screening Program (EDSP) is transitioning from traditional testing methods to integrating ToxCast/Tox21 in vitro high-throughput screening assays for identifying chemicals with endocrine bioactivity. The ToxCast high-throughput H295R steroidogenesis assay may potentially replace the low-throughput assays currently used in the EDSP Tier 1 battery to detect chemicals that alter the synthesis of androgens and estrogens. Herein, we describe an approach for identifying in vitro candidate reference chemicals that affect the production of androgens and estrogens in models of steroidogenesis. Candidate reference chemicals were identified from a review of H295R and gonad-derived in vitro assays used in methods validation and published in the scientific literature. A total of 29 chemicals affecting androgen and estrogen levels satisfied all criteria for positive reference chemicals, while an additional set of 21 and 15 chemicals partially fulfilled criteria for positive reference chemicals for androgens and estrogens, respectively. The identified chemicals included pesticides, pharmaceuticals, industrial and naturally-occurring chemicals with the capability to increase or decrease the levels of the sex hormones in vitro. Additionally, 14 and 15 compounds were identified as potential negative reference chemicals for effects on androgens and estrogens, respectively. These candidate reference chemicals will be informative for performance-based validation of in vitro steroidogenesis models. Copyright © 2017. Published by Elsevier Ltd.

  9. Reference values of bone stiffness index and C-terminal telopeptide in healthy European children.

    PubMed

    Herrmann, D; Intemann, T; Lauria, F; Mårild, S; Molnár, D; Moreno, L A; Sioen, I; Tornaritis, M; Veidebaum, T; Pigeot, I; Ahrens, W

    2014-09-01

    Quantitative ultrasound measurements and bone metabolic markers can help to monitor bone health and to detect impaired skeletal development. Population-based reference values for children may serve as a basis for preventive measures to reduce the risk of osteoporosis and osteoporotic fractures in later life. This is the first paper providing age-, sex- and height-specific reference values for bone stiffness index (SI) and serum carboxy-terminal cross-linking telopeptide of type I collagen (CTX) in healthy, apparently prepubertal children. In the population-based IDEFICS baseline survey (2007-2008) and follow-up (2009-2010), 18,745 children from eight European countries were newly recruited. A total of 10,791 2-10.9-year-old and 1646 3-8.9-year-old healthy children provided data on SI of the right and left calcaneus and serum CTX, respectively. Furthermore, height and weight were measured. Percentile curves were calculated using the Generalized Additive Model for Location, Scale and Shape (GAMLSS) to model the distribution of SI and CTX depending on multiple covariates while accounting for the dispersion, skewness, and kurtosis of the distribution. SI was negatively associated with age and height in children aged 2-5 years, whereas a positive association was observed in children aged 6-10 years. The dip in SI occurred at older age for higher SI percentiles and was observed earlier in taller children than in shorter children. The CTX reference curves showed a linear-positive association with age and height. No major sex differences were observed for the SI and CTX reference values. These reference data lay the groundwork for evaluating bone growth and metabolism in prepubertal children in epidemiological and clinical settings. They may also inform clinical practice to monitor skeletal development and to assess adverse drug reactions during medical treatments.

  10. A Novel Reference Security Model with the Situation Based Access Policy for Accessing EPHR Data.

    PubMed

    Gope, Prosanta; Amin, Ruhul

    2016-11-01

    Electronic Patient Health Record (EPHR) systems may facilitate a patient not only to share his/her health records securely with healthcare professionals but also to control his/her health privacy, in a convenient and easy way even in case of emergency. In order to fulfill these requirements, it is greatly desirable to have an access control mechanism which can efficiently handle every circumstance without compromising security. However, the existing access control mechanisms used in healthcare to regulate and restrict the disclosure of patient data are often bypassed in case of emergencies. In this article, we propose a way to securely share EPHR data under any situation, including break-the-glass (BtG), without compromising its security. In this regard, we design a reference security model, which consists of a multi-level data flow hierarchy, and an efficient access control framework based on the conventional Role-Based Access Control (RBAC) and Mandatory Access Control (MAC) policies.

  11. FT-IR imaging for quantitative determination of liver fat content in non-alcoholic fatty liver.

    PubMed

    Kochan, K; Maslak, E; Chlopicki, S; Baranska, M

    2015-08-07

    In this work we apply FT-IR imaging of large areas of liver tissue cross-section samples (∼5 cm × 5 cm) for quantitative assessment of steatosis in a murine model of Non-Alcoholic Fatty Liver Disease (NAFLD). We quantified the area of liver tissue occupied by lipid droplets (LDs) by FT-IR imaging and, for comparison, by Oil Red O (ORO) staining. Two alternative FT-IR based approaches are presented. The first, straightforward method was based on average spectra from tissues and provided values of the fat content by using a PLS regression model and the reference method. The second one – the chemometric-based method – enabled us to determine the values of the fat content independently of the reference method, by means of k-means cluster (KMC) analysis. In summary, FT-IR images of large liver sections may prove useful for quantifying liver steatosis without the need for tissue staining.

  12. Objects prompt authentic scientific activities among learners in a museum programme

    NASA Astrophysics Data System (ADS)

    Achiam, Marianne; Simony, Leonora; Kramer Lindow, Bent Erik

    2016-04-01

    Although the scientific disciplines conduct practical work in different ways, all consider practical work as the essential way of connecting objects and phenomena with ideas and the abstract. Accordingly, practical work is regarded as central to science education as well. We investigate a practical, object-based palaeontology programme at a natural history museum to identify how palaeontological objects prompt scientific activity among upper secondary school students. We first construct a theoretical framework based on an analysis of the programme's palaeontological content. From this, we build our reference model, which considers the specimens used in the programme, possible palaeontological interpretations of these specimens, and the conditions inherent in the programme. We use the reference model to analyse the activities of programme participants, and illustrate how these activities are palaeontologically authentic. Finally, we discuss our findings, examining the mechanism by which the specimens prompt scientific activities. We also discuss our discipline-based approach, and how it allows us to positively identify participants' activities as authentic. We conclude by discussing the implications of our findings.

  13. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to adaptively compensate for the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The control error is evaluated in terms of the neural network learning error: the control error converges asymptotically to a neighborhood of zero, whose size depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
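
A minimal numerical sketch of the idea, assuming a scalar first-order plant with an invented nonlinearity and an RBF network adapted by a sigma-modification law; the gains, basis functions, and plant are illustrative stand-ins, not the paper's design.

```python
import numpy as np

# Plant: dx/dt = f(x) + u, with unknown f(x) = sin(2x).
# Reference model: dx_m/dt = -x_m + r.
# Controller: u = -x + r - f_hat(x), where f_hat(x) = theta @ phi(x).
centers = np.linspace(-2.0, 2.0, 9)

def phi(x, width=0.5):
    # Gaussian radial basis functions.
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

dt, steps = 0.001, 20000
gamma, sigma = 50.0, 0.01          # adaptation gain, leakage coefficient
x = x_m = 0.0
theta = np.zeros_like(centers)
err = []

for k in range(steps):
    r = np.sin(k * dt)
    e = x - x_m
    u = -x + r - theta @ phi(x)
    # Sigma-modification law: gradient term plus a leakage term that
    # keeps theta bounded despite residual approximation error.
    theta += dt * (gamma * e * phi(x) - gamma * sigma * theta)
    x += dt * (np.sin(2 * x) + u)
    x_m += dt * (-x_m + r)
    err.append(abs(e))

err = np.array(err)
print(round(err[:5000].mean(), 3), round(err[-5000:].mean(), 3))
```

As the network learns the nonlinearity, the tracking error shrinks to a neighborhood of zero set by the RBF approximation error and the leakage, which is exactly the qualitative result the abstract states.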

  14. Fractional Control of An Active Four-wheel-steering Vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Tianting; Tong, Jun; Chen, Ning; Tian, Jie

    2018-03-01

    A four-wheel-steering (4WS) vehicle model and a reference model with a drop filter are constructed. The decoupling of the 4WS vehicle model is carried out, and a fractional PIλDμ controller is introduced into the decoupling strategy to reduce the effects of the uncertainty of the vehicle parameters as well as the unmodelled dynamics on the system performance. Based on optimization techniques, the parameters of the fractional controller are obtained to ensure the robustness of the 4WS vehicle over the specified range of frequencies through proper choice of the constraints. In order to compare with the fractional robust controller, an optimal controller for the same vehicle is also designed. Simulations of the two control systems show that the decoupling fractional robust controller makes the vehicle model track the reference model very well, with better robustness.

  15. A model of geomagnetic secular variation for 1980-1983

    USGS Publications Warehouse

    Peddie, N.W.; Zunde, A.K.

    1987-01-01

    We developed an updated model of the secular variation of the main geomagnetic field during 1980 through 1983 based on annual mean values for that interval from 148 worldwide magnetic observatories. The model consists of a series of 80 spherical harmonics, up to and including those of degree and order 8. We used it to form a proposal for the 1985 revision of the International Geomagnetic Reference Field (IGRF). Comparison of the new model, whose mean epoch is approximately 1982.0, with the Provisional Geomagnetic Reference Field for 1975-1980 (PGRF 1975) indicates that the moment of the centered-dipole part of the geomagnetic field is now decreasing faster than it was 5 years ago. The rate (in field units) indicated by PGRF 1975 was about -25 nT a⁻¹, while for the new model it is -28 nT a⁻¹. © 1987.

  16. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. 
The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
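
The 1D gamma analysis referred to above combines a dose-difference criterion with a distance-to-agreement criterion. A simplified global-gamma sketch (brute-force search over toy, invented depth-dose curves — not the authors' implementation) looks like:

```python
import numpy as np

def gamma_1d(ref, evl, x, dose_crit=0.01, dist_crit=1.0):
    # Simplified global 1D gamma index (1%/1 mm-style criteria).
    # For each reference point, search all evaluated points for the
    # minimum combined dose-difference / distance metric.
    d_ref = dose_crit * ref.max()        # global dose normalization
    gammas = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (evl - di) / d_ref          # dose-difference term
        dx = (x - xi) / dist_crit        # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

x = np.linspace(0, 100, 501)             # depth in mm, 0.2 mm spacing
ref = np.exp(-x / 80.0)                  # toy depth-dose curve
evl = np.exp(-(x - 0.3) / 80.0)          # same curve shifted by 0.3 mm
g = gamma_1d(ref, evl, x)
print((g <= 1.0).mean())                 # pass rate; 1.0 for this shift
```

A point passes when its gamma is at most 1; a sub-criterion spatial shift like the 0.3 mm used here passes everywhere, which is the kind of agreement the abstract reports between the model and the phase space data.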

  17. Towards a cell-based mechanostat theory of bone: the need to account for osteocyte desensitisation and osteocyte replacement.

    PubMed

    Lerebours, Chloé; Buenzli, Pascal R

    2016-09-06

    Bone's mechanostat theory describes the adaptation of bone tissues to their mechanical environment. Many experiments have investigated and observed such structural adaptation. However, there is still much uncertainty about how to define the reference mechanical state at which bone structure is adapted and stable. Clinical and experimental observations show that this reference state varies both in space and in time, over a wide range of timescales. We propose here an osteocyte-based mechanostat theory that encodes the mechanical reference state in osteocyte properties. This theory assumes that osteocytes are initially formed adapted to their current local mechanical environment through modulation of their properties. We distinguish two main types of physiological processes by which osteocytes subsequently modify the reference mechanical state at different timescales. One is cell desensitisation, which occurs rapidly and reversibly during an osteocyte's lifetime. The other is the replacement of osteocytes during bone remodelling, which occurs over the long timescales of bone turnover. The novelty of this theory is to propose that long-lasting morphological and genotypic osteocyte properties provide a material basis for a long-term mechanical memory of bone that is gradually reset by bone remodelling. We test this theory by simulating long-term mechanical disuse (modelling spinal cord injury), and short-term mechanical loadings (modelling daily exercises) with a mathematical model. The consideration of osteocyte desensitisation and of osteocyte replacement by remodelling is able to capture a number of phenomena and timescales observed during the mechanical adaptation of bone tissues, lending support to this theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. The twelve-step recovery model of AA: a voluntary mutual help association.

    PubMed

    Borkman, Thomasina

    2008-01-01

    Alcoholism treatment has evolved to mean professionalized, scientifically based rehabilitation. Alcoholics Anonymous (AA) is not a treatment method; it is far better understood as a Twelve-Step Recovery Program within a voluntary self-help/mutual aid organization of self-defined alcoholics. The Twelve-Step Recovery Model is elaborated in three sections, patterned on the AA logo (a triangle within a circle): The triangle's legs represent recovery, service, and unity; the circle represents the reinforcing effect of the three legs upon each other as well as the "technology" of the sharing circle and the fellowship. The first leg of the triangle, recovery, refers to the journey of individuals to abstinence and a new "way of living." The second leg, service, refers to helping other alcoholics which also connects the participants into a fellowship. The third leg, unity, refers to the fellowship of recovering alcoholics, their groups, and organizations. The distinctive AA organizational structure of an inverted pyramid is one in which the members in autonomous local groups direct input to the national service bodies creating a democratic, egalitarian organization maximizing recovery. Analysts describe the AA recovery program as complex, implicitly grounded in sound psychological principles, and more sophisticated than is typically understood. AA provides a nonmedicalized and anonymous "way of living" in the community and should probably be referred to as the Twelve-Step/Twelve Tradition Recovery Model in order to clearly differentiate it from professionally based twelve-step treatments. There are additional self-help/mutual aid groups for alcoholics who prefer philosophies other than AA.

  19. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
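
The background-reference idea can be miniaturized: maintain a modeled background and code residuals against it rather than against the raw frame. The running-average model below is only a stand-in for the paper's background modeling stage; the sizes, rates, and synthetic scene are invented.

```python
import numpy as np

def update_background(bg, frame, rate=0.05):
    # Running-average background model: the background slowly tracks
    # the scene, suppressing per-frame noise.
    return (1 - rate) * bg + rate * frame

rng = np.random.default_rng(2)
h, w = 16, 16
static = rng.integers(0, 255, (h, w)).astype(float)  # static scene

bg = np.zeros((h, w))
for _ in range(100):
    frame = static + rng.normal(0, 2, (h, w))        # scene + noise
    bg = update_background(bg, frame)

# Background-difference idea: encode the residual against the modeled
# background instead of the raw frame; for static content the residual
# carries far less energy.
frame = static + rng.normal(0, 2, (h, w))
residual = frame - bg
print(round(float(np.abs(residual).mean()), 2),
      round(float(np.abs(frame).mean()), 2))
```

The residual's mean magnitude is orders of magnitude below the raw frame's, which is the source of the coding gain for background blocks.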

  20. Mapping Evidence-Based Treatments for Children and Adolescents: Application of the Distillation and Matching Model to 615 Treatments from 322 Randomized Trials

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Daleiden, Eric L.

    2009-01-01

    This study applied the distillation and matching model to 322 randomized clinical trials for child mental health treatments. The model involved initial data reduction of 615 treatment protocol descriptions by means of a set of codes describing discrete clinical strategies, referred to as practice elements. Practice elements were then summarized in…

  1. Progressive Transitions from Algorithmic to Conceptual Understanding in Student Ability To Solve Chemistry Problems: A Lakatosian Interpretation.

    ERIC Educational Resources Information Center

    Niaz, Mansoor

    The main objective of this study is to construct models based on strategies students use to solve chemistry problems and to show that these models form sequences of progressive transitions similar to what Lakatos (1970) in the history of science refers to as progressive 'problemshifts' that increase the explanatory' heuristic power of the models.…

  2. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
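
The translation of the haptic device position into the reference-CT frame via a displacement field can be sketched with trilinear interpolation. SciPy's `RegularGridInterpolator` is assumed, and the breathing-like field, grid, and magnitudes are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy 3D displacement field on a coarse grid (one 3-vector per voxel):
# a breathing-like superior-inferior shift that varies with z.
grid = [np.linspace(0.0, 100.0, 11)] * 3             # 10 mm spacing, mm
zz = np.meshgrid(*grid, indexing="ij")[2]
disp = np.zeros(zz.shape + (3,))
disp[..., 2] = 5.0 * np.sin(np.pi * zz / 100.0)      # up to 5 mm shift

interp = RegularGridInterpolator(grid, disp)

def to_reference_space(device_pos):
    # Translate the haptic device position into the static reference CT
    # frame by subtracting the interpolated displacement at that point.
    return device_pos - interp(device_pos)[0]

p = np.array([50.0, 50.0, 50.0])
print(to_reference_space(p))  # z shifted back by 5 mm at mid-volume
```

Evaluating the interpolated field per haptic update is cheap, which is what makes kilohertz-rate haptic loops feasible against a single static reference image.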

  3. Predictors of consistent condom use based on the Information-Motivation-Behavioral Skills (IMB) model among female sex workers in Jinan, China

    PubMed Central

    2011-01-01

    Background Female commercial sex workers (FSWs) are at high risk of human immunodeficiency virus (HIV) transmission in China. This study was designed to examine the predictors of condom use with clients during vaginal intercourse among FSWs based on the Information-Motivation-Behavioral Skills (IMB) model and to describe the relationships between IMB model constructs. Methods A cross-sectional study was conducted in Jinan of Shandong Province, from May to October, 2009. Participants (N = 432) were recruited using Respondent-Driven Sampling (RDS). A self-administered questionnaire was used to collect data. Structural equation modeling was used to assess the IMB model. Results A total of 427 (98.8%) participants completed their questionnaires. Condom use was significantly predicted by social referents support, experiences with and attitudes toward condoms, self-efficacy, and health behaviors and condom use skills. Significant indirect predictors of condom use mediated through behavioral skills included HIV knowledge, social referents support, and substance use. Conclusions These results suggest that the IMB model could be used to predict condom use among Chinese FSWs. Further research is warranted to develop preventive interventions on the basis of the IMB model to promote condom use among FSWs in China. PMID:21329512

  4. Acidity in DMSO from the embedded cluster integral equation quantum solvation model.

    PubMed

    Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M

    2014-04-01

    The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pK(a) shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.

  5. Presentation-Practice-Production and Task-Based Learning in the Light of Second Language Learning Theories.

    ERIC Educational Resources Information Center

    Ritchie, Graeme

    2003-01-01

    Features of presentation-practice-production (PPP) and task-based learning (TBL) models for language teaching are discussed with reference to language learning theories. Pre-selection of target structures, use of controlled repetition, and explicit grammar instruction in a PPP lesson are given. Suggests TBL approaches afford greater learning…

  6. The Negotiation Model in Asynchronous Computer-Mediated Communication (CMC): Negotiation in Task-Based Email Exchanges

    ERIC Educational Resources Information Center

    Kitade, Keiko

    2006-01-01

    Based on recent studies, computer-mediated communication (CMC) has been considered a tool to aid in language learning on account of its distinctive interactional features. However, most studies have referred to "synchronous" CMC and neglected to investigate how "asynchronous" CMC contributes to language learning. Asynchronous CMC possesses…

  7. Collaborative localization in wireless sensor networks via pattern recognition in radio irregularity using omnidirectional antennas.

    PubMed

    Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun

    2010-01-01

    In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. Arrangement of two external low-cost omnidirectional dipole antennas is developed by using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field test. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
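
The distance-power gradient model mentioned above is the log-distance path-loss relation RSS(d) = P0 − 10·n·log10(d). A noise-free sketch of range inversion followed by least-squares position fitting (SciPy assumed; the anchor layout and radio parameters are invented, and the robust-regression and azimuth-estimation steps of the paper are omitted):

```python
import numpy as np
from scipy.optimize import least_squares

def rss_to_distance(rss, p0=-40.0, n=2.5):
    # Invert the log-distance path-loss model RSS(d) = p0 - 10*n*log10(d),
    # where p0 is the RSS at 1 m and n the distance-power gradient.
    return 10 ** ((p0 - rss) / (10 * n))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.0])
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = -40.0 - 25.0 * np.log10(d_true)    # noise-free measurements

d_est = rss_to_distance(rss)

def residuals(p):
    # Mismatch between geometric ranges and RSS-derived ranges.
    return np.linalg.norm(anchors - p, axis=1) - d_est

fit = least_squares(residuals, x0=np.array([5.0, 5.0]))
print(fit.x)  # recovers approximately [3.0, 6.0]
```

With real, irregular radiation patterns the inversion is biased — which is precisely why the paper replaces the spherical-pattern assumption with pattern recognition and robust regression.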

  8. Multiple neural states of representation in short-term memory? It's a matter of attention.

    PubMed

    Larocque, Joshua J; Lewis-Peacock, Jarrod A; Postle, Bradley R

    2014-01-01

    Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time, and working memory (WM) refers to the manipulation and use of that information to guide behavior. In recent years it has become apparent that STM and WM interact and overlap with other cognitive processes, including attention (the selection of a subset of information for further processing) and long-term memory (LTM-the encoding and retention of an effectively unlimited amount of information for a much longer period of time). Broadly speaking, there have been two classes of memory models: systems models, which posit distinct stores for STM and LTM (Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974); and state-based models, which posit a common store with different activation states corresponding to STM and LTM (Cowan, 1995; McElree, 1996; Oberauer, 2002). In this paper, we will focus on state-based accounts of STM. First, we will consider several theoretical models that postulate, based on considerable behavioral evidence, that information in STM can exist in multiple representational states. We will then consider how neural data from recent studies of STM can inform and constrain these theoretical models. In the process we will highlight the inferential advantage of multivariate, information-based analyses of neuroimaging data (fMRI and electroencephalography (EEG)) over conventional activation-based analysis approaches (Postle, in press). We will conclude by addressing lingering questions regarding the fractionation of STM, highlighting differences between the attention to information vs. the retention of information during brief memory delays.

  9. Evaluation of quality of precipitation products: A case study using WRF and IMERG data over the central United States

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lin, L. F.; Bras, R. L.

    2017-12-01

    Hydrological applications rely on the availability and quality of precipitation products, especially model- and satellite-based products for use in areas without ground measurements. It is known that the qualities of model- and satellite-based precipitation products are complementary: model-based products exhibit high quality during winters, while satellite-based products tend to be better during summers. To explore that behavior, this study uses 2-m air temperature as auxiliary information to evaluate high-resolution (0.1°×0.1° every hour) precipitation products from Weather Research and Forecasting (WRF) simulations and from version-4 Integrated Multi-satellite Retrievals for GPM (IMERG) early and final runs. The products are evaluated relative to the reference NCEP Stage IV precipitation estimates over the central United States in 2016. The results show that the WRF and IMERG final-run estimates are nearly unbiased, while the IMERG early-run estimates are positively biased. The results also show that the WRF estimates exhibit high correlations with the reference data when the temperature falls below 280 K, and the IMERG estimates (i.e., both early and final runs) do so when the temperature exceeds 280 K. Moreover, the temperature threshold of 280 K, which distinguishes the quality of the WRF and the IMERG products, does not vary significantly with either season or location. This study not only adds insight into current research on the quality of precipitation products but also suggests a simple way to choose either a model- or satellite-based product, or a hybrid model/satellite product, for a given application.

  10. Dynamic route and departure time choice model based on self-adaptive reference point and reinforcement learning

    NASA Astrophysics Data System (ADS)

    Li, Xue-yan; Li, Xue-mei; Yang, Lingrun; Li, Jing

    2018-07-01

    Most previous studies on dynamic traffic assignment are based on a traditional analytical framework; for instance, the idea of Dynamic User Equilibrium has been widely used to depict both route choice and departure time choice. However, recent studies have demonstrated that dynamic traffic flow assignment depends largely on travelers' degree of rationality, travelers' heterogeneity, and what traffic information the travelers have. In this paper, we develop a new self-adaptive multi-agent model to depict travelers' behavior in dynamic traffic assignment. We use Cumulative Prospect Theory with heterogeneous reference points to capture travelers' bounded rationality, and a reinforcement-learning model to depict travelers' route and departure time choices under imperfect information. We design the evolution rule for travelers' expected arrival time and the traffic flow assignment algorithm. Compared with the traditional model, the self-adaptive multi-agent model proposed in this paper can effectively help travelers avoid the rush hour. Finally, we report and analyze the effect of travelers' group behavior on the transportation system, and give some insights into the relation between travelers' group behavior and the performance of the transportation system.
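    The bounded-rationality ingredient of such models, Cumulative Prospect Theory, can be illustrated with the standard Tversky-Kahneman functional forms. The parameter values below are the classic 1992 estimates, not the ones fitted in this paper, and the travel-time prospect is invented.

```python
# Minimal sketch of prospect-theoretic valuation of travel outcomes:
# gains and losses are coded relative to a traveler's reference point,
# using the standard Tversky-Kahneman value and probability-weighting
# functions.

ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss-aversion coefficient
GAMMA = 0.61   # probability-weighting curvature

def value(outcome, reference):
    """Subjective value of an outcome relative to a reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

def weight(p):
    """Inverse-S-shaped probability weighting."""
    num = p ** GAMMA
    return num / ((num + (1.0 - p) ** GAMMA) ** (1.0 / GAMMA))

# A symmetric gamble in minutes of travel-time saving: loss aversion
# makes its prospect value negative even though the gamble is fair.
prospect = [(10.0, 0.5), (-10.0, 0.5)]
cpt_value = sum(weight(p) * value(x, 0.0) for x, p in prospect)
print(round(cpt_value, 3))
```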

  11. Industrial Adoption of Model-Based Systems Engineering: Challenges and Strategies

    NASA Astrophysics Data System (ADS)

    Maheshwari, Apoorv

    As design teams become more globally integrated, one of the biggest challenges is efficient communication across the team. The increasing complexity and multi-disciplinary nature of products also make it difficult to keep track of all the information generated during the design process by these global team members. Systems engineers have identified Model-Based Systems Engineering (MBSE) as a possible solution, in which emphasis is placed on applying visual modeling methods and best practices to systems engineering (SE) activities from the beginning of the conceptual design phase through the end of the product lifecycle. Despite several advantages, multiple challenges restrict the adoption of MBSE by industry. We mainly consider the following two: a) industry perceives MBSE as just a diagramming tool and sees little value in it; b) industrial adopters are skeptical that products developed using an MBSE approach will be accepted by regulatory bodies. To provide counter-evidence to the former challenge, we developed a generic framework for translation from an MBSE tool (Systems Modeling Language, SysML) to an analysis tool (Agent-Based Modeling, ABM). The translation is demonstrated using a simplified air traffic management problem and illustrates a potentially significant value: the ability to use MBSE representations directly in an analysis setting. For the latter challenge, we are developing a reference model that uses SysML to represent a generic infusion pump and the SE process for planning, developing, and obtaining regulatory approval of a medical device. This reference model demonstrates how regulatory requirements can be captured effectively through model-based representations. We conclude with a further case study in which the knowledge gained from both efforts is applied to a UAV design problem.

  12. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    PubMed

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and the other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] were required with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
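    The solver itself can be sketched compactly. The sketch below uses a plain diagonal (Jacobi) preconditioner rather than the paper's diagonal blocks, and a dense toy matrix in place of iteration on data; note that only a few work vectors of the size of the unknowns need to be kept in memory.

```python
# Preconditioned conjugate gradient for A x = b with a diagonal
# preconditioner. In an iteration-on-data setting, the product A @ p is
# the step that is formed by one pass over the data each round.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)     # diagonal (Jacobi) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                # residual
    z = M_inv * r                # preconditioned residual
    p = z.copy()                 # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p               # one "pass over the data" per round
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b))  # approx [0.0909, 0.6364]
```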

  13. Neural Flight Control System

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2003-01-01

    The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles, and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural-network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data are compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.
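    The reference-model-plus-PI-error-controller loop described above can be caricatured with first-order blocks. All gains, time constants, and the single-axis plant below are illustrative stand-ins, not the NFCS design.

```python
# Toy single-axis sketch of the control flow: a reference model maps
# stick input to a rate command, the measured rate is compared to it, and
# a proportional/integral (PI) error controller "winds up" on the error
# to augment the command toward zero tracking error.

DT = 0.01  # integration step, s

class ReferenceModel:
    """First-order filter: stick input -> smoothed rate command."""
    def __init__(self, tau=0.3, gain=1.0):
        self.tau, self.gain, self.cmd = tau, gain, 0.0
    def step(self, stick):
        self.cmd += DT * (self.gain * stick - self.cmd) / self.tau
        return self.cmd

class PIController:
    def __init__(self, kp=2.0, ki=1.0):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def step(self, error):
        self.integral += DT * error    # the integrator that "winds up"
        return self.kp * error + self.ki * self.integral

ref = ReferenceModel()
pi = PIController()
rate = 0.0  # measured rate of a crude first-order plant (lag 0.5 s)
for _ in range(1000):                  # 10 s of simulated flight
    cmd = ref.step(stick=1.0)
    augmented = cmd + pi.step(cmd - rate)
    rate += DT * (augmented - rate) / 0.5
print(round(rate, 3))  # settles near the commanded rate of 1.0
```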

  14. A no-reference bitstream-based perceptual model for video quality estimation of videos affected by coding artifacts and packet losses

    NASA Astrophysics Data System (ADS)

    Pandremmenou, K.; Shahid, M.; Kondi, L. P.; Lövström, B.

    2015-03-01

    In this work, we propose a No-Reference (NR) bitstream-based model for predicting the quality of H.264/AVC video sequences affected by both compression artifacts and transmission impairments. The proposed model is based on a feature extraction procedure, in which a large number of features are calculated from the packet-loss-impaired bitstream. Many of the features are proposed for the first time in this work, and the specific feature set as a whole is applied for the first time to NR video quality prediction. All feature observations are taken as input to the Least Absolute Shrinkage and Selection Operator (LASSO) regression method. LASSO indicates the most important features, and using only those, it is possible to estimate the Mean Opinion Score (MOS) with high accuracy. Notably, only 13 features are needed to produce a Pearson Correlation Coefficient of 0.92 with the MOS. Interestingly, the performance statistics we computed in order to assess our method for predicting the Structural Similarity Index and the Video Quality Metric are equally good. Thus, the experimental results verified the suitability of the features selected by LASSO as well as the ability of LASSO to make accurate predictions through sparse modeling.
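    LASSO's role here, driving most feature coefficients exactly to zero, can be demonstrated on synthetic data. The proximal-gradient (ISTA) solver and the toy features below are stand-ins; they are not the paper's feature set or software.

```python
# LASSO-driven feature selection sketch: many candidate features are
# regressed against quality scores, and the L1 penalty sets most
# coefficients exactly to zero.
import numpy as np

def lasso_ista(X, y, lam, n_iter=5000):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = w - X.T @ (X @ w - y) / L                          # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))     # 200 clips, 30 candidate features
w_true = np.zeros(30)
w_true[:3] = [2.0, -1.5, 1.0]          # only three features truly matter
y = X @ w_true + 0.01 * rng.standard_normal(200)
w = lasso_ista(X, y, lam=5.0)
selected = np.nonzero(np.abs(w) > 1e-6)[0]
print(selected)                        # a small subset of the 30 features
```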

  15. Automatic left-atrial segmentation from cardiac 3D ultrasound: a dual-chamber model-based approach

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno; Sarvari, Sebastian I.; Orderud, Fredrik; Gérard, Olivier; D'hooge, Jan; Samset, Eigil

    2016-04-01

    In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03+/-0.6 mm). The AV plane was detected with an accuracy of -0.6+/-1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean +/-1.96 SD): 0.4+/-5.3 ml, 2.1+/-12.6 ml, and 1.5+/-7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.
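    The Kalman-filter fitting step can be illustrated in its scalar form: predict the wall position, then blend in each detected edge according to the relative uncertainties. The noise values and measurements below are invented, and the real framework updates a full parametric surface rather than a single scalar.

```python
# Scalar analogue of Kalman-filter model fitting: one tracked endocardial
# wall position is predicted with a random-walk model and corrected by
# noisy edge detections.

def kalman_step(x, P, z, q=0.01, r=0.25):
    """One predict/update cycle for a random-walk state model."""
    # Predict: state persists, uncertainty grows by process noise q.
    P = P + q
    # Update: blend prediction with edge measurement z (variance r).
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

x, P = 0.0, 1.0                      # initial wall position (mm) and variance
for z in [1.1, 0.9, 1.05, 0.95]:     # noisy edge detections
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))      # estimate moves toward ~1.0, variance shrinks
```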

  16. Effect of Time Varying Gravity on DORIS processing for ITRF2013

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Lemoine, F. G.; Chinn, D. S.; Beall, J. W.; Melachroinos, S. A.; Beckley, B. D.; Pavlis, D.; Wimert, J.

    2013-12-01

    Computations are under way to develop a new time series of DORIS SINEX solutions to contribute to the development of the new realization of the terrestrial reference frame (cf. ITRF2013). One envisaged improvement is the application of improved models of time-variable gravity (TVG) in the background orbit modeling. At GSFC we have developed a time series of spherical harmonics to degree and order 5 (using the GOCO02S model as a base), based on the processing of SLR and DORIS data from 14 satellites from 1993 to 2013. This is compared with the standard approach used in ITRF2008, based on the static model EIGEN-GL04S1, which included secular variations in only a few select coefficients. Previous work on altimeter satellite precise orbit determination (POD) (cf. TOPEX/Poseidon, Jason-1, Jason-2) has shown that the standard model is not adequate and that orbit improvements are obtained with more detailed models of time-variable gravity. In this study, we quantify the impact of TVG modeling on DORIS satellite POD and ascertain the impact on DORIS station positions estimated weekly from 1993 to 2013. The numerous recent improvements to SLR and DORIS processing at GSFC include more complete compliance with IERS2010 standards, improvements to SLR/DORIS measurement modeling, and improved non-conservative force modeling for DORIS satellites. These improvements will affect gravity coefficient estimates, POD, and the station solutions. Tests evaluate the impact of time-varying gravity on tracking data residuals, station consistency, and the geocenter and scale reference frame parameters.

  17. Comparison of CME/Shock Propagation Models with Heliospheric Imaging and In Situ Observations

    NASA Astrophysics Data System (ADS)

    Zhao, Xinhua; Liu, Ying D.; Inhester, Bernd; Feng, Xueshang; Wiegelmann, Thomas; Lu, Lei

    2016-10-01

    The prediction of the arrival time for fast coronal mass ejections (CMEs) and their associated shocks is highly desirable in space weather studies. In this paper, we use two shock propagation models, i.e., Data Guided Shock Time Of Arrival (DGSTOA) and Data Guided Shock Propagation Model (DGSPM), to predict the kinematical evolution of interplanetary shocks associated with fast CMEs. DGSTOA is based on the similarity theory of shock waves in the solar wind reference frame, and DGSPM is based on the non-similarity theory in the stationary reference frame. The inputs are the kinematics of the CME front at the maximum speed moment obtained from the geometric triangulation method applied to STEREO imaging observations together with the Harmonic Mean approximation. The outputs provide the subsequent propagation of the associated shock. We apply these models to the CMEs on 2012 January 19, January 23, and March 7. We find that the shock models predict reasonably well the shock’s propagation after the impulsive acceleration. The shock’s arrival time and local propagation speed at Earth predicted by these models are consistent with in situ measurements of WIND. We also employ the Drag-Based Model (DBM) as a comparison, and find that it predicts a steeper deceleration than the shock models after the rapid deceleration phase. The predictions of DBM at 1 au agree with the following ICME or sheath structure, not the preceding shock. These results demonstrate the applicability of the shock models used here for future arrival time prediction of interplanetary shocks associated with fast CMEs.
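    The Drag-Based Model used for comparison has a simple quadratic-drag equation of motion, dv/dt = -gamma (v - w)|v - w|, which is easy to integrate numerically. The drag parameter, ambient solar-wind speed, and initial CME state below are typical published magnitudes, not values fitted to these events.

```python
# Minimal DBM integration: the CME front decelerates toward the ambient
# solar-wind speed w under quadratic drag until it reaches 1 au.

AU_KM = 1.496e8        # astronomical unit in km
RSUN_KM = 6.96e5       # solar radius in km

def dbm_arrival(r0_km, v0_kms, w_kms=400.0, gamma_per_km=0.2e-7, dt_s=60.0):
    """Integrate dv/dt = -gamma*(v-w)*|v-w|; return transit days, speed at 1 au."""
    r, v, t = r0_km, v0_kms, 0.0
    while r < AU_KM:
        dv = v - w_kms
        v += -gamma_per_km * dv * abs(dv) * dt_s   # quadratic drag
        r += v * dt_s
        t += dt_s
    return t / 86400.0, v

# A 2000 km/s CME starting at 20 solar radii.
days, v_1au = dbm_arrival(r0_km=20.0 * RSUN_KM, v0_kms=2000.0)
print(round(days, 2), round(v_1au))  # fast CME arrives in a few days, decelerated toward w
```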

  18. Development of a clinical pharmacy model within an Australian home nursing service using co-creation and participatory action research: the Visiting Pharmacist (ViP) study.

    PubMed

    Elliott, Rohan A; Lee, Cik Yin; Beanland, Christine; Goeman, Dianne P; Petrie, Neil; Petrie, Barbara; Vise, Felicity; Gray, June

    2017-11-03

    To develop a collaborative, person-centred model of clinical pharmacy support for community nurses and their medication management clients. Co-creation and participatory action research, based on reflection, data collection, interaction and feedback from participants and other stakeholders. A large, non-profit home nursing service in Melbourne, Australia. Older people referred to the home nursing service for medication management, their carers, community nurses, general practitioners (GPs) and pharmacists, a multidisciplinary stakeholder reference group (including consumer representation) and the project team. Feedback and reflections from minutes, notes and transcripts from: project team meetings, clinical pharmacists' reflective diaries and interviews, meetings with community nurses, reference group meetings and interviews and focus groups with 27 older people, 18 carers, 53 nurses, 15 GPs and seven community pharmacists. The model was based on best practice medication management standards and designed to address key medication management issues raised by stakeholders. Pharmacist roles included direct client care and indirect care. Direct care included home visits, medication reconciliation, medication review, medication regimen simplification, preparation of medication lists for clients and nurses, liaison and information sharing with prescribers and pharmacies and patient/carer education. Indirect care included providing medicines information and education for nurses and assisting with review and implementation of organisational medication policies and procedures. The model allowed nurses to refer directly to the pharmacist, enabling timely resolution of medication issues. Direct care was provided to 84 older people over a 15-month implementation period. Ongoing feedback and consultation, in line with participatory action research principles, informed the development and refinement of the model and identification of enablers and challenges. 
A collaborative, person-centred clinical pharmacy model that addressed the needs of clients, carers, nurses and other stakeholders was successfully developed. The model is likely to have applicability to home nursing services nationally and internationally. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. Macromolecule mapping of the brain using ultrashort-TE acquisition and reference-based metabolite removal.

    PubMed

    Lam, Fan; Li, Yudu; Clifford, Bryan; Liang, Zhi-Pei

    2018-05-01

    To develop a practical method for mapping macromolecule distribution in the brain using ultrashort-TE MRSI data. An FID-based chemical shift imaging acquisition without metabolite-nulling pulses was used to acquire ultrashort-TE MRSI data that capture the macromolecule signals with high signal-to-noise-ratio (SNR) efficiency. To remove the metabolite signals from the ultrashort-TE data, single voxel spectroscopy data were obtained to determine a set of high-quality metabolite reference spectra. These spectra were then incorporated into a generalized series (GS) model to represent general metabolite spatiospectral distributions. A time-segmented algorithm was developed to back-extrapolate the GS model-based metabolite distribution from truncated FIDs and remove it from the MRSI data. Numerical simulations and in vivo experiments have been performed to evaluate the proposed method. Simulation results demonstrate accurate metabolite signal extrapolation by the proposed method given a high-quality reference. For in vivo experiments, the proposed method is able to produce spatiospectral distributions of macromolecules in the brain with high SNR from data acquired in about 10 minutes. We further demonstrate that the high-dimensional macromolecule spatiospectral distribution resides in a low-dimensional subspace. This finding provides a new opportunity to use subspace models for quantification and accelerated macromolecule mapping. Robustness of the proposed method is also demonstrated using multiple data sets from the same and different subjects. The proposed method is able to obtain macromolecule distributions in the brain from ultrashort-TE acquisitions. It can also be used for acquiring training data to determine a low-dimensional subspace to represent the macromolecule signals for subspace-based MRSI. Magn Reson Med 79:2460-2469, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Effect of defuzzification method on fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints: for any support value the sum of the reference membership function values equals one, and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
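    The contrast between the two defuzzification schemes is easy to reproduce numerically. The sketch below uses three triangular reference sets on [0, 10] satisfying the stated constraints (memberships summing to one, marginal peaks at the boundaries); the firing strengths are illustrative.

```python
# Center of area (COA) over the clipped, max-aggregated membership
# function versus the weighted average of cluster centers (WACC), which
# uses only the peak locations. COA cannot reach the boundary of the
# universe of discourse even when only the marginal set fires.
import numpy as np

centers = np.array([0.0, 5.0, 10.0])   # peaks of the triangular sets
WIDTH = 5.0                            # half-width: adjacent sets sum to 1

def tri(x, c):
    return np.maximum(0.0, 1.0 - np.abs(x - c) / WIDTH)

def coa(firing, n=2001):
    """Center of area of the aggregated output membership function."""
    x = np.linspace(0.0, 10.0, n)
    mu = np.max([np.minimum(f, tri(x, c)) for f, c in zip(firing, centers)], axis=0)
    return float((mu * x).sum() / mu.sum())

def wacc(firing):
    """Weighted average of cluster centers: no integration needed."""
    return float(np.dot(firing, centers) / np.sum(firing))

firing = [0.0, 0.0, 1.0]      # only the rightmost set fires, fully
print(wacc(firing))           # 10.0: reaches the boundary
print(round(coa(firing), 2))  # about 8.33: COA falls short of the limit value
```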

  1. Retrieval of Venus' cloud parameters from VIRTIS nightside spectra in the latitude band 25°-55°N

    NASA Astrophysics Data System (ADS)

    Magurno, Davide; Maestri, Tiziano; Grassi, Davide; Piccioni, Giuseppe; Sindoni, Giuseppe

    2017-09-01

    Two years of data from the M-channel of the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS), on board the European Space Agency mission Venus Express operating around the planet Venus, are analysed. Nocturnal data from a nadir viewpoint in the latitude band 25°N-55°N are selected for their configuration advantages and maximisation of the scene homogeneity. A reference model, and radiance spectrum, is defined based on average accepted values of the Venus main atmospheric and cloud parameters found in the literature. Extensive radiative transfer simulations are performed to provide a synthetic database of more than 10 000 VIRTIS radiances representing the natural variability of the system parameters (atmospheric temperature profile, cloud H2O-H2SO4 solution concentration and vertical distribution, particle size distribution density and modal radius). A simulated-observed fitting algorithm of spectral radiances in window channels, based on a weighting procedure accounting for the latitudinal observed radiance variations, is used to derive the best atmosphere-cloud configuration for each observation. Results show that the reference Venus model does not adequately reproduce the observed VIRTIS spectra. In particular, the model accounting for a constant sulphuric acid concentration along the vertical extent of the clouds is never selected as a best fit. The 75%/96% and 84%/96% concentrations (the first values refer to the upper cloud layers and the second values to the lower ones) are the most commonly retrieved models, representing more than 85% of the retrieved cases for any latitudinal band considered. It is shown that the assumption of stratified concentration of aqueous sulphuric acid allows the observed radiance to be fitted adequately, in particular the peak at 1.74 μm and around 4 μm. 
The analysis of the results concerning the microphysics suggests larger radii for the upper cloud layers in conjunction with a large reduction of their number density with respect to the reference standard. Considerable variation of the particle concentration in the Venus' atmosphere is retrieved for altitudes between 60 and 70 km. The retrieved models also suggest that lower cloud layers have smaller particle radii and larger number density than expected from the reference model. Latitudinal variations of microphysical and chemical parameters are also analysed.

  2. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that requires the manual segmentation of only one MR image. It is based on a volumetric active appearance model. First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes have the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  3. Accuracy and Specific Value of Cardiovascular 3D-Models in Pediatric CT-Angiography.

    PubMed

    Hammon, Matthias; Rompel, Oliver; Seuss, Hannes; Dittrich, Sven; Uder, Michael; Rüffer, Andrè; Cesnjevar, Robert; Ehret, Nicole; Glöckler, Martin

    2017-12-01

    Computed tomography (CT) angiography is routinely performed prior to catheter-based and surgical treatment of congenital heart disease. To date, little is known about the accuracy and advantages of different 3D reconstructions of CT data. Exact anatomical information is crucial. We analyzed 35 consecutive CT angiographies of infants with congenital heart disease. All datasets were reconstructed three-dimensionally using the volume rendering technique (VRT) and threshold-based segmentation (stereolithographic model, STL). Additionally, maximum intensity projection (MIP) was used to obtain two-dimensional reconstructions. In each dataset and resulting image, the vascular diameters of four different vessels were measured and compared to the reference standard, measured via multiplanar reformation (MPR). The measurements obtained via the STL, MIP, and VRT images were compared with the reference standard. There were significant differences (p < 0.05) between measurements. The mean difference was 0.0 mm for STL images, -0.1 mm for MIP images, and -0.3 mm for VRT images. The range of the differences was -0.7 to 1.0 mm for STL images, -0.6 to 0.5 mm for MIP images and -1.1 to 0.7 mm for VRT images. There was an excellent correlation between the STL, MIP, and VRT measurements and the reference standard. Inter-reader reliability was excellent (p < 0.01). STL models of cardiovascular structures are more accurate than traditional VRT models. Additionally, they can be standardized and are reproducible.

  4. Standard solar model

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Kim, Y.-C.; Pinsonneault, M. H.

    1992-01-01

    A set of solar models has been constructed, each based on a single modification to the physics of a reference solar model. In addition, a model combining several of the improvements has been calculated to provide a best solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The impact of these improvements on both the structure of the model and the frequencies of its low-l p-modes is discussed. It is found that the combined solar model, which is based on the best physics available (and does not contain any ad hoc assumptions), reproduces the observed oscillation spectrum (for low l) within the errors associated with the uncertainties in the model physics (primarily the opacities).

  5. Reference equations of motion for automatic rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Henderson, David M.

    1992-01-01

    The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements of these control parameters would become the basis for an autopilot or feedback control system (FCS) design for a specific spacecraft.

  6. Research on energy efficiency design index for sea-going LNG carriers

    NASA Astrophysics Data System (ADS)

    Lin, Yan; Yu, Yanyun; Guan, Guan

    2014-12-01

    This paper briefly describes the characteristics of liquefied natural gas (LNG) carriers, including power plant selection, vapor treatment, and liquid cargo tank type. Two parameters, the fuel substitution rate and the recovered boil-off gas (BOG) volume, are added to the ship energy efficiency design index (EEDI) formula, and an EEDI formula for LNG carriers is thereby established. Then, based on the steam-turbine propulsion plant of LNG carriers, mathematical models of the LNG carriers' reference line value are established in this paper. By verification, the EEDI formula for LNG carriers described in this paper can provide a reference for LNG carrier EEDI calculation and green shipbuilding.

  7. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique)

    PubMed Central

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue, resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity and (b) determination of the reference as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting-state EEGs. Toward this end, we leverage the MRI and EEG data of 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. The simulations also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the “oracle” choice based on the ground truth. When evaluated on the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. 
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
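The regularization machinery the abstract describes (a ridge-type MAP estimate with the penalty chosen by Generalized Cross-Validation) can be illustrated on a generic linear inverse problem. This is a minimal sketch, not the authors' implementation; all data and names here are made up:

```python
import numpy as np

def ridge_gcv(X, y, lambdas):
    """Pick the ridge penalty minimizing Generalized Cross-Validation.

    GCV(lam) = n * ||y - H y||^2 / tr(I - H)^2, where the hat matrix is
    H = X (X^T X + lam I)^{-1} X^T.  Returns (best_lambda, gcv_scores).
    """
    n, p = X.shape
    scores = []
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        resid = y - H @ y
        scores.append(n * resid @ resid / np.trace(np.eye(n) - H) ** 2)
    scores = np.array(scores)
    return lambdas[int(np.argmin(scores))], scores

# Synthetic linear inverse problem with additive noise (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))
beta = rng.standard_normal(8)
y = X @ beta + 0.5 * rng.standard_normal(60)
lambdas = np.logspace(-3, 3, 25)
best_lam, scores = ridge_gcv(X, y, lambdas)
```

In the paper's setting the penalty has a physical meaning (the noise-to-signal variance ratio), so the GCV minimum can be compared against that "oracle" value.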

  8. A Computer Model for Evaluating the Effects on Fighting Vehicle Crewmembers of Exposure to Carbon Monoxide Emissions.

    DTIC Science & Technology

    1980-01-01

KEY WORDS: Carbon Monoxide (CO); Computer Program; Carboxyhemoglobin. ...several researchers, which predicts the instantaneous amount of carboxyhemoglobin (COHb) in the blood of a person based upon the amount of carbon monoxide...developed from an empirical equation (derived from reference 1 and detailed in reference 3) which predicts the amount of carboxyhemoglobin (COHb) in

  9. The International Reference Ionosphere - Status 2013

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter

    2015-04-01

    This paper describes the latest version of the International Reference Ionosphere (IRI) model. IRI-2012 includes new models for the electron density and ion densities in the region below the F-peak, a storm-time model for the auroral E-region, an improved electron temperature model that includes variations with solar activity, and for the first time a description of auroral boundaries. In addition, the thermosphere model required for baseline neutral densities and temperatures was upgraded from MSIS-86 to the newer NRLMSIS-00 model and Corrected Geomagnetic coordinates (CGM) were included in IRI as an additional coordinate system for a better representation of auroral and polar latitudes. Ongoing IRI activities towards the inclusion of an improved model for the F2 peak height hmF2 are discussed as are efforts to develop a "Real-Time IRI". The paper is based on an IRI status report presented at the 2013 IRI Workshop in Olsztyn, Poland. The IRI homepage is at

  10. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid

    2018-04-01

Evapotranspiration (ET) is considered a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters; alternatively, empirical equations and artificial intelligence methods can be used to model it. In recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, the local and external performance of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) was assessed for estimating daily ETo. To this end, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), and Yazd and Zahedan (hyper-arid), were employed for 2000-2014. Two types of input patterns, weather data-based and lagged ETo data-based scenarios, were considered to develop the models. Four statistical indicators, root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE), were used to check the accuracy of the models. The local performance of the models revealed that the MARS and GEP approaches are capable of estimating daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. As the innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios by combining the MARS and GEP models with the autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models, named MARS-ARCH and GEP-ARCH, improved the performance of ETo modeling compared to the single MARS and GEP. 
In addition, the external analysis of model performance at stations with similar climatic conditions demonstrated the applicability of nearby stations' data for estimating daily ETo at a target station.
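The four accuracy indicators named in this record (RMSE, MAE, R2, MAPE) are standard statistics. A small self-contained helper, for illustration only and not the study's code:

```python
import numpy as np

def error_metrics(obs, pred):
    """Compute RMSE, MAE, R2 and MAPE for observed vs. predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))            # root mean square error
    mae = np.mean(np.abs(err))                   # mean absolute error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
    mape = 100.0 * np.mean(np.abs(err / obs))    # mean absolute % error
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "MAPE": mape}

# Example with made-up ETo values (mm/day):
metrics = error_metrics([2.1, 3.4, 4.0], [2.0, 3.6, 3.9])
```

Note that MAPE is undefined when an observed value is zero, which is rarely an issue for daily ETo.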

  11. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making objective evaluation results close to those of subjective assessment. We believe that image regions with different degrees of visual salience should not have the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of salience are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.

  12. Development of model reference adaptive control theory for electric power plant control applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabius, L.E.

    1982-09-15

The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., a model-following control law), a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.

  13. Temperamental reactivity and negative emotionality in uncooperative children referred to specialized paediatric dentistry compared to children in ordinary dental care.

    PubMed

    Arnrup, Kristina; Broberg, Anders G; Berggren, Ulf; Bodin, Lennart

    2007-11-01

Current treatment of children with dental behaviour management problems (DBMP) is based on the presupposition that their difficulties are caused by dental fear, but is this always the case? The aim of this study was to examine temperamental reactivity, negative emotionality, and other personal characteristics in relation to DBMP in 8- to 12-year-old children. Forty-six children referred because of DBMP (study group) and 110 children in ordinary dental care (reference group) participated. The EASI temperamental survey assessed temperamental reactivity and negative emotionality, the Child Behaviour Questionnaire assessed internalizing and externalizing behaviour problems, and the Children's Fear Survey Schedule assessed general and dental fears. Cluster analyses and tree-based modelling were used for data analysis. Among the five clusters identified, one could be characterized as 'balanced temperament'. Thirty-five per cent of the reference group compared to only 7% of the study group belonged to this cluster. Negative emotionality was the most important sorting variable. Children referred because of DBMP differed from children in ordinary dental care, not only in dental fear level, but also in personal characteristics. Few of the referred children were characterized by a balanced temperament profile. It is important to consider the dual impact of emotion dysregulation and emotional reactivity in the development of DBMP.

  14. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken

This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANN). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P and O) algorithm. (author)

  15. Seasonal drought ensemble predictions based on multiple climate models in the upper Han River Basin, China

    NASA Astrophysics Data System (ADS)

    Ma, Feng; Ye, Aizhong; Duan, Qingyun

    2017-03-01

An experimental seasonal drought forecasting system is developed based on 29 years (1982-2010) of seasonal meteorological hindcasts generated by the climate models from the North American Multi-Model Ensemble (NMME) project. The system makes use of a bias correction and spatial downscaling method and a distributed time-variant gain model (DTVGM) hydrologic model. DTVGM was calibrated using observed daily hydrological data; its streamflow simulations achieved Nash-Sutcliffe efficiency values of 0.727 and 0.724 during the calibration (1978-1995) and validation (1996-2005) periods, respectively, at the Danjiangkou reservoir station. The experimental system (known as NMME-DTVGM) is used to generate seasonal drought forecasts, which were evaluated against reference forecasts (i.e., persistence and climatological forecasts). The NMME-DTVGM drought forecasts have higher detectability and accuracy and a lower false alarm rate than the reference forecasts at different lead times (from 1 to 4 months) during the cold-dry season. No apparent advantage is shown in drought predictions during the spring and summer seasons because of the long memory of the initial conditions in spring and the lower predictive skill for precipitation in summer. Overall, the NMME-based seasonal drought forecasting system has meaningful skill in predicting drought several months in advance, which can provide critical information for drought preparedness and response planning as well as the sustainable practice of water resource conservation over the basin.
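The Nash-Sutcliffe efficiency quoted for the DTVGM calibration is one minus the ratio of the residual sum of squares to the variance of the observations. A minimal sketch (illustrative, not the study's code):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1.0 is a perfect match; 0.0 means the simulation is no better than
    always predicting the observed mean; negative values are worse."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Values around 0.72, as reported here, mean the simulated streamflow removes roughly 72% of the error variance of the mean-flow benchmark.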

  16. ESSG-based global spatial reference frame for datasets interrelation

    NASA Astrophysics Data System (ADS)

    Yu, J. Q.; Wu, L. X.; Jia, Y. J.

    2013-10-01

To understand the highly complex Earth system, a large volume, as well as a large variety, of datasets on the planet Earth are being obtained, distributed, and shared worldwide every day. However, few existing systems concentrate on the distribution and interrelation of different datasets in a common Global Spatial Reference Frame (GSRF), which poses an invisible obstacle to data sharing and scientific collaboration. The Group on Earth Observations (GEO) has recently established a new GSRF, named the Earth System Spatial Grid (ESSG), for global dataset distribution, sharing and interrelation in its 2012-2015 Work Plan. The ESSG may bridge the gap among different spatial datasets and hence overcome these obstacles. This paper presents the implementation of the ESSG-based GSRF. A reference spheroid, a grid subdivision scheme, and a suitable encoding system are required to implement it. The radius of the ESSG reference spheroid was set to double the approximate Earth radius so that datasets from different areas of earth system science are covered. The same positioning and orientation parameters as Earth-Centred Earth-Fixed (ECEF) were adopted for the ESSG reference spheroid so that any other GSRF can be freely transformed into the ESSG-based GSRF. The spheroid degenerated octree grid with radius refinement (SDOG-R) and its encoding method were taken as the grid subdivision and encoding scheme for their good performance in many respects. A triple (C, T, A) model is introduced to represent and link different datasets based on the ESSG-based GSRF. Finally, methods of coordinate transformation between the ESSG-based GSRF and other GSRFs are presented to make the ESSG-based GSRF operable and transferable.

  17. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility.

    PubMed

    Marcos, Mar; Maldonado, Jose A; Martínez-Salvador, Begoña; Boscá, Diego; Robles, Montserrat

    2013-08-01

Clinical decision-support systems (CDSSs) comprise systems as diverse as sophisticated platforms to store and manage clinical data, tools to alert clinicians of problematic situations, and decision-making tools to assist clinicians. Irrespective of the kind of decision-support task, CDSSs should be smoothly integrated within the clinical information system, interacting with other components, in particular with the electronic health record (EHR). However, despite decades of developments, most CDSSs lack interoperability features. We deal with the interoperability problem of CDSSs and EHRs by exploiting the dual-model methodology. This methodology distinguishes a reference model and archetypes. A reference model is represented by a stable and small object-oriented model that describes the generic properties of health record information. For their part, archetypes are reusable and domain-specific definitions of clinical concepts in the form of structured and constrained combinations of the entities of the reference model. We rely on archetypes to make the CDSS compatible with EHRs from different institutions. Concretely, we use archetypes for modelling the clinical concepts that the CDSS requires, in conjunction with a series of knowledge-intensive mappings relating the archetypes to the data sources (EHR and/or other archetypes) they depend on. We introduce a comprehensive approach, including a set of tools as well as methodological guidelines, to deal with the interoperability of CDSSs and EHRs based on archetypes. Archetypes are used to build a conceptual layer, in the form of a virtual health record (VHR), over the EHR whose contents need to be integrated and used in the CDSS, associating them with structural and terminology-based semantics. Subsequently, the archetypes are mapped to the EHR by means of an expressive mapping language and specific-purpose tools. 
We also describe a case study where the tools and methodology have been employed in a CDSS to support patient recruitment in the framework of a clinical trial for colorectal cancer screening. The utilisation of archetypes not only has proved satisfactory to achieve interoperability between CDSSs and EHRs but also offers various advantages, in particular from a data model perspective. First, the VHR/data models we work with are of a high level of abstraction and can incorporate semantic descriptions. Second, archetypes can potentially deal with different EHR architectures, due to their deliberate independence of the reference model. Third, the archetype instances we obtain are valid instances of the underlying reference model, which would enable e.g. feeding back the EHR with data derived by abstraction mechanisms. Lastly, the medical and technical validity of archetype models would be assured, since in principle clinicians should be the main actors in their development. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. The importance of different frequency bands in predicting subcutaneous glucose concentration in type 1 diabetic patients.

    PubMed

    Lu, Yinghui; Gribok, Andrei V; Ward, W Kenneth; Reifman, Jaques

    2010-08-01

    We investigated the relative importance and predictive power of different frequency bands of subcutaneous glucose signals for the short-term (0-50 min) forecasting of glucose concentrations in type 1 diabetic patients with data-driven autoregressive (AR) models. The study data consisted of minute-by-minute glucose signals collected from nine deidentified patients over a five-day period using continuous glucose monitoring devices. AR models were developed using single and pairwise combinations of frequency bands of the glucose signal and compared with a reference model including all bands. The results suggest that: for open-loop applications, there is no need to explicitly represent exogenous inputs, such as meals and insulin intake, in AR models; models based on a single-frequency band, with periods between 60-120 min and 150-500 min, yield good predictive power (error <3 mg/dL) for prediction horizons of up to 25 min; models based on pairs of bands produce predictions that are indistinguishable from those of the reference model as long as the 60-120 min period band is included; and AR models can be developed on signals of short length (approximately 300 min), i.e., ignoring long circadian rhythms, without any detriment in prediction accuracy. Together, these findings provide insights into efficient development of more effective and parsimonious data-driven models for short-term prediction of glucose concentrations in diabetic patients.
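An autoregressive model of the kind described, fit by least squares and iterated out to a chosen prediction horizon, can be sketched as follows. This is an illustrative toy, not the authors' implementation; real use would fit on the band-filtered glucose signals:

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares fit of AR coefficients: x[t] ~ sum_k a[k] * x[t-k-1]."""
    series = np.asarray(series, float)
    X = np.column_stack([series[order - k - 1: len(series) - k - 1]
                         for k in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(series, coeffs, horizon):
    """Iterate one-step AR predictions out to the given horizon."""
    hist = list(np.asarray(series, float)[-len(coeffs):])
    out = []
    for _ in range(horizon):
        nxt = sum(c * h for c, h in zip(coeffs, reversed(hist)))
        out.append(nxt)
        hist.append(nxt)  # feed the prediction back in
    return out
```

For minute-by-minute glucose data, a 25-minute horizon corresponds to `forecast(signal, coeffs, 25)`.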

  19. Review of behavioral health integration in primary care at Baylor Scott and White Healthcare, Central Region

    PubMed Central

    Fluet, Norman R.; Reis, Michael D.; Stern, Charles H.; Thompson, Alexander W.; Jolly, Gillian A.

    2016-01-01

    The integration of behavioral health services in primary care has been referred to in many ways, but ultimately refers to common structures and processes. Behavioral health is integrated into primary care because it increases the effectiveness and efficiency of providing care and reduces costs in the care of primary care patients. Reimbursement is one factor, if not the main factor, that determines the level of integration that can be achieved. The federal health reform agenda supports changes that will eventually permit behavioral health to be fully integrated and will allow the health of the population to be the primary target of intervention. In an effort to develop more integrated services at Baylor Scott and White Healthcare, models of integration are reviewed and the advantages and disadvantages of each model are discussed. Recommendations to increase integration include adopting a disease management model with care management, planned guideline-based stepped care, follow-up, and treatment monitoring. Population-based interventions can be completed at the pace of the development of alternative reimbursement methods. The program should be based upon patient-centered medical home standards, and research is needed throughout the program development process. PMID:27034543

  20. A Model-Based Architecture Approach to Ship Design Linking Capability Needs to System Solutions

    DTIC Science & Technology

    2012-06-01

Acronyms: NSSM (NATO Sea Sparrow Missile); RAM (Rolling Airframe Missile); CIWS (Close-In Weapon System); 3D (Three Dimensional); Ps (Probability of Survival); PHit (Probability of Hit). ...example effectiveness model. The primary MOP is the inverse of the probability of taking a hit (1 - PHit), which, in this study, will be referred to as

  1. Localisation, Globalisation and SMEs in European Tourism: The "Virtual Enterprise" Model of Intervention.

    ERIC Educational Resources Information Center

    Davenport, Elisabeth

    2000-01-01

    Discussion of the effect of globalization on SMEs (small and medium enterprises) in Europe focuses on a case study of a current European Commission (EC) project, Net Quality, which is based on the virtual enterprise as an intervention model that may encourage small businesses to cooperate in strategic ventures. (Contains 29 references.)…

  2. IMPACTT5A model : enhancements and modifications since December 1994 with special reference to the effect of tripled-fuel-economy vehicles on fuel-cycle energy and emissions

    DOT National Transportation Integrated Search

    1998-09-01

Version 5A of the Integrated Market Penetration and Anticipated Cost of Transportation Technologies (IMPACTT5A) model is a spreadsheet-based set of algorithms that calculates the effects of advanced-technology vehicles on baseline fuel use and emi...

  3. Improving Teaching and Learning in Higher Education: Metaphors and Models for Partnership Consultancy

    ERIC Educational Resources Information Center

    Morrison, Keith

    2003-01-01

    The management of partnerships with external consultants is discussed with reference to seven metaphors of partnership, illuminated by an external consultancy review of teaching and learning in a University Language Centre. Shortcomings are shown in each of the seven metaphors. A model of partnership is advocated, based on Habermas' principles of…

  4. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and, a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)

  5. The Development Model of Knowledge Management via Web-Based Learning to Enhance Pre-Service Teacher's Competency

    ERIC Educational Resources Information Center

    Rampai, Nattaphon; Sopeerak, Saroch

    2011-01-01

    This research explores that the model of knowledge management and web technology for teachers' professional development as well as its impact in the classroom on learning and teaching, especially in pre-service teacher's competency and practices that refer to knowledge creating, analyzing, nurturing, disseminating, and optimizing process as part…

  6. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
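For flavor, the textbook sample-size calculation for detecting a difference between two independent proportions is sketched below. The paper derives a more refined version that also accounts for reference-standard quality; this simplified unpooled form is only an illustration:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Subjects per arm to detect a difference between accuracies p1 and p2
    with a two-sided level-alpha test at the given power (unpooled variance).

    n = (z_{1-alpha/2} + z_{power})^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)
```

As expected, the required sample size grows rapidly as the detectable accuracy difference shrinks, which is why reference-standard quality matters so much in study budgeting.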

  7. MQAPRank: improved global protein model quality assessment by learning-to-rank.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen

    2017-05-25

Protein structure prediction has achieved a great deal of progress during the last few decades, and a large number of models can now be predicted for a given sequence. Consequently, assessing the quality of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, which can be roughly divided into three categories: single methods, quasi-single methods and clustering (or consensus) methods. Although these methods achieve much success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models using a single-model method based on a learning-to-rank algorithm to indicate their relative qualities for the target protein. It then takes the first five models as references and predicts the qualities of the other models using the average GDT_TS scores between the reference models and the other models. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment with state-of-the-art performance, and it is useful for protein structure prediction and model quality assessment.
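The two-stage scheme described above (rank once with the single-model method, then rescore every decoy against the top five) can be sketched generically, with a pairwise-similarity matrix standing in for GDT_TS values. This is an assumption for illustration, not the FDUBio code:

```python
import numpy as np

def rescore_by_references(similarity, initial_ranking, n_ref=5):
    """Second-stage consensus scoring: each decoy's quality estimate is its
    mean similarity (e.g. pairwise GDT_TS) to the n_ref decoys ranked
    highest by the first-stage single-model ranking."""
    refs = list(initial_ranking[:n_ref])
    return np.asarray(similarity, float)[:, refs].mean(axis=1)

# Toy example: 3 decoys, symmetric similarity matrix, first stage ranks
# decoy 2 best, then 0, then 1.
sim = np.array([[1.0, 0.6, 0.5],
                [0.6, 1.0, 0.4],
                [0.5, 0.4, 1.0]])
scores = rescore_by_references(sim, [2, 0, 1], n_ref=2)
```

Decoys similar to the consensus of top-ranked references get higher scores, which is the usual rationale for such quasi-single methods.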

  8. Diffusion-controlled reference material for VOC emissions testing: proof of concept.

    PubMed

    Cox, S S; Liu, Z; Little, J C; Howard-Reed, C; Nabinger, S J; Persily, A

    2010-10-01

    Because of concerns about indoor air quality, there is growing awareness of the need to reduce the rate at which indoor materials and products emit volatile organic compounds (VOCs). To meet consumer demand for low emitting products, manufacturers are increasingly submitting materials to independent laboratories for emissions testing. However, the same product tested by different laboratories can result in very different emissions profiles because of a general lack of test validation procedures. There is a need for a reference material that can be used as a known emissions source and that will have the same emission rate when tested by different laboratories under the same conditions. A reference material was created by loading toluene into a polymethyl pentene film. A fundamental emissions model was used to predict the toluene emissions profile. Measured VOC emissions profiles using small-chamber emissions tests compared reasonably well to the emissions profile predicted using the emissions model, demonstrating the feasibility of the proposed approach to create a diffusion-controlled reference material. To calibrate emissions test chambers and improve the reproducibility of VOC emission measurements among different laboratories, a reference material has been created using a polymer film loaded with a representative VOC. Initial results show that the film's VOC emission profile measured in a conventional test chamber compares well to predictions based on independently determined material/chemical properties and a fundamental emissions model. The use of such reference materials has the potential to build consensus and confidence in emissions testing as well as 'level the playing field' for product testing laboratories and manufacturers.

  9. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The estimation algorithm is based on loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.

  10. Laboratory methodologies for indicators of iron status: strengths, limitations, and analytical challenges.

    PubMed

    Pfeiffer, Christine M; Looker, Anne C

    2017-12-01

    Biochemical assessment of iron status relies on serum-based indicators, such as serum ferritin (SF), transferrin saturation, and soluble transferrin receptor (sTfR), as well as erythrocyte protoporphyrin. These indicators present challenges for clinical practice and national nutrition surveys, and often iron status interpretation is based on the combination of several indicators. The diagnosis of iron deficiency (ID) through SF concentration, the most commonly used indicator, is complicated by concomitant inflammation. sTfR concentration is an indicator of functional ID that is not an acute-phase reactant, but challenges in its interpretation arise because of the lack of assay standardization, common reference ranges, and common cutoffs. It is unclear which indicators are best suited to assess excess iron status. The value of hepcidin, non-transferrin-bound iron, and reticulocyte indexes is being explored in research settings. Serum-based indicators are generally measured on fully automated clinical analyzers available in most hospitals. Although international reference materials have been available for years, the standardization of immunoassays is complicated by the heterogeneity of antibodies used and the absence of physicochemical reference methods to establish "true" concentrations. From 1988 to 2006, the assessment of iron status in NHANES was based on the multi-indicator ferritin model. However, the model did not indicate the severity of ID and produced categorical estimates. More recently, iron status assessment in NHANES has used the total body iron stores (TBI) model, in which the log ratio of sTfR to SF is assessed. Together, sTfR and SF concentrations cover the full range of iron status. The TBI model better predicts the absence of bone marrow iron than SF concentration alone, and TBI can be analyzed as a continuous variable. Additional consideration of methodologies, interpretation of indicators, and analytic standardization is important for further improvements in iron status assessment. © 2017 American Society for Nutrition.
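The log-ratio TBI model described above is conventionally computed with the regression of Cook et al. (2003); the coefficients below are quoted from that widely cited paper as an assumption, since this abstract does not state them:

```python
import math

def total_body_iron(stfr_mg_per_l, ferritin_ug_per_l):
    """Total body iron (mg/kg) from the sTfR/SF log-ratio (TBI) model.

    Coefficients follow the widely cited Cook et al. (2003) regression
    (an assumption here, not stated in this abstract). sTfR is converted
    from mg/L to ug/L so both indicators share the same units.
    """
    ratio = (stfr_mg_per_l * 1000.0) / ferritin_ug_per_l
    return -(math.log10(ratio) - 2.8229) / 0.1207

# Positive values indicate iron stores; negative values indicate a deficit.
print(total_body_iron(5.0, 30.0))  # ~4.98 mg/kg
```

Because the result is continuous, severity of deficiency or surplus can be graded rather than dichotomized, which is the advantage the abstract attributes to the TBI model.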

  11. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real-time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
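The internal-model idea behind such designs can be sketched in its simplest time-invariant form: embedding a model of the reference generator in the loop (here an integrator, for a step reference) drives the steady-state tracking error to zero. The first-order plant and the gains below are illustrative assumptions, not the paper's time-varying VCM design:

```python
def simulate_step_tracking(r=1.0, a=0.9, b=0.1, ki=0.5, steps=300):
    """Discrete plant y[k+1] = a*y[k] + b*u[k] under an integral
    controller u[k] = u[k-1] + ki*e[k], i.e. an internal model of a
    step reference. Returns the plant output after `steps` samples."""
    y, u = 0.0, 0.0
    for _ in range(steps):
        e = r - y          # tracking error
        u += ki * e        # integrator = internal model of the step
        y = a * y + b * u  # plant update
    return y

print(simulate_step_tracking())  # converges to the reference r = 1.0
```

For frequency-varying references such as those in the paper, the internal model unit must instead replicate the (time-varying) reference generator, which is what the parallel time-varying structure provides.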

  12. Levelized Cost of Energy Analysis of Marine and Hydrokinetic Reference Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenne, D. S.; Yu, Y. H.; Neary, V.

    2015-04-24

    In 2010 the U.S. Department of Energy initiated the development of six marine energy converter reference models. The reference models are point designs of well-known marine energy converters. Each device was designed to operate in a specific marine resource, rather than being a generic device that can be deployed at any location. This approach allows each device to serve as a benchmark against which future devices can be compared. The six designs consist of three current energy converters and three wave energy converters. The reference model project has generated both technical and economic data sets that are available in the public domain. The methodology to calculate the levelized cost of energy for the reference model project and an overall comparison of the cost of energy from these six reference-model designs are presented in this paper.
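The levelized cost of energy mentioned above is, in its common textbook form, the discounted lifetime cost divided by the discounted lifetime energy production; the definition and numbers below are that generic form, assumed here rather than taken from the paper's detailed cost model:

```python
def lcoe(costs, energy, discount_rate):
    """Levelized cost of energy: present value of per-year costs divided
    by present value of per-year energy. `costs` and `energy` are lists
    indexed from year 0; this is the common textbook LCOE definition,
    not necessarily the reference model project's exact methodology."""
    pv_cost = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    pv_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy))
    return pv_cost / pv_energy

# Illustrative numbers: capital cost in year 0, then O&M while producing.
print(lcoe([100.0, 10.0, 10.0], [0.0, 50.0, 50.0], 0.0))  # 1.2 per unit energy
```

With a nonzero discount rate, later energy production counts for less, which is why early capital-heavy designs tend to have higher LCOE.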

  13. Performance analysis of Supply Chain Management with Supply Chain Operation reference model

    NASA Astrophysics Data System (ADS)

    Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; Purwadi, Adi

    2018-04-01

    This research was conducted at PT. Shamrock Manufacturing Corpora; the company is required to think creatively and to implement a competitive strategy by producing goods/services of higher quality at lower cost. It is therefore necessary to measure Supply Chain Management performance in order to improve competitiveness, and the company must optimize its production output to meet export quality standards. This research begins with the creation of initial dimensions based on the Supply Chain Management process, i.e. Plan, Source, Make, Delivery, and Return, with a hierarchy based on the Supply Chain Operation Reference attributes of Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification becomes the benchmark in performance measurement, whereas Snorm De Boer normalization serves to equalize Key Performance Indicator values. The Analytical Hierarchy Process is used to help determine priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora shows that Responsiveness (0.649) has a higher weight (priority) than the other alternatives. The result of performance analysis using the Supply Chain Operation Reference model indicates that Supply Chain Management performance at PT. Shamrock Manufacturing Corpora is good, since its monitoring score falls within the 50-100 range, which is classified as good.
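The Snorm De Boer normalization referred to above maps each Key Performance Indicator onto a common 0-100 scale before priority weights (e.g. from the Analytical Hierarchy Process) are applied. The formulas and sample numbers below are the standard textbook form, assumed here rather than taken from the paper:

```python
def snorm(score, s_min, s_max, larger_is_better=True):
    """Snorm De Boer normalization of a KPI onto a 0-100 scale."""
    if larger_is_better:
        return (score - s_min) / (s_max - s_min) * 100.0
    return (s_max - score) / (s_max - s_min) * 100.0

def weighted_performance(normalized_scores, weights):
    """Aggregate normalized KPI scores with (e.g. AHP-derived) weights."""
    return sum(s * w for s, w in zip(normalized_scores, weights))

# Hypothetical KPIs: on-time delivery 75% (range 50-100, larger is better)
# and lead time 4 days (range 2-10, smaller is better).
scores = [snorm(75, 50, 100), snorm(4, 2, 10, larger_is_better=False)]
print(scores)  # [50.0, 75.0]
print(weighted_performance(scores, [0.649, 0.351]))  # weighted total score
```

A total on the 0-100 scale can then be read against the same banding the paper uses, where 50-100 is classified as good.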

  14. Customization of a generic 3D model of the distal femur using diagnostic radiographs.

    PubMed

    Schmutz, B; Reynolds, K J; Slavotinek, J P

    2008-01-01

    A method for the customization of a generic 3D model of the distal femur is presented. The customization method involves two steps: acquisition of calibrated orthogonal planar radiographs; and linear scaling of the generic model based on the width of a subject's femoral condyles as measured on the planar radiographs. Planar radiographs of seven intact lower cadaver limbs were obtained. The customized generic models were validated by comparing their surface geometry with that of CT-reconstructed reference models. The overall mean error was 1.2 mm. The results demonstrate that uniform scaling as a first step in the customization process produced a base model of accuracy comparable to other models reported in the literature.
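The linear scaling step described above amounts to multiplying every vertex of the generic mesh by the ratio of the subject's condylar width (measured on the calibrated radiograph) to the generic model's width. The function and numbers below are a minimal sketch of that idea, with hypothetical names and units:

```python
def scale_generic_model(vertices, generic_width_mm, subject_width_mm):
    """Uniformly scale a generic femur mesh so its condylar width
    matches the width measured on the subject's planar radiograph.

    `vertices` is a list of (x, y, z) points; all coordinates are
    multiplied by the same scale factor (uniform/isotropic scaling)."""
    factor = subject_width_mm / generic_width_mm
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Generic condylar width 100 mm, subject measured at 80 mm: factor 0.8.
scaled = scale_generic_model([(10.0, 20.0, 30.0)], 100.0, 80.0)
print(scaled)
```

Validation in the paper compares the surface geometry of such scaled models against CT-reconstructed reference models, yielding an overall mean error of 1.2 mm.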

  15. Assessing the distinguishable cluster approximation based on the triple bond-breaking in the nitrogen molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rishi, Varun; Perera, Ajith; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu

    2016-03-28

    Obtaining the correct potential energy curves for the dissociation of multiple bonds is a challenging problem for ab initio methods which are affected by the choice of a spin-restricted reference function. Coupled cluster (CC) methods such as CCSD (coupled cluster singles and doubles model) and CCSD(T) (CCSD + perturbative triples) correctly predict the geometry and properties at equilibrium but the process of bond dissociation, particularly when more than one bond is simultaneously broken, is much more complicated. New modifications of CC theory suggest that the deleterious role of the reference function can be diminished, provided a particular subset of terms is retained in the CC equations. The Distinguishable Cluster (DC) approach of Kats and Manby [J. Chem. Phys. 139, 021102 (2013)], seemingly overcomes the deficiencies for some bond-dissociation problems and might be of use in quasi-degenerate situations in general. DC along with other approximate coupled cluster methods such as ACCD (approximate coupled cluster doubles), ACP-D45, ACP-D14, 2CC, and pCCSD(α, β) (all defined in text) falls under a category of methods that are basically obtained by the deletion of some quadratic terms in the double excitation amplitude equation for CCD/CCSD (coupled cluster doubles model/coupled cluster singles and doubles model). Here these approximate methods, particularly those based on the DC approach, are studied in detail for the nitrogen molecule bond-breaking. The N{sub 2} problem is further addressed with conventional single reference methods but based on spatial symmetry-broken restricted Hartree–Fock (HF) solutions to assess the use of these references for correlated calculations in the situation where CC methods using fully symmetry adapted SCF solutions fail. The distinguishable cluster method is generalized: 1) to different orbitals for different spins (unrestricted HF based DCD and DCSD), 2) by adding triples correction perturbatively (DCSD(T)) and iteratively (DCSDT-n), and 3) via an excited state approximation through the equation of motion (EOM) approach (EOM-DCD, EOM-DCSD). The EOM-CC method is used to identify lower-energy CC solutions to overcome singularities in the CC potential energy curves. It is also shown that UHF based CC and DC methods behave very similarly in bond-breaking of N{sub 2}, and that using spatially broken but spin preserving SCF references makes the CCSD solutions better than those for DCSD.

  16. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into a multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection, is less than 1.6×10⁻⁸ m²/s² for a mass element with a cross-section area of 10 m × 10 m and a height of 10,000 m. For a mass element with a cross-section area of 1 km × 1 km and a height of 10,000 m the difference is less than 1.5×10⁻⁴ m²/s². Since 1.5×10⁻⁴ m²/s² is equivalent to 1.5×10⁻⁵ m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on the closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.
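The equal-area property that the method relies on can be illustrated with the classical Lambert cylindrical equal-area projection of a sphere (x = Rλ, y = R·sin φ); the paper's multi-cylindrical projection of the ellipsoid is considerably more elaborate, so this is only a sketch of the underlying principle:

```python
import math

R = 6371000.0  # mean Earth radius in metres (illustrative value)

def project(lam, phi):
    """Lambert cylindrical equal-area projection of a sphere
    (angles in radians): x = R*lambda, y = R*sin(phi)."""
    return R * lam, R * math.sin(phi)

def band_area_projected(phi1, phi2):
    """Area of a full latitude band measured in the projected plane."""
    _, y1 = project(0.0, phi1)
    _, y2 = project(0.0, phi2)
    return 2.0 * math.pi * R * (y2 - y1)

def band_area_sphere(phi1, phi2):
    """Exact area of the same latitude band on the sphere:
    2*pi*R^2*(sin(phi2) - sin(phi1))."""
    return 2.0 * math.pi * R ** 2 * (math.sin(phi2) - math.sin(phi1))

p1, p2 = math.radians(30.0), math.radians(60.0)
a, b = band_area_projected(p1, p2), band_area_sphere(p1, p2)
print(abs(a - b) / b < 1e-9)  # True: areas agree to rounding
```

Because areas are preserved, mass elements built on the projected plane carry the same footprint as their ellipsoidal counterparts, which is what makes the closed-form Cartesian Newton integral a faithful substitute.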

  17. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

    Most of the existing studies on wind power output focus on the fluctuation of wind farms, while the spatial self-complementarity of wind power output time series is ignored. The existing probability models therefore cannot reflect the features of a power system incorporating wind farms. This paper analyzes the spatial self-complementarity of wind power and proposes a probability model that reflects the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. The model can provide an important reference for power system simulation incorporating wind farms.
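A minimal, measured-data-driven diurnal model groups observations by hour of day and keeps an empirical statistic per bin; the data and binning below are synthetic illustrations of that general idea, not the paper's improved clustering method:

```python
from collections import defaultdict

def diurnal_model(samples):
    """Build a per-hour empirical model from (hour, power) samples.

    Returns the mean normalized power for each observed hour of day.
    A fuller model would keep the whole per-bin distribution and add a
    seasonal key, as the paper's seasonal/diurnal timescales suggest."""
    bins = defaultdict(list)
    for hour, power in samples:
        bins[hour].append(power)
    return {hour: sum(values) / len(values) for hour, values in bins.items()}

# Synthetic measurements: (hour of day, normalized output in [0, 1]).
samples = [(0, 0.25), (0, 0.75), (12, 0.5), (12, 1.0)]
print(diurnal_model(samples))  # {0: 0.5, 12: 0.75}
```

Clustering days with similar profiles before binning, as the paper does, sharpens the model by preventing dissimilar regimes from being averaged together.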

  18. The Effects of Observation and Intervention on the Judgment of Causal and Correlational Relationships

    DTIC Science & Technology

    2009-07-28

    further referred to as normative models of causation. A second type of model, which is based on Pavlovian classical conditioning, is associative... conditions of high cognitive load), the likelihood of the accuracy of the perception is compromised. If an inaccurate perception translates to an inaccurate... correlation and causation detection in specific military operations and under conditions of operational stress. Background Models of correlation

  19. On Application of the Ostwald-de Waele Model to Description of Non-Newtonian Fluid Flow in the Nip of Counter-Rotating Rolls

    NASA Astrophysics Data System (ADS)

    Shapovalov, V. M.

    2018-05-01

    The accuracy of the Ostwald-de Waele model in solving the problem of roll flow has been assessed by comparing with the "reference" solution for an Ellis fluid. As a result of the analysis, it has been shown that the model based on a power-law equation leads to substantial distortions of the flow pattern.
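The Ostwald-de Waele (power-law) model assessed above relates shear stress to shear rate as τ = K·γ̇ⁿ, with apparent viscosity η = K·γ̇ⁿ⁻¹; the constants below are arbitrary illustrative values, not parameters from the paper:

```python
def power_law_stress(k, n, shear_rate):
    """Shear stress of an Ostwald-de Waele fluid: tau = K * gamma_dot**n."""
    return k * shear_rate ** n

def power_law_viscosity(k, n, shear_rate):
    """Apparent viscosity: eta = K * gamma_dot**(n - 1)."""
    return k * shear_rate ** (n - 1.0)

# Shear-thinning example (n < 1): K = 2 Pa*s^n, n = 0.5, gamma_dot = 4 1/s.
print(power_law_stress(2.0, 0.5, 4.0))     # 4.0 Pa
print(power_law_viscosity(2.0, 0.5, 4.0))  # 1.0 Pa*s
```

The Ellis model used as the "reference" in the paper instead makes viscosity a function of shear stress, η = η₀ / (1 + (τ/τ₁⁄₂)^(α−1)), which captures the low-shear Newtonian plateau that the pure power law misses; that difference is the source of the flow-pattern distortions the abstract reports.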

  20. Coda Calibration Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, Travis; Barno, Justin; Dodge, Doug

    CCT is a Java-based application for calibrating 10 shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.
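The reference moment magnitudes that anchor such a calibration are conventionally related to seismic moment by the IASPEI standard formula Mw = (2/3)·(log₁₀ M₀ − 9.1), with M₀ in N·m; this is the standard seismological relation, assumed here rather than taken from CCT's own documentation:

```python
import math

def moment_magnitude(m0_newton_metres):
    """IASPEI standard moment magnitude from seismic moment in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# A moderate event: M0 = 10**16.1 N*m gives Mw of about 4.67.
print(moment_magnitude(10 ** 16.1))
```

Because the scale is logarithmic, each unit of Mw corresponds to roughly a 32-fold increase in seismic moment, which is why a small set of well-constrained reference MWs can pin down the coda amplitude models over a wide range of event sizes.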

Top